cloud-init-dev team mailing list archive

[Merge] ~chad.smith/cloud-init:ubuntu/zesty into cloud-init:ubuntu/zesty


Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/zesty into cloud-init:ubuntu/zesty.

Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1717969 in cloud-init: "Exhausting the task limit"
  https://bugs.launchpad.net/cloud-init/+bug/1717969
  Bug #1718675 in cloud-init: "It should be possible to add repos in SUSE distros"
  https://bugs.launchpad.net/cloud-init/+bug/1718675
  Bug #1721157 in cloud-init: "netplan render drops bridge_stp setting"
  https://bugs.launchpad.net/cloud-init/+bug/1721157

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/331975

Upstream snapshot pulled into Zesty for SRU
-- 
The attached diff has been truncated due to its size.
Your team cloud-init commiters is requested to review the proposed merge of ~chad.smith/cloud-init:ubuntu/zesty into cloud-init:ubuntu/zesty.
diff --git a/ChangeLog b/ChangeLog
index 80405bc..0260c57 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,425 @@
+17.1:
+ - doc: document GCE datasource. [Arnd Hannemann]
+ - suse: updates to templates to support openSUSE and SLES.
+   [Robert Schweikert] (LP: #1718640)
+ - suse: Copy sysvinit files from redhat with slight changes.
+   [Robert Schweikert] (LP: #1718649)
+ - docs: fix sphinx module schema documentation [Chad Smith]
+ - tests: Add cloudinit package to all test targets [Chad Smith]
+ - Makefile: No longer look for yaml files in obsolete ./bin/.
+ - tests: fix ds-identify unit tests to set EC2_STRICT_ID_DEFAULT.
+ - ec2: Fix maybe_perform_dhcp_discovery to use /var/tmp as a tmpdir
+   [Chad Smith] (LP: #1717627)
+ - Azure: wait longer for SSH pub keys to arrive.
+   [Paul Meyer] (LP: #1717611)
+ - GCE: Fix usage of user-data. (LP: #1717598)
+ - cmdline: add collect-logs subcommand. [Chad Smith] (LP: #1607345)
+ - CloudStack: consider dhclient lease files named with a hyphen.
+   (LP: #1717147)
+ - resizefs: Drop check for read-only device file, do not warn on
+   overlayroot. [Chad Smith]
+ - Do not provide systemd-fsck drop-in which could cause ordering cycles.
+   [Balint Reczey] (LP: #1717477)
+ - tests: Enable the NoCloud KVM platform [Joshua Powers]
+ - resizefs: pass mount point to xfs_growfs [Dusty Mabe]
+ - vmware: Enable nics before sending the SUCCESS event. [Sankar Tanguturi]
+ - cloud-config modules: honor distros definitions in each module
+   [Chad Smith] (LP: #1715738, #1715690)
+ - chef: Add option to pin chef omnibus install version
+   [Ethan Apodaca] (LP: #1462693)
+ - tests: execute: support command as string [Joshua Powers]
+ - schema and docs: Add jsonschema to resizefs and bootcmd modules
+   [Chad Smith]
+ - tools: Add xkvm script, wrapper around qemu-system [Joshua Powers]
+ - vmware customization: return network config format
+   [Sankar Tanguturi] (LP: #1675063)
+ - Ec2: only attempt to operate at local mode on known platforms.
+   (LP: #1715128)
+ - Use /run/cloud-init for tempfile operations. (LP: #1707222)
+ - ds-identify: Make OpenStack return maybe on arch other than intel.
+   (LP: #1715241)
+ - tests: mock missed openstack metadata uri network_data.json
+   [Chad Smith] (LP: #1714376)
+ - relocate tests/unittests/helpers.py to cloudinit/tests
+   [Lars Kellogg-Stedman]
+ - tox: add nose timer output [Joshua Powers]
+ - upstart: do not package upstart jobs, drop ubuntu-init-switch module.
+ - tests: Stop leaking calls through unmocked metadata addresses
+   [Chad Smith] (LP: #1714117)
+ - distro: allow distro to specify a default locale [Ryan Harper]
+ - tests: fix two recently added tests for sles distro.
+ - url_helper: dynamically import oauthlib from inside oauth_headers
+   [Chad Smith]
+ - tox: make xenial environment run with python3.6
+ - suse: Add support for openSUSE and return SLES to a working state.
+   [Robert Schweikert]
+ - GCE: Add a main to the GCE Datasource.
+ - ec2: Add IPv6 dhcp support to Ec2DataSource. [Chad Smith] (LP: #1639030)
+ - url_helper: fail gracefully if oauthlib is not available
+   [Lars Kellogg-Stedman] (LP: #1713760)
+ - cloud-init analyze: fix issues running under python 2. [Andrew Jorgensen]
+ - Configure logging module to always use UTC time.
+   [Ryan Harper] (LP: #1713158)
+ - Log a helpful message if a user script does not include shebang.
+   [Andrew Jorgensen]
+ - cli: Fix command line parsing of conditionally loaded subcommands.
+   [Chad Smith] (LP: #1712676)
+ - doc: Explain error behavior in user data include file format.
+   [Jason Butz]
+ - cc_landscape & cc_puppet: Fix six.StringIO use in writing configs
+   [Chad Smith] (LP: #1699282, #1710932)
+ - schema cli: Add schema subcommand to cloud-init cli and cc_runcmd schema
+   [Chad Smith]
+ - Debian: Remove non-free repositories from apt sources template.
+   [Joonas Kylmälä] (LP: #1700091)
+ - tools: Add tooling for basic cloud-init performance analysis.
+   [Chad Smith] (LP: #1709761)
+ - network: add v2 passthrough and fix parsing v2 config with bonds/bridge
+   params [Ryan Harper] (LP: #1709180)
+ - doc: update capabilities with features available, link doc reference,
+   cli example [Ryan Harper]
+ - vcloud directory: Guest Customization support for passwords
+   [Maitreyee Saikia]
+ - ec2: Allow Ec2 to run in init-local using dhclient in a sandbox.
+   [Chad Smith] (LP: #1709772)
+ - cc_ntp: fallback on timesyncd configuration if ntp is not installable
+   [Ryan Harper] (LP: #1686485)
+ - net: Reduce duplicate code. Have get_interfaces_by_mac use
+   get_interfaces.
+ - tests: Fix build tree integration tests [Joshua Powers]
+ - sysconfig: Don't repeat header when rendering resolv.conf
+   [Ryan Harper] (LP: #1701420)
+ - archlinux: Fix bug with empty dns, do not render 'lo' devices.
+   (LP: #1663045, #1706593)
+ - cloudinit.net: add initialize_network_device function and tests
+   [Chad Smith]
+ - makefile: fix ci-deps-ubuntu target [Chad Smith]
+ - tests: adjust locale integration test to parse default locale.
+ - tests: remove 'yakkety' from releases as it is EOL.
+ - tests: Add initial tests for EC2 and improve a docstring.
+ - locale: Do not re-run locale-gen if provided locale is system default.
+ - archlinux: fix set hostname usage of write_file.
+   [Joshua Powers] (LP: #1705306)
+ - sysconfig: support subnet type of 'manual'.
+ - tools/run-centos: make running with no argument show help.
+ - Drop rand_str() usage in DNS redirection detection
+   [Bob Aman] (LP: #1088611)
+ - sysconfig: use MACADDR on bonds/bridges to configure mac_address
+   [Ryan Harper] (LP: #1701417)
+ - net: eni route rendering missed ipv6 default route config
+   [Ryan Harper] (LP: #1701097)
+ - sysconfig: enable mtu set per subnet, including ipv6 mtu
+   [Ryan Harper] (LP: #1702513)
+ - sysconfig: handle manual type subnets [Ryan Harper] (LP: #1687725)
+ - sysconfig: fix ipv6 gateway routes [Ryan Harper] (LP: #1694801)
+ - sysconfig: fix rendering of bond, bridge and vlan types.
+   [Ryan Harper] (LP: #1695092)
+ - Templatize systemd unit files for cross distro deltas. [Ryan Harper]
+ - sysconfig: ipv6 and default gateway fixes. [Ryan Harper] (LP: #1704872)
+ - net: fix renaming of nics to support mac addresses written in upper
+   case. (LP: #1705147)
+ - tests: fixes for issues uncovered when moving to python 3.6.
+   (LP: #1703697)
+ - sysconfig: include GATEWAY value if set in subnet
+   [Ryan Harper] (LP: #1686856)
+ - Scaleway: add datasource with user and vendor data for Scaleway.
+   [Julien Castets]
+ - Support comments in content read by load_shell_content.
+ - cloudinitlocal fails to run during boot [Hongjiang Zhang]
+ - doc: fix disk setup example table_type options
+   [Sandor Zeestraten] (LP: #1703789)
+ - tools: Fix exception handling. [Joonas Kylmälä] (LP: #1701527)
+ - tests: fix usage of mock in GCE test.
+ - test_gce: Fix invalid mock of platform_reports_gce to return False
+   [Chad Smith]
+ - test: fix incorrect keyid for apt repository.
+   [Joshua Powers] (LP: #1702717)
+ - tests: Update version of pylxd [Joshua Powers]
+ - write_files: Remove log from helper function signatures.
+   [Andrew Jorgensen]
+ - doc: document the cmdline options to NoCloud [Brian Candler]
+ - read_dmi_data: always return None when inside a container. (LP: #1701325)
+ - requirements.txt: remove trailing white space.
+ - Azure: Add network-config, Refactor net layer to handle duplicate macs.
+   [Ryan Harper]
+ - Tests: Simplify the check on ssh-import-id [Joshua Powers]
+ - tests: update ntp tests after sntp added [Joshua Powers]
+ - FreeBSD: Make freebsd a variant, fix unittests and
+   tools/build-on-freebsd.
+ - FreeBSD: fix test failure
+ - FreeBSD: replace ifdown/ifup with "ifconfig down" and "ifconfig up".
+   [Hongjiang Zhang] (LP: #1697815)
+ - FreeBSD: fix cdrom mounting failure if /mnt/cdrom/secure did not exist.
+   [Hongjiang Zhang] (LP: #1696295)
+ - main: Don't use templater to format the welcome message
+   [Andrew Jorgensen]
+ - docs: Automatically generate module docs from schema if present.
+   [Chad Smith]
+ - debian: fix path comment in /etc/hosts template.
+   [Jens Sandmann] (LP: #1606406)
+ - suse: add hostname and fully qualified domain to template.
+   [Jens Sandmann]
+ - write_file(s): Print permissions as octal, not decimal [Andrew Jorgensen]
+ - ci deps: Add --test-distro to read-dependencies to install all deps
+   [Chad Smith]
+ - tools/run-centos: cleanups and move to using read-dependencies
+ - pkg build ci: Add make ci-deps-<distro> target to install pkgs
+   [Chad Smith]
+ - systemd: make cloud-final.service run before apt daily services.
+   (LP: #1693361)
+ - selinux: Allow restorecon to be non-fatal. [Ryan Harper] (LP: #1686751)
+ - net: Allow netinfo subprocesses to return 0 or 1.
+   [Ryan Harper] (LP: #1686751)
+ - net: Allow for NetworkManager configuration [Ryan McCabe] (LP: #1693251)
+ - Use distro release version to determine if we use systemd in redhat spec
+   [Ryan Harper]
+ - net: normalize data in network_state object
+ - Integration Testing: tox env, pylxd 2.2.3, and revamp framework
+   [Wesley Wiedenmeier]
+ - Chef: Update omnibus url to chef.io, minor doc changes. [JJ Asghar]
+ - tools: add centos scripts to build and test [Joshua Powers]
+ - Drop cheetah python module as it is not needed by trunk [Ryan Harper]
+ - rhel/centos spec cleanups.
+ - cloud.cfg: move to a template.  setup.py changes along the way.
+ - Makefile: add deb-src and srpm targets. use PYVER more places.
+ - makefile: fix python 2/3 detection in the Makefile [Chad Smith]
+ - snap: Removing snapcraft plug line [Joshua Powers] (LP: #1695333)
+ - RHEL/CentOS: Fix default routes for IPv4/IPv6 configuration.
+   [Andreas Karis] (LP: #1696176)
+ - test: Fix pyflakes complaint of unused import.
+   [Joshua Powers] (LP: #1695918)
+ - NoCloud: support seed of nocloud from smbios information
+   [Vladimir Pouzanov] (LP: #1691772)
+ - net: when selecting a network device, use natural sort order
+   [Marc-Aurèle Brothier]
+ - fix typos and remove whitespace in various docs [Stephan Telling]
+ - systemd: Fix typo in comment in cloud-init.target. [Chen-Han Hsiao]
+ - Tests: Skip jsonschema related unit tests when dependency is absent.
+   [Chad Smith] (LP: #1695318)
+ - azure: remove accidental duplicate line in merge.
+ - azure: identify platform by well known value in chassis asset tag.
+   [Chad Smith] (LP: #1693939)
+ - tools/net-convert.py: support old cloudinit versions by using kwargs.
+ - ntp: Add schema definition and passive schema validation.
+   [Chad Smith] (LP: #1692916)
+ - Fix eni rendering for bridge params that require repeated key for
+   values. [Ryan Harper]
+ - net: remove systemd link file writing from eni renderer [Ryan Harper]
+ - AliYun: Enable platform identification and enable by default.
+   [Junjie Wang] (LP: #1638931)
+ - net: fix reading and rendering addresses in cidr format.
+   [Dimitri John Ledkov] (LP: #1689346, #1684349)
+ - disk_setup: udev settle before attempting partitioning or fs creation.
+   (LP: #1692093)
+ - GCE: Update the attribute used to find instance SSH keys.
+   [Daniel Watkins] (LP: #1693582)
+ - nplan: For bonds, allow dashed or underscore names of keys.
+   [Dimitri John Ledkov] (LP: #1690480)
+ - python2.6: fix unit tests usage of assertNone and format.
+ - test: update docstring on test_configured_list_with_none
+ - fix tools/ds-identify to not write None twice.
+ - tox/build: do not package depend on style requirements.
+ - cc_ntp: Restructure cc_ntp unit tests. [Chad Smith] (LP: #1692794)
+ - flake8: move the pinned version of flake8 up to 3.3.0
+ - tests: Apply workaround for snapd bug in test case. [Joshua Powers]
+ - RHEL/CentOS: Fix dual stack IPv4/IPv6 configuration.
+   [Andreas Karis] (LP: #1679817, #1685534, #1685532)
+ - disk_setup: fix several issues with gpt disk partitions. (LP: #1692087)
+ - function spelling & docstring update [Joshua Powers]
+ - Fixing wrong file name regression. [Joshua Powers]
+ - tox: move pylint target to 1.7.1
+ - Fix get_interfaces_by_mac for empty macs (LP: #1692028)
+ - DigitalOcean: remove routes except for the public interface.
+   [Ben Howard] (LP: #1681531)
+ - netplan: pass macaddress, when specified, for vlans
+   [Dimitri John Ledkov] (LP: #1690388)
+ - doc: various improvements for the docs on cc_users_groups.
+   [Felix Dreissig]
+ - cc_ntp: write template before installing and add service restart
+   [Ryan Harper] (LP: #1645644)
+ - cloudstack: fix tests to avoid accessing /var/lib/NetworkManager
+   [Lars Kellogg-Stedman]
+ - tests: fix hardcoded path to mkfs.ext4 [Joshua Powers] (LP: #1691517)
+ - Actually skip warnings when .skip file is present.
+   [Chris Brinker] (LP: #1691551)
+ - netplan: fix netplan render_network_state signature.
+   [Dimitri John Ledkov] (LP: #1685944)
+ - Azure: fix reformatting of ephemeral disks on resize to large types.
+   (LP: #1686514)
+ - Revert "tools/net-convert: fix argument order for render_network_state"
+ - make deb: Add devscripts dependency for make deb. Cleanup
+   packages/bddeb. [Chad Smith] (LP: #1685935)
+ - tools/net-convert: fix argument order for render_network_state
+   [Ryan Harper] (LP: #1685944)
+ - openstack: fix log message copy/paste typo in _get_url_settings
+   [Lars Kellogg-Stedman]
+ - unittests: fix unittests run on centos [Joshua Powers]
+ - Improve detection of snappy to include os-release and kernel cmdline.
+   (LP: #1689944)
+ - Add address to config entry generated by _klibc_to_config_entry.
+   [Julien Castets] (LP: #1691135)
+ - sysconfig: Raise ValueError when multiple default gateways are present.
+   [Chad Smith] (LP: #1687485)
+ - FreeBSD: improvements and fixes for use on Azure
+   [Hongjiang Zhang] (LP: #1636345)
+ - Add unit tests for ds-identify, fix Ec2 bug found.
+ - fs_setup: if cmd is specified, use shell interpretation.
+   [Paul Meyer] (LP: #1687712)
+ - doc: document network configuration defaults policy and formats.
+   [Ryan Harper]
+ - Fix name of "uri" key in docs for "cc_apt_configure" module
+   [Felix Dreissig]
+ - tests: Enable artful [Joshua Powers]
+ - nova-lxd: read product_name from environment, not platform.
+   (LP: #1685810)
+ - Fix yum repo config where keys contain array values
+   [Dylan Perry] (LP: #1592150)
+ - template: Update debian backports template [Joshua Powers] (LP: #1627293)
+ - rsyslog: replace ~ with stop [Joshua Powers] (LP: #1367899)
+ - Doc: add additional RTD examples [Joshua Powers] (LP: #1459604)
+ - Fix growpart for some cases when booted with root=PARTUUID.
+   (LP: #1684869)
+ - pylint: update output style to parseable [Joshua Powers]
+ - pylint: fix all logging warnings [Joshua Powers]
+ - CloudStack: Add NetworkManager to list of supported DHCP lease dirs.
+   [Syed]
+ - net: kernel lies about vlans not stealing mac addresses, when they do
+   [Dimitri John Ledkov] (LP: #1682871)
+ - ds-identify: Check correct path for "latest" config drive
+   [Daniel Watkins] (LP: #1673637)
+ - doc: Fix example for resolv.conf configuration.
+   [Jon Grimm] (LP: #1531582)
+ - Fix examples that reference upstream chef repository.
+   [Jon Grimm] (LP: #1678145)
+ - doc: correct grammar and improve clarity in merging documentation.
+   [David Tagatac]
+ - doc: Add missing doc link to snap-config module. [Ryan Harper]
+ - snap: allows for creating cloud-init snap [Joshua Powers]
+ - DigitalOcean: assign IPv4ll address to lowest indexed interface.
+   [Ben Howard]
+ - DigitalOcean: configure all NICs presented in meta-data. [Ben Howard]
+ - Remove (and/or fix) URL shortener references [Jon Grimm] (LP: #1669727)
+ - HACKING.rst: more info on filling out contributors agreement.
+ - util: teach write_file about copy_mode option
+   [Lars Kellogg-Stedman] (LP: #1644064)
+ - DigitalOcean: bind resolvers to loopback interface. [Ben Howard]
+ - tests: fix AltCloud tests to not rely on blkid (LP: #1636531)
+ - OpenStack: add 'dvs' to the list of physical link types. (LP: #1674946)
+ - Fix bug that resulted in an attempt to rename bonds or vlans.
+   (LP: #1669860)
+ - tests: update OpenNebula and Digital Ocean to not rely on host
+   interfaces.
+ - net: in netplan renderer delete known image-builtin content.
+   (LP: #1675576)
+ - doc: correct grammar in capabilities.rst [David Tagatac]
+ - ds-identify: fix detecting of maas datasource. (LP: #1677710)
+ - netplan: remove debugging prints, add debug logging [Ryan Harper]
+ - ds-identify: do not write None twice to datasource_list.
+ - support resizing partition and rootfs on system booted without
+   initramfs. [Steve Langasek] (LP: #1677376)
+ - apt_configure: run only when needed. (LP: #1675185)
+ - OpenStack: identify OpenStack by product 'OpenStack Compute'.
+   (LP: #1675349)
+ - GCE: Search GCE in ds-identify, consider serial number in check.
+   (LP: #1674861)
+ - Add support for setting hashed passwords [Tore S. Lonoy] (LP: #1570325)
+ - Fix filesystem creation when using "partition: auto"
+   [Jonathan Ballet] (LP: #1634678)
+ - ConfigDrive: support reading config drive data from /config-drive.
+   (LP: #1673411)
+ - ds-identify: fix detection of Bigstep datasource. (LP: #1674766)
+ - test: add running of pylint [Joshua Powers]
+ - ds-identify: fix bug where filename expansion was left on.
+ - advertise network config v2 support (NETWORK_CONFIG_V2) in features.
+ - Bigstep: fix bug when executing in python3. [root]
+ - Fix unit test when running in a system deployed with cloud-init.
+ - Bounce network interface for Azure when using the built-in path.
+   [Brent Baude] (LP: #1674685)
+ - cloudinit.net: add network config v2 parsing and rendering [Ryan Harper]
+ - net: Fix incorrect call to isfile [Joshua Powers] (LP: #1674317)
+ - net: add renderers for automatically selecting the renderer.
+ - doc: fix config drive doc with regard to unpartitioned disks.
+   (LP: #1673818)
+ - test: Adding integration test for password as list [Joshua Powers]
+ - render_network_state: switch arguments around, do not require target
+ - support 'loopback' as a device type.
+ - Integration Testing: improve testcase subclassing [Wesley Wiedenmeier]
+ - gitignore: adding doc/rtd_html [Joshua Powers]
+ - doc: add instructions for running integration tests via tox.
+   [Joshua Powers]
+ - test: avoid differences in 'date' output due to daylight savings.
+ - Fix chef config module in omnibus install. [Jeremy Melvin] (LP: #1583837)
+ - Add feature flags to cloudinit.version. [Wesley Wiedenmeier]
+ - tox: add a citest environment
+ - Further fix regression to support 'password' for default user.
+ - fix regression when no chpasswd/list was provided.
+ - Support chpasswd/list being a list in addition to a string.
+   [Sergio Lystopad] (LP: #1665694)
+ - doc: Fix configuration example for cc_set_passwords module.
+   [Sergio Lystopad] (LP: #1665773)
+ - net: support both ipv4 and ipv6 gateways in sysconfig.
+   [Lars Kellogg-Stedman] (LP: #1669504)
+ - net: do not raise exception for > 3 nameservers
+   [Lars Kellogg-Stedman] (LP: #1670052)
+ - ds-identify: report cleanups for config and exit value. (LP: #1669949)
+ - ds-identify: move default setting for Ec2/strict_id to a global.
+ - ds-identify: record not found in cloud.cfg and always add None.
+ - Support warning if the used datasource is not in ds-identify's list.
+ - tools/ds-identify: make report mode write namespaced results.
+ - Move warning functionality to cloudinit/warnings.py
+ - Add profile.d script for showing warnings on login.
+ - Z99-cloud-locale-test.sh: install and make consistent.
+ - tools/ds-identify: look at cloud.cfg when looking for ec2 strict_id.
+ - tools/ds-identify: disable vmware_guest_customization by default.
+ - tools/ds-identify: ovf identify vmware guest customization.
+ - Identify Brightbox as an Ec2 datasource user. (LP: #1661693)
+ - DatasourceEc2: add warning message when not on AWS.
+ - ds-identify: add reading of datasource/Ec2/strict_id
+ - tools/ds-identify: add support for found or maybe contributing config.
+ - tools/ds-identify: read the seed directory on Ec2
+ - tools/ds-identify: use quotes in local declarations.
+ - tools/ds-identify: fix documentation of policy setting in a comment.
+ - ds-identify: only run once per boot unless --force is given.
+ - flake8: fix flake8 complaints in previous commit.
+ - net: correct errors in cloudinit/net/sysconfig.py
+   [Lars Kellogg-Stedman] (LP: #1665441)
+ - ec2_utils: fix MetadataLeafDecoder that returned bytes on empty
+ - apply the runtime configuration written by ds-identify.
+ - ds-identify: fix checking for filesystem label (LP: #1663735)
+ - ds-identify: read ds=nocloud properly (LP: #1663723)
+ - support nova-lxd by reading platform from environment of pid 1.
+   (LP: #1661797)
+ - ds-identify: change aarch64 to use the default for non-dmi systems.
+ - Remove style checking during build and add latest style checks to tox
+   [Joshua Powers] (LP: #1652329)
+ - code-style: make master pass pycodestyle (2.3.1) cleanly, currently:
+   [Joshua Powers]
+ - manual_cache_clean: When manually cleaning, touch a file in instance dir.
+ - Add tools/ds-identify to identify datasources available.
+ - Fix small typo and change iso-filename for consistency [Robin Naundorf]
+ - Fix eni rendering of multiple IPs per interface
+   [Ryan Harper] (LP: #1657940)
+ - tools/mock-meta: support python2 or python3 and ipv6 in both.
+ - tests: remove executable bit on test_net, so it runs, and fix it.
+ - tests: No longer monkey patch httpretty for python 3.4.2
+ - Add 3 ecdsa-sha2-nistp* ssh key types now that they are standardized
+   [Lars Kellogg-Stedman] (LP: #1658174)
+ - reset httpretty for each test [Lars Kellogg-Stedman] (LP: #1658200)
+ - build: fix running Make on a branch with tags other than master
+ - EC2: Do not cache security credentials on disk
+   [Andrew Jorgensen] (LP: #1638312)
+ - doc: Fix typos and clarify some aspects of the part-handler
+   [Erik M. Bray]
+ - doc: add some documentation on OpenStack datasource.
+ - OpenStack: Use timeout and retries from config in get_data.
+   [Lars Kellogg-Stedman] (LP: #1657130)
+ - Fixed Misc issues related to VMware customization. [Sankar Tanguturi]
+ - Fix minor docs typo: perserve > preserve [Jeremy Bicha]
+ - Use dnf instead of yum when available
+   [Lars Kellogg-Stedman] (LP: #1647118)
+ - validate-yaml: use python rather than explicitly python3
+ - Get early logging logged, including failures of cmdline url.
+
 0.7.9:
  - doc: adjust headers in tests documentation for consistency.
  - pep8: fix issue found in zesty build with pycodestyle.
diff --git a/Makefile b/Makefile
index f280911..4ace227 100644
--- a/Makefile
+++ b/Makefile
@@ -4,7 +4,7 @@ PYVER ?= $(shell for p in python3 python2; do \
 
 noseopts ?= -v
 
-YAML_FILES=$(shell find cloudinit bin tests tools -name "*.yaml" -type f )
+YAML_FILES=$(shell find cloudinit tests tools -name "*.yaml" -type f )
 YAML_FILES+=$(shell find doc/examples -name "cloud-config*.txt" -type f )
 
 PIP_INSTALL := pip install
@@ -48,10 +48,10 @@ pyflakes3:
 	@$(CWD)/tools/run-pyflakes3
 
 unittest: clean_pyc
-	nosetests $(noseopts) tests/unittests
+	nosetests $(noseopts) tests/unittests cloudinit
 
 unittest3: clean_pyc
-	nosetests3 $(noseopts) tests/unittests
+	nosetests3 $(noseopts) tests/unittests cloudinit
 
 ci-deps-ubuntu:
 	@$(PYVER) $(CWD)/tools/read-dependencies --distro ubuntu --test-distro
diff --git a/cloudinit/analyze/__init__.py b/cloudinit/analyze/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cloudinit/analyze/__init__.py
diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
new file mode 100644
index 0000000..69b9e43
--- /dev/null
+++ b/cloudinit/analyze/__main__.py
@@ -0,0 +1,155 @@
+# Copyright (C) 2017 Canonical Ltd.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import argparse
+import re
+import sys
+
+from . import dump
+from . import show
+
+
+def get_parser(parser=None):
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog='cloudinit-analyze',
+            description='Devel tool: Analyze cloud-init logs and data')
+    subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
+    subparsers.required = True
+
+    parser_blame = subparsers.add_parser(
+        'blame', help='Print list of executed stages ordered by time to init')
+    parser_blame.add_argument(
+        '-i', '--infile', action='store', dest='infile',
+        default='/var/log/cloud-init.log',
+        help='specify where to read input.')
+    parser_blame.add_argument(
+        '-o', '--outfile', action='store', dest='outfile', default='-',
+        help='specify where to write output. ')
+    parser_blame.set_defaults(action=('blame', analyze_blame))
+
+    parser_show = subparsers.add_parser(
+        'show', help='Print list of in-order events during execution')
+    parser_show.add_argument('-f', '--format', action='store',
+                             dest='print_format', default='%I%D @%Es +%ds',
+                             help='specify formatting of output.')
+    parser_show.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input.')
+    parser_show.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output.')
+    parser_show.set_defaults(action=('show', analyze_show))
+    parser_dump = subparsers.add_parser(
+        'dump', help='Dump cloud-init events in JSON format')
+    parser_dump.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input. ')
+    parser_dump.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output. ')
+    parser_dump.set_defaults(action=('dump', analyze_dump))
+    return parser
+
+
+def analyze_blame(name, args):
+    """Report a list of records sorted by largest time delta.
+
+    For example:
+      30.210s (init-local) searching for datasource
+       8.706s (init-network) reading and applying user-data
+        166ms (modules-config) ....
+        807us (modules-final) ...
+
+    We generate event records by parsing cloud-init logs, format them with
+    blame_format, and sort them by time delta ('delta'), largest first.
+    """
+    (infh, outfh) = configure_io(args)
+    blame_format = '     %ds (%n)'
+    r = re.compile(r'(^\s+\d+\.\d+)', re.MULTILINE)
+    for idx, record in enumerate(show.show_events(_get_events(infh),
+                                                  blame_format)):
+        srecs = sorted(filter(r.match, record), reverse=True)
+        outfh.write('-- Boot Record %02d --\n' % (idx + 1))
+        outfh.write('\n'.join(srecs) + '\n')
+        outfh.write('\n')
+    outfh.write('%d boot records analyzed\n' % (idx + 1))
+
+
+def analyze_show(name, args):
+    """Generate output records using the 'standard' format to printing events.
+
+    Example output follows:
+        Starting stage: (init-local)
+          ...
+        Finished stage: (init-local) 0.105195 seconds
+
+        Starting stage: (init-network)
+          ...
+        Finished stage: (init-network) 0.339024 seconds
+
+        Starting stage: (modules-config)
+          ...
+        Finished stage: (modules-config) 0.NNN seconds
+
+        Starting stage: (modules-final)
+          ...
+        Finished stage: (modules-final) 0.NNN seconds
+    """
+    (infh, outfh) = configure_io(args)
+    for idx, record in enumerate(show.show_events(_get_events(infh),
+                                                  args.print_format)):
+        outfh.write('-- Boot Record %02d --\n' % (idx + 1))
+        outfh.write('The total time elapsed since completing an event is'
+                    ' printed after the "@" character.\n')
+        outfh.write('The time the event takes is printed after the "+" '
+                    'character.\n\n')
+        outfh.write('\n'.join(record) + '\n')
+    outfh.write('%d boot records analyzed\n' % (idx + 1))
+
+
+def analyze_dump(name, args):
+    """Dump cloud-init events in json format"""
+    (infh, outfh) = configure_io(args)
+    outfh.write(dump.json_dumps(_get_events(infh)) + '\n')
+
+
+def _get_events(infile):
+    rawdata = None
+    events, rawdata = show.load_events(infile, None)
+    if not events:
+        events, _ = dump.dump_events(rawdata=rawdata)
+    return events
+
+
+def configure_io(args):
+    """Common parsing and setup of input/output files"""
+    if args.infile == '-':
+        infh = sys.stdin
+    else:
+        try:
+            infh = open(args.infile, 'r')
+        except OSError:
+            sys.stderr.write('Cannot open file %s\n' % args.infile)
+            sys.exit(1)
+
+    if args.outfile == '-':
+        outfh = sys.stdout
+    else:
+        try:
+            outfh = open(args.outfile, 'w')
+        except OSError:
+            sys.stderr.write('Cannot open file %s\n' % args.outfile)
+            sys.exit(1)
+
+    return (infh, outfh)
+
+
+if __name__ == '__main__':
+    parser = get_parser()
+    args = parser.parse_args()
+    (name, action_functor) = args.action
+    action_functor(name, args)
+
+# vi: ts=4 expandtab
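For review convenience, a minimal sketch of how the parser above dispatches a
subcommand (a hypothetical direct invocation; in the packaged tree this parser
is reached through the cloud-init CLI wiring):

    from cloudinit.analyze.__main__ import get_parser

    # set_defaults() bound ('blame', analyze_blame) to args.action, so
    # dispatch is a tuple unpack plus a call.
    parser = get_parser()
    args = parser.parse_args(['blame', '-i', '/var/log/cloud-init.log'])
    (name, action_functor) = args.action
    action_functor(name, args)  # writes the sorted blame report to stdout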
diff --git a/cloudinit/analyze/dump.py b/cloudinit/analyze/dump.py
new file mode 100644
index 0000000..ca4da49
--- /dev/null
+++ b/cloudinit/analyze/dump.py
@@ -0,0 +1,176 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import calendar
+from datetime import datetime
+import json
+import sys
+
+from cloudinit import util
+
+stage_to_description = {
+    'finished': 'finished running cloud-init',
+    'init-local': 'starting search for local datasources',
+    'init-network': 'searching for network datasources',
+    'init': 'searching for network datasources',
+    'modules-config': 'running config modules',
+    'modules-final': 'finalizing modules',
+    'modules': 'running modules for',
+    'single': 'running single module ',
+}
+
+# logger's asctime format
+CLOUD_INIT_ASCTIME_FMT = "%Y-%m-%d %H:%M:%S,%f"
+
+# journalctl -o short-precise
+CLOUD_INIT_JOURNALCTL_FMT = "%b %d %H:%M:%S.%f %Y"
+
+# other
+DEFAULT_FMT = "%b %d %H:%M:%S %Y"
+
+
+def parse_timestamp(timestampstr):
+    # default syslog time does not include the current year
+    months = [calendar.month_abbr[m] for m in range(1, 13)]
+    if timestampstr.split()[0] in months:
+        # Aug 29 22:55:26
+        FMT = DEFAULT_FMT
+        if '.' in timestampstr:
+            FMT = CLOUD_INIT_JOURNALCTL_FMT
+        dt = datetime.strptime(timestampstr + " " +
+                               str(datetime.now().year),
+                               FMT)
+        timestamp = dt.strftime("%s.%f")
+    elif "," in timestampstr:
+        # 2016-09-12 14:39:20,839
+        dt = datetime.strptime(timestampstr, CLOUD_INIT_ASCTIME_FMT)
+        timestamp = dt.strftime("%s.%f")
+    else:
+        # allow date(1) to handle other formats we don't expect
+        timestamp = parse_timestamp_from_date(timestampstr)
+
+    return float(timestamp)
+
+
+def parse_timestamp_from_date(timestampstr):
+    out, _ = util.subp(['date', '+%s.%3N', '-d', timestampstr])
+    timestamp = out.strip()
+    return float(timestamp)
+
+
+def parse_ci_logline(line):
+    # Stage Starts:
+    # Cloud-init v. 0.7.7 running 'init-local' at \
+    #               Fri, 02 Sep 2016 19:28:07 +0000. Up 1.0 seconds.
+    # Cloud-init v. 0.7.7 running 'init' at \
+    #               Fri, 02 Sep 2016 19:28:08 +0000. Up 2.0 seconds.
+    # Cloud-init v. 0.7.7 finished at
+    # Aug 29 22:55:26 test1 [CLOUDINIT] handlers.py[DEBUG]: \
+    #               finish: modules-final: SUCCESS: running modules for final
+    # 2016-08-30T21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: \
+    #               finish: modules-final: SUCCESS: running modules for final
+    #
+    # Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]: \
+    #               Cloud-init v. 0.7.8 running 'init-local' at \
+    #               Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds.
+    #
+    # 2017-05-22 18:02:01,088 - util.py[DEBUG]: Cloud-init v. 0.7.9 running \
+    #         'init-local' at Mon, 22 May 2017 18:02:01 +0000. Up 2.0 seconds.
+
+    separators = [' - ', ' [CLOUDINIT] ']
+    found = False
+    for sep in separators:
+        if sep in line:
+            found = True
+            break
+
+    if not found:
+        return None
+
+    (timehost, eventstr) = line.split(sep)
+
+    # journalctl -o short-precise
+    if timehost.endswith(":"):
+        timehost = " ".join(timehost.split()[0:-1])
+
+    if "," in timehost:
+        timestampstr, extra = timehost.split(",")
+        timestampstr += ",%s" % extra.split()[0]
+        if ' ' in extra:
+            hostname = extra.split()[-1]
+    else:
+        hostname = timehost.split()[-1]
+        timestampstr = timehost.split(hostname)[0].strip()
+    if 'Cloud-init v.' in eventstr:
+        event_type = 'start'
+        if 'running' in eventstr:
+            stage_and_timestamp = eventstr.split('running')[1].lstrip()
+            event_name, _ = stage_and_timestamp.split(' at ')
+            event_name = event_name.replace("'", "").replace(":", "-")
+            if event_name == "init":
+                event_name = "init-network"
+        else:
+            # don't generate a start for the 'finished at' banner
+            return None
+        event_description = stage_to_description[event_name]
+    else:
+        (pymodloglvl, event_type, event_name) = eventstr.split()[0:3]
+        event_description = eventstr.split(event_name)[1].strip()
+
+    event = {
+        'name': event_name.rstrip(":"),
+        'description': event_description,
+        'timestamp': parse_timestamp(timestampstr),
+        'origin': 'cloudinit',
+        'event_type': event_type.rstrip(":"),
+    }
+    if event['event_type'] == "finish":
+        result = event_description.split(":")[0]
+        desc = event_description.split(result)[1].lstrip(':').strip()
+        event['result'] = result
+        event['description'] = desc.strip()
+
+    return event
+
+
+def json_dumps(data):
+    return json.dumps(data, indent=1, sort_keys=True,
+                      separators=(',', ': '))
+
+
+def dump_events(cisource=None, rawdata=None):
+    events = []
+    event = None
+    CI_EVENT_MATCHES = ['start:', 'finish:', 'Cloud-init v.']
+
+    if not any([cisource, rawdata]):
+        raise ValueError('Either cisource or rawdata parameters are required')
+
+    if rawdata:
+        data = rawdata.splitlines()
+    else:
+        data = cisource.readlines()
+
+    for line in data:
+        for match in CI_EVENT_MATCHES:
+            if match in line:
+                try:
+                    event = parse_ci_logline(line)
+                except ValueError:
+                    sys.stderr.write('Skipping invalid entry\n')
+                if event:
+                    events.append(event)
+
+    return events, data
+
+
+def main():
+    if len(sys.argv) > 1:
+        cisource = open(sys.argv[1])
+    else:
+        cisource = sys.stdin
+
+    events, _ = dump_events(cisource)
+    return json_dumps(events)
+
+
+if __name__ == "__main__":
+    print(main())
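A short sketch of what parse_ci_logline() yields for the last log format
listed in the comment block above; the exact timestamp float depends on the
host timezone, since parse_timestamp() formats with strftime('%s.%f'):

    from cloudinit.analyze.dump import parse_ci_logline

    line = ("2017-05-22 18:02:01,088 - util.py[DEBUG]: Cloud-init v. 0.7.9"
            " running 'init-local' at Mon, 22 May 2017 18:02:01 +0000."
            " Up 2.0 seconds.")
    event = parse_ci_logline(line)
    # -> {'name': 'init-local', 'event_type': 'start', 'origin': 'cloudinit',
    #     'description': 'starting search for local datasources',
    #     'timestamp': <epoch float, timezone dependent>}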
diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
new file mode 100644
index 0000000..3e778b8
--- /dev/null
+++ b/cloudinit/analyze/show.py
@@ -0,0 +1,207 @@
+#   Copyright (C) 2016 Canonical Ltd.
+#
+#   Author: Ryan Harper <ryan.harper@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import base64
+import datetime
+import json
+import os
+
+from cloudinit import util
+
+#  An event:
+'''
+{
+    "description": "executing late commands",
+    "event_type": "start",
+    "level": "INFO",
+    "name": "cmd-install/stage-late",
+    "origin": "cloudinit",
+    "timestamp": 1461164249.1590767
+},
+{
+    "description": "executing late commands",
+    "event_type": "finish",
+    "level": "INFO",
+    "name": "cmd-install/stage-late",
+    "origin": "cloudinit",
+    "result": "SUCCESS",
+    "timestamp": 1461164249.1590767
+}
+
+'''
+format_key = {
+    '%d': 'delta',
+    '%D': 'description',
+    '%E': 'elapsed',
+    '%e': 'event_type',
+    '%I': 'indent',
+    '%l': 'level',
+    '%n': 'name',
+    '%o': 'origin',
+    '%r': 'result',
+    '%t': 'timestamp',
+    '%T': 'total_time',
+}
+
+formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v)
+                           for k, v in format_key.items()])
+
+
+def format_record(msg, event):
+    for i, j in format_key.items():
+        if i in msg:
+            # ensure consistent formatting of time values
+            if j in ['delta', 'elapsed', 'timestamp']:
+                msg = msg.replace(i, "{%s:08.5f}" % j)
+            else:
+                msg = msg.replace(i, "{%s}" % j)
+    return msg.format(**event)
+
+
+def dump_event_files(event):
+    content = dict((k, v) for k, v in event.items() if k not in ['content'])
+    files = content['files']
+    saved = []
+    for f in files:
+        fname = f['path']
+        fn_local = os.path.basename(fname)
+        fcontent = base64.b64decode(f['content']).decode('ascii')
+        util.write_file(fn_local, fcontent)
+        saved.append(fn_local)
+
+    return saved
+
+
+def event_name(event):
+    if event:
+        return event.get('name')
+    return None
+
+
+def event_type(event):
+    if event:
+        return event.get('event_type')
+    return None
+
+
+def event_parent(event):
+    if event:
+        return event_name(event).split("/")[0]
+    return None
+
+
+def event_timestamp(event):
+    return float(event.get('timestamp'))
+
+
+def event_datetime(event):
+    return datetime.datetime.utcfromtimestamp(event_timestamp(event))
+
+
+def delta_seconds(t1, t2):
+    return (t2 - t1).total_seconds()
+
+
+def event_duration(start, finish):
+    return delta_seconds(event_datetime(start), event_datetime(finish))
+
+
+def event_record(start_time, start, finish):
+    record = finish.copy()
+    record.update({
+        'delta': event_duration(start, finish),
+        'elapsed': delta_seconds(start_time, event_datetime(start)),
+        'indent': '|' + ' ' * (event_name(start).count('/') - 1) + '`->',
+    })
+
+    return record
+
+
+def total_time_record(total_time):
+    return 'Total Time: %3.5f seconds\n' % total_time
+
+
+def generate_records(events, blame_sort=False,
+                     print_format="(%n) %d seconds in %I%D",
+                     dump_files=False, log_datafiles=False):
+
+    sorted_events = sorted(events, key=lambda x: x['timestamp'])
+    records = []
+    start_time = None
+    total_time = 0.0
+    stage_start_time = {}
+    stages_seen = []
+    boot_records = []
+
+    unprocessed = []
+    for e in range(0, len(sorted_events)):
+        event = sorted_events[e]
+        try:
+            next_evt = sorted_events[e + 1]
+        except IndexError:
+            next_evt = None
+
+        if event_type(event) == 'start':
+            if event.get('name') in stages_seen:
+                records.append(total_time_record(total_time))
+                boot_records.append(records)
+                records = []
+                start_time = None
+                total_time = 0.0
+
+            if start_time is None:
+                stages_seen = []
+                start_time = event_datetime(event)
+                stage_start_time[event_parent(event)] = start_time
+
+            # see if we have a pair
+            if event_name(event) == event_name(next_evt):
+                if event_type(next_evt) == 'finish':
+                    records.append(format_record(print_format,
+                                                 event_record(start_time,
+                                                              event,
+                                                              next_evt)))
+            else:
+                # This is a parent event
+                records.append("Starting stage: %s" % event.get('name'))
+                unprocessed.append(event)
+                stages_seen.append(event.get('name'))
+                continue
+        else:
+            prev_evt = unprocessed.pop()
+            if event_name(event) == event_name(prev_evt):
+                record = event_record(start_time, prev_evt, event)
+                records.append(format_record("Finished stage: "
+                                             "(%n) %d seconds ",
+                                             record) + "\n")
+                total_time += record.get('delta')
+            else:
+                # not a match, put it back
+                unprocessed.append(prev_evt)
+
+    records.append(total_time_record(total_time))
+    boot_records.append(records)
+    return boot_records
+
+
+def show_events(events, print_format):
+    return generate_records(events, print_format=print_format)
+
+
+def load_events(infile, rawdata=None):
+    if rawdata:
+        data = rawdata.read()
+    else:
+        data = infile.read()
+
+    j = None
+    try:
+        j = json.loads(data)
+    except ValueError:
+        pass
+
+    return j, data
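To illustrate the format_key table above, a small sketch of format_record()
using the default 'show' format string; the event values are invented for the
example:

    from cloudinit.analyze.show import format_record

    event = {'indent': '|`->', 'description': 'searching for datasource',
             'elapsed': 1.0, 'delta': 30.21012}
    print(format_record('%I%D @%Es +%ds', event))
    # |`->searching for datasource @01.00000s +30.21012s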
diff --git a/cloudinit/analyze/tests/test_dump.py b/cloudinit/analyze/tests/test_dump.py
new file mode 100644
index 0000000..f4c4284
--- /dev/null
+++ b/cloudinit/analyze/tests/test_dump.py
@@ -0,0 +1,210 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from datetime import datetime
+from textwrap import dedent
+
+from cloudinit.analyze.dump import (
+    dump_events, parse_ci_logline, parse_timestamp)
+from cloudinit.util import subp, write_file
+from cloudinit.tests.helpers import CiTestCase
+
+
+class TestParseTimestamp(CiTestCase):
+
+    def test_parse_timestamp_handles_cloud_init_default_format(self):
+        """Logs with cloud-init detailed formats will be properly parsed."""
+        trusty_fmt = '%Y-%m-%d %H:%M:%S,%f'
+        trusty_stamp = '2016-09-12 14:39:20,839'
+
+        parsed = parse_timestamp(trusty_stamp)
+
+        # convert ourselves
+        dt = datetime.strptime(trusty_stamp, trusty_fmt)
+        expected = float(dt.strftime('%s.%f'))
+
+        # use date(1)
+        out, _err = subp(['date', '+%s.%3N', '-d', trusty_stamp])
+        timestamp = out.strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+    def test_parse_timestamp_handles_syslog_adding_year(self):
+        """Syslog timestamps lack a year. Add year and properly parse."""
+        syslog_fmt = '%b %d %H:%M:%S %Y'
+        syslog_stamp = 'Aug 08 15:12:51'
+
+        # convert stamp ourselves by adding the missing year value
+        year = datetime.now().year
+        dt = datetime.strptime(syslog_stamp + " " + str(year), syslog_fmt)
+        expected = float(dt.strftime('%s.%f'))
+        parsed = parse_timestamp(syslog_stamp)
+
+        # use date(1)
+        out, _ = subp(['date', '+%s.%3N', '-d', syslog_stamp])
+        timestamp = out.strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+    def test_parse_timestamp_handles_journalctl_format_adding_year(self):
+        """Journalctl precise timestamps lack a year. Add year and parse."""
+        journal_fmt = '%b %d %H:%M:%S.%f %Y'
+        journal_stamp = 'Aug 08 17:15:50.606811'
+
+        # convert stamp ourselves by adding the missing year value
+        year = datetime.now().year
+        dt = datetime.strptime(journal_stamp + " " + str(year), journal_fmt)
+        expected = float(dt.strftime('%s.%f'))
+        parsed = parse_timestamp(journal_stamp)
+
+        # use date(1)
+        out, _ = subp(['date', '+%s.%6N', '-d', journal_stamp])
+        timestamp = out.strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+    def test_parse_unexpected_timestamp_format_with_date_command(self):
+        """Dump sends unexpected timestamp formats to data for processing."""
+        new_fmt = '%H:%M %m/%d %Y'
+        new_stamp = '17:15 08/08'
+
+        # convert stamp ourselves by adding the missing year value
+        year = datetime.now().year
+        dt = datetime.strptime(new_stamp + " " + str(year), new_fmt)
+        expected = float(dt.strftime('%s.%f'))
+        parsed = parse_timestamp(new_stamp)
+
+        # use date(1)
+        out, _ = subp(['date', '+%s.%6N', '-d', new_stamp])
+        timestamp = out.strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+
+class TestParseCILogLine(CiTestCase):
+
+    def test_parse_logline_returns_none_without_separators(self):
+        """When no separators are found, parse_ci_logline returns None."""
+        expected_parse_ignores = [
+            '', '-', 'adsf-asdf', '2017-05-22 18:02:01,088', 'CLOUDINIT']
+        for parse_ignores in expected_parse_ignores:
+            self.assertIsNone(parse_ci_logline(parse_ignores))
+
+    def test_parse_logline_returns_event_for_cloud_init_logs(self):
+        """parse_ci_logline returns an event parse from cloud-init format."""
+        line = (
+            "2017-08-08 20:05:07,147 - util.py[DEBUG]: Cloud-init v. 0.7.9"
+            " running 'init-local' at Tue, 08 Aug 2017 20:05:07 +0000. Up"
+            " 6.26 seconds.")
+        dt = datetime.strptime(
+            '2017-08-08 20:05:07,147', '%Y-%m-%d %H:%M:%S,%f')
+        timestamp = float(dt.strftime('%s.%f'))
+        expected = {
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': timestamp}
+        self.assertEqual(expected, parse_ci_logline(line))
+
+    def test_parse_logline_returns_event_for_journalctl_logs(self):
+        """parse_ci_logline returns an event parse from journalctl format."""
+        line = ("Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT]"
+                " util.py[DEBUG]: Cloud-init v. 0.7.8 running 'init-local' at"
+                "  Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds.")
+        year = datetime.now().year
+        dt = datetime.strptime(
+            'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y')
+        timestamp = float(dt.strftime('%s.%f'))
+        expected = {
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': timestamp}
+        self.assertEqual(expected, parse_ci_logline(line))
+
+    def test_parse_logline_returns_event_for_finish_events(self):
+        """parse_ci_logline returns a finish event for a parsed log line."""
+        line = ('2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT]'
+                ' handlers.py[DEBUG]: finish: modules-final: SUCCESS: running'
+                ' modules for final')
+        expected = {
+            'description': 'running modules for final',
+            'event_type': 'finish',
+            'name': 'modules-final',
+            'origin': 'cloudinit',
+            'result': 'SUCCESS',
+            'timestamp': 1472594005.972}
+        self.assertEqual(expected, parse_ci_logline(line))
+
+
+SAMPLE_LOGS = dedent("""\
+Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]:\
+ Cloud-init v. 0.7.8 running 'init-local' at Thu, 03 Nov 2016\
+ 06:51:06 +0000. Up 1.0 seconds.
+2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: finish:\
+ modules-final: SUCCESS: running modules for final
+""")
+
+
+class TestDumpEvents(CiTestCase):
+    maxDiff = None
+
+    def test_dump_events_with_rawdata(self):
+        """Rawdata is split and parsed into a tuple of events and data"""
+        events, data = dump_events(rawdata=SAMPLE_LOGS)
+        expected_data = SAMPLE_LOGS.splitlines()
+        year = datetime.now().year
+        dt1 = datetime.strptime(
+            'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y')
+        timestamp1 = float(dt1.strftime('%s.%f'))
+        expected_events = [{
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': timestamp1}, {
+            'description': 'running modules for final',
+            'event_type': 'finish',
+            'name': 'modules-final',
+            'origin': 'cloudinit',
+            'result': 'SUCCESS',
+            'timestamp': 1472594005.972}]
+        self.assertEqual(expected_events, events)
+        self.assertEqual(expected_data, data)
+
+    def test_dump_events_with_cisource(self):
+        """Cisource file is read and parsed into a tuple of events and data."""
+        tmpfile = self.tmp_path('logfile')
+        write_file(tmpfile, SAMPLE_LOGS)
+        events, data = dump_events(cisource=open(tmpfile))
+        year = datetime.now().year
+        dt1 = datetime.strptime(
+            'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y')
+        timestamp1 = float(dt1.strftime('%s.%f'))
+        expected_events = [{
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': timestamp1}, {
+            'description': 'running modules for final',
+            'event_type': 'finish',
+            'name': 'modules-final',
+            'origin': 'cloudinit',
+            'result': 'SUCCESS',
+            'timestamp': 1472594005.972}]
+        self.assertEqual(expected_events, events)
+        self.assertEqual(SAMPLE_LOGS.splitlines(), [d.strip() for d in data])
diff --git a/cloudinit/apport.py b/cloudinit/apport.py
new file mode 100644
index 0000000..221f341
--- /dev/null
+++ b/cloudinit/apport.py
@@ -0,0 +1,105 @@
+# Copyright (C) 2017 Canonical Ltd.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+'''Cloud-init apport interface'''
+
+try:
+    from apport.hookutils import (
+        attach_file, attach_root_command_outputs, root_command_output)
+    has_apport = True
+except ImportError:
+    has_apport = False
+
+
+KNOWN_CLOUD_NAMES = [
+    'Amazon - Ec2', 'AliYun', 'AltCloud', 'Azure', 'Bigstep', 'CloudSigma',
+    'CloudStack', 'DigitalOcean', 'GCE - Google Compute Engine', 'MAAS',
+    'NoCloud', 'OpenNebula', 'OpenStack', 'OVF', 'Scaleway', 'SmartOS',
+    'VMware', 'Other']
+
+# Potentially clear text collected logs
+CLOUDINIT_LOG = '/var/log/cloud-init.log'
+CLOUDINIT_OUTPUT_LOG = '/var/log/cloud-init-output.log'
+USER_DATA_FILE = '/var/lib/cloud/instance/user-data.txt'  # Optional
+
+
+def attach_cloud_init_logs(report, ui=None):
+    '''Attach cloud-init logs and tarfile from 'cloud-init collect-logs'.'''
+    attach_root_command_outputs(report, {
+        'cloud-init-log-warnings':
+            'egrep -i "warn|error" /var/log/cloud-init.log',
+        'cloud-init-output.log.txt': 'cat /var/log/cloud-init-output.log'})
+    root_command_output(
+        ['cloud-init', 'collect-logs', '-t', '/tmp/cloud-init-logs.tgz'])
+    attach_file(report, '/tmp/cloud-init-logs.tgz', 'logs.tgz')
+
+
+def attach_hwinfo(report, ui=None):
+    '''Optionally attach hardware info from lshw.'''
+    prompt = (
+        'Your device details (lshw) may be useful to developers when'
+        ' addressing this bug, but gathering it requires admin privileges.'
+        ' Would you like to include this info?')
+    if ui and ui.yesno(prompt):
+        attach_root_command_outputs(report, {'lshw.txt': 'lshw'})
+
+
+def attach_cloud_info(report, ui=None):
+    '''Prompt for cloud details if available.'''
+    if ui:
+        prompt = 'Is this machine running in a cloud environment?'
+        response = ui.yesno(prompt)
+        if response is None:
+            raise StopIteration  # User cancelled
+        if response:
+            prompt = ('Please select the cloud vendor or environment in which'
+                      ' this instance is running')
+            response = ui.choice(prompt, KNOWN_CLOUD_NAMES)
+            if response:
+                report['CloudName'] = KNOWN_CLOUD_NAMES[response[0]]
+            else:
+                report['CloudName'] = 'None'
+
+
+def attach_user_data(report, ui=None):
+    '''Optionally provide user-data if desired.'''
+    if ui:
+        prompt = (
+            'Your user-data or cloud-config file can optionally be provided'
+            ' from {0} and could be useful to developers when addressing this'
+            ' bug. Do you wish to attach user-data to this bug?'.format(
+                USER_DATA_FILE))
+        response = ui.yesno(prompt)
+        if response is None:
+            raise StopIteration  # User cancelled
+        if response:
+            attach_file(report, USER_DATA_FILE, 'user_data.txt')
+
+
+def add_bug_tags(report):
+    '''Add any appropriate tags to the bug.'''
+    if 'JournalErrors' in report.keys():
+        errors = report['JournalErrors']
+        if 'Breaking ordering cycle' in errors:
+            report['Tags'] = 'systemd-ordering'
+
+
+def add_info(report, ui):
+    '''This is an entry point to run cloud-init's apport functionality.
+
+    Distros which want apport support will have a cloud-init package-hook at
+    /usr/share/apport/package-hooks/cloud-init.py which defines an add_info
+    function and returns the result of cloudinit.apport.add_info(report, ui).
+    '''
+    if not has_apport:
+        raise RuntimeError(
+            'No apport imports discovered. Apport functionality disabled')
+    attach_cloud_init_logs(report, ui)
+    attach_hwinfo(report, ui)
+    attach_cloud_info(report, ui)
+    attach_user_data(report, ui)
+    add_bug_tags(report)
+    return True
+
+# vi: ts=4 expandtab
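Per the add_info() docstring, a distro opting into apport support would ship
a small package hook; a sketch of that hook follows (the path comes from the
docstring, the body is an assumed minimal pass-through):

    # /usr/share/apport/package-hooks/cloud-init.py
    from cloudinit import apport


    def add_info(report, ui):
        return apport.add_info(report, ui)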
diff --git a/cloudinit/cmd/devel/__init__.py b/cloudinit/cmd/devel/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cloudinit/cmd/devel/__init__.py
diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
new file mode 100644
index 0000000..35ca478
--- /dev/null
+++ b/cloudinit/cmd/devel/logs.py
@@ -0,0 +1,101 @@
+# Copyright (C) 2017 Canonical Ltd.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Define 'collect-logs' utility and handler to include in cloud-init cmd."""
+
+import argparse
+from cloudinit.util import (
+    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
+from cloudinit.temp_utils import tempdir
+from datetime import datetime
+import os
+import shutil
+
+
+CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
+CLOUDINIT_RUN_DIR = '/run/cloud-init'
+USER_DATA_FILE = '/var/lib/cloud/instance/user-data.txt'  # Optional
+
+
+def get_parser(parser=None):
+    """Build or extend and arg parser for collect-logs utility.
+
+    @param parser: Optional existing ArgumentParser instance representing the
+        collect-logs subcommand which will be extended to support the args of
+        this utility.
+
+    @returns: ArgumentParser with proper argument configuration.
+    """
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog='collect-logs',
+            description='Collect and tar all cloud-init debug info')
+    parser.add_argument(
+        "--tarfile", '-t', default='cloud-init.tar.gz',
+        help=('The tarfile to create containing all collected logs.'
+              ' Default: cloud-init.tar.gz'))
+    parser.add_argument(
+        "--include-userdata", '-u', default=False, action='store_true',
+        dest='userdata', help=(
+            'Optionally include user-data from {0} which could contain'
+            ' sensitive information.'.format(USER_DATA_FILE)))
+    return parser
+
+
+def _write_command_output_to_file(cmd, filename):
+    """Helper which runs a command and writes output or error to filename."""
+    try:
+        out, _ = subp(cmd)
+    except ProcessExecutionError as e:
+        write_file(filename, str(e))
+    else:
+        write_file(filename, out)
+
+
+def collect_logs(tarfile, include_userdata):
+    """Collect all cloud-init logs and tar them up into the provided tarfile.
+
+    @param tarfile: The path of the tar-gzipped file to create.
+    @param include_userdata: Boolean, true means include user-data.
+    """
+    tarfile = os.path.abspath(tarfile)
+    date = datetime.utcnow().date().strftime('%Y-%m-%d')
+    log_dir = 'cloud-init-logs-{0}'.format(date)
+    with tempdir(dir='/tmp') as tmp_dir:
+        log_dir = os.path.join(tmp_dir, log_dir)
+        _write_command_output_to_file(
+            ['dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'],
+            os.path.join(log_dir, 'version'))
+        _write_command_output_to_file(
+            ['dmesg'], os.path.join(log_dir, 'dmesg.txt'))
+        _write_command_output_to_file(
+            ['journalctl', '-o', 'short-precise'],
+            os.path.join(log_dir, 'journal.txt'))
+        for log in CLOUDINIT_LOGS:
+            copy(log, log_dir)
+        if include_userdata:
+            copy(USER_DATA_FILE, log_dir)
+        run_dir = os.path.join(log_dir, 'run')
+        ensure_dir(run_dir)
+        shutil.copytree(CLOUDINIT_RUN_DIR, os.path.join(run_dir, 'cloud-init'))
+        with chdir(tmp_dir):
+            subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
+
+
+def handle_collect_logs_args(name, args):
+    """Handle calls to 'cloud-init collect-logs' as a subcommand."""
+    collect_logs(args.tarfile, args.userdata)
+
+
+def main():
+    """Tool to collect and tar all cloud-init related logs."""
+    parser = get_parser()
+    handle_collect_logs_args('collect-logs', parser.parse_args())
+    return 0
+
+
+if __name__ == '__main__':
+    main()
+
+# vi: ts=4 expandtab
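
A minimal sketch of driving the new logs module directly; collecting for real needs root to read the journal and /var/log, and the tarfile path here is illustrative:

    from cloudinit.cmd.devel import logs

    parser = logs.get_parser()
    args = parser.parse_args(['--tarfile', '/tmp/ci-logs.tgz', '--include-userdata'])
    logs.collect_logs(args.tarfile, args.userdata)
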
diff --git a/cloudinit/cmd/devel/parser.py b/cloudinit/cmd/devel/parser.py
new file mode 100644
index 0000000..acacc4e
--- /dev/null
+++ b/cloudinit/cmd/devel/parser.py
@@ -0,0 +1,26 @@
+# Copyright (C) 2017 Canonical Ltd.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Define 'devel' subcommand argument parsers to include in cloud-init cmd."""
+
+import argparse
+from cloudinit.config.schema import (
+    get_parser as schema_parser, handle_schema_args)
+
+
+def get_parser(parser=None):
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog='cloudinit-devel',
+            description='Run development cloud-init tools')
+    subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
+    subparsers.required = True
+
+    parser_schema = subparsers.add_parser(
+        'schema', help='Validate cloud-config files or document schema')
+    # Construct schema subcommand parser
+    schema_parser(parser_schema)
+    parser_schema.set_defaults(action=('schema', handle_schema_args))
+
+    return parser
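
Each subparser registers an action=(name, handler) tuple, so callers can dispatch without knowing the subcommand. A sketch, assuming the schema subparser (defined outside this diff) accepts a --config-file flag:

    from cloudinit.cmd.devel import parser as devel_parser

    p = devel_parser.get_parser()
    args = p.parse_args(['schema', '--config-file', 'my-user-data.yaml'])
    subcommand, handler = args.action
    handler(subcommand, args)
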
diff --git a/cloudinit/cmd/devel/tests/__init__.py b/cloudinit/cmd/devel/tests/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cloudinit/cmd/devel/tests/__init__.py
diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
new file mode 100644
index 0000000..dc4947c
--- /dev/null
+++ b/cloudinit/cmd/devel/tests/test_logs.py
@@ -0,0 +1,120 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.cmd.devel import logs
+from cloudinit.util import ensure_dir, load_file, subp, write_file
+from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
+from datetime import datetime
+import os
+
+
+class TestCollectLogs(FilesystemMockingTestCase):
+
+    def setUp(self):
+        super(TestCollectLogs, self).setUp()
+        self.new_root = self.tmp_dir()
+        self.run_dir = self.tmp_path('run', self.new_root)
+
+    def test_collect_logs_creates_tarfile(self):
+        """collect-logs creates a tarfile with all related cloud-init info."""
+        log1 = self.tmp_path('cloud-init.log', self.new_root)
+        write_file(log1, 'cloud-init-log')
+        log2 = self.tmp_path('cloud-init-output.log', self.new_root)
+        write_file(log2, 'cloud-init-output-log')
+        ensure_dir(self.run_dir)
+        write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        output_tarfile = self.tmp_path('logs.tgz')
+
+        date = datetime.utcnow().date().strftime('%Y-%m-%d')
+        date_logdir = 'cloud-init-logs-{0}'.format(date)
+
+        expected_subp = {
+            ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'):
+                '0.7fake\n',
+            ('dmesg',): 'dmesg-out\n',
+            ('journalctl', '-o', 'short-precise'): 'journal-out\n',
+            ('tar', 'czvf', output_tarfile, date_logdir): ''
+        }
+
+        def fake_subp(cmd):
+            cmd_tuple = tuple(cmd)
+            if cmd_tuple not in expected_subp:
+                raise AssertionError(
+                    'Unexpected command provided to subp: {0}'.format(cmd))
+            if cmd == ['tar', 'czvf', output_tarfile, date_logdir]:
+                subp(cmd)  # Pass through tar cmd so we can check output
+            return expected_subp[cmd_tuple], ''
+
+        wrap_and_call(
+            'cloudinit.cmd.devel.logs',
+            {'subp': {'side_effect': fake_subp},
+             'CLOUDINIT_LOGS': {'new': [log1, log2]},
+             'CLOUDINIT_RUN_DIR': {'new': self.run_dir}},
+            logs.collect_logs, output_tarfile, include_userdata=False)
+        # unpack the tarfile and check file contents
+        subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root])
+        out_logdir = self.tmp_path(date_logdir, self.new_root)
+        self.assertEqual(
+            '0.7fake\n',
+            load_file(os.path.join(out_logdir, 'version')))
+        self.assertEqual(
+            'cloud-init-log',
+            load_file(os.path.join(out_logdir, 'cloud-init.log')))
+        self.assertEqual(
+            'cloud-init-output-log',
+            load_file(os.path.join(out_logdir, 'cloud-init-output.log')))
+        self.assertEqual(
+            'dmesg-out\n',
+            load_file(os.path.join(out_logdir, 'dmesg.txt')))
+        self.assertEqual(
+            'journal-out\n',
+            load_file(os.path.join(out_logdir, 'journal.txt')))
+        self.assertEqual(
+            'results',
+            load_file(
+                os.path.join(out_logdir, 'run', 'cloud-init', 'results.json')))
+
+    def test_collect_logs_includes_optional_userdata(self):
+        """collect-logs includes user-data when --include-userdata is set."""
+        log1 = self.tmp_path('cloud-init.log', self.new_root)
+        write_file(log1, 'cloud-init-log')
+        log2 = self.tmp_path('cloud-init-output.log', self.new_root)
+        write_file(log2, 'cloud-init-output-log')
+        userdata = self.tmp_path('user-data.txt', self.new_root)
+        write_file(userdata, 'user-data')
+        ensure_dir(self.run_dir)
+        write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        output_tarfile = self.tmp_path('logs.tgz')
+
+        date = datetime.utcnow().date().strftime('%Y-%m-%d')
+        date_logdir = 'cloud-init-logs-{0}'.format(date)
+
+        expected_subp = {
+            ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'):
+                '0.7fake',
+            ('dmesg',): 'dmesg-out\n',
+            ('journalctl', '-o', 'short-precise'): 'journal-out\n',
+            ('tar', 'czvf', output_tarfile, date_logdir): ''
+        }
+
+        def fake_subp(cmd):
+            cmd_tuple = tuple(cmd)
+            if cmd_tuple not in expected_subp:
+                raise AssertionError(
+                    'Unexpected command provided to subp: {0}'.format(cmd))
+            if cmd == ['tar', 'czvf', output_tarfile, date_logdir]:
+                subp(cmd)  # Pass through tar cmd so we can check output
+            return expected_subp[cmd_tuple], ''
+
+        wrap_and_call(
+            'cloudinit.cmd.devel.logs',
+            {'subp': {'side_effect': fake_subp},
+             'CLOUDINIT_LOGS': {'new': [log1, log2]},
+             'CLOUDINIT_RUN_DIR': {'new': self.run_dir},
+             'USER_DATA_FILE': {'new': userdata}},
+            logs.collect_logs, output_tarfile, include_userdata=True)
+        # unpack the tarfile and check file contents
+        subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root])
+        out_logdir = self.tmp_path(date_logdir, self.new_root)
+        self.assertEqual(
+            'user-data',
+            load_file(os.path.join(out_logdir, 'user-data.txt')))
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 139e03b..6fb9d9e 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -50,13 +50,6 @@ WELCOME_MSG_TPL = ("Cloud-init v. {version} running '{action}' at "
 # Module section template
 MOD_SECTION_TPL = "cloud_%s_modules"
 
-# Things u can query on
-QUERY_DATA_TYPES = [
-    'data',
-    'data_raw',
-    'instance_id',
-]
-
 # Frequency shortname to full name
 # (so users don't have to remember the full name...)
 FREQ_SHORT_NAMES = {
@@ -510,11 +503,6 @@ def main_modules(action_name, args):
     return run_module_section(mods, name, name)
 
 
-def main_query(name, _args):
-    raise NotImplementedError(("Action '%s' is not"
-                               " currently implemented") % (name))
-
-
 def main_single(name, args):
     # Cloud-init single stage is broken up into the following sub-stages
     # 1. Ensure that the init object fetches its config without errors
@@ -688,11 +676,10 @@ def main_features(name, args):
 
 
 def main(sysv_args=None):
-    if sysv_args is not None:
-        parser = argparse.ArgumentParser(prog=sysv_args[0])
-        sysv_args = sysv_args[1:]
-    else:
-        parser = argparse.ArgumentParser()
+    if not sysv_args:
+        sysv_args = sys.argv
+    parser = argparse.ArgumentParser(prog=sysv_args[0])
+    sysv_args = sysv_args[1:]
 
     # Top level args
     parser.add_argument('--version', '-v', action='version',
@@ -713,7 +700,8 @@ def main(sysv_args=None):
                         default=False)
 
     parser.set_defaults(reporter=None)
-    subparsers = parser.add_subparsers()
+    subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
+    subparsers.required = True
 
     # Each action and its sub-options (if any)
     parser_init = subparsers.add_parser('init',
@@ -737,17 +725,6 @@ def main(sysv_args=None):
                             choices=('init', 'config', 'final'))
     parser_mod.set_defaults(action=('modules', main_modules))
 
-    # These settings are used when you want to query information
-    # stored in the cloud-init data objects/directories/files
-    parser_query = subparsers.add_parser('query',
-                                         help=('query information stored '
-                                               'in cloud-init'))
-    parser_query.add_argument("--name", '-n', action="store",
-                              help="item name to query on",
-                              required=True,
-                              choices=QUERY_DATA_TYPES)
-    parser_query.set_defaults(action=('query', main_query))
-
     # This subcommand allows you to run a single module
     parser_single = subparsers.add_parser('single',
                                           help=('run a single module '))
@@ -781,15 +758,39 @@ def main(sysv_args=None):
                                             help=('list defined features'))
     parser_features.set_defaults(action=('features', main_features))
 
+    parser_analyze = subparsers.add_parser(
+        'analyze', help='Devel tool: Analyze cloud-init logs and data')
+
+    parser_devel = subparsers.add_parser(
+        'devel', help='Run development tools')
+
+    parser_collect_logs = subparsers.add_parser(
+        'collect-logs', help='Collect and tar all cloud-init debug info')
+
+    if sysv_args:
+        # Only load subparsers if subcommand is specified to avoid load cost
+        if sysv_args[0] == 'analyze':
+            from cloudinit.analyze.__main__ import get_parser as analyze_parser
+            # Construct analyze subcommand parser
+            analyze_parser(parser_analyze)
+        elif sysv_args[0] == 'devel':
+            from cloudinit.cmd.devel.parser import get_parser as devel_parser
+            # Construct devel subcommand parser
+            devel_parser(parser_devel)
+        elif sysv_args[0] == 'collect-logs':
+            from cloudinit.cmd.devel.logs import (
+                get_parser as logs_parser, handle_collect_logs_args)
+            logs_parser(parser_collect_logs)
+            parser_collect_logs.set_defaults(
+                action=('collect-logs', handle_collect_logs_args))
+
     args = parser.parse_args(args=sysv_args)
 
-    try:
-        (name, functor) = args.action
-    except AttributeError:
-        parser.error('too few arguments')
+    # A subcommand is guaranteed (subparsers.required) and each sets args.action
+    (name, functor) = args.action
 
     # Setup basic logging to start (until reinitialized)
-    # iff in debug mode...
+    # iff in debug mode.
     if args.debug:
         logging.setupBasicLogging()
 
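
The conditional imports above keep frequently-run stages like 'cloud-init init' from paying the import cost of the analyze/devel trees. A generic sketch of the same lazy-subparser pattern (names illustrative, not from cloud-init):

    import argparse
    import sys


    def main(argv=None):
        argv = sys.argv[1:] if argv is None else argv
        parser = argparse.ArgumentParser(prog='tool')
        subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
        subparsers.required = True
        parser_heavy = subparsers.add_parser('heavy', help='costly subcommand')
        if argv and argv[0] == 'heavy':
            import json as heavy_mod  # stand-in for an expensive import
            parser_heavy.set_defaults(
                action=('heavy', lambda _name, _args: heavy_mod.dumps({})))
        args = parser.parse_args(argv)
        name, functor = args.action
        return functor(name, args)
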
diff --git a/cloudinit/config/cc_bootcmd.py b/cloudinit/config/cc_bootcmd.py
index 604f93b..233da1e 100644
--- a/cloudinit/config/cc_bootcmd.py
+++ b/cloudinit/config/cc_bootcmd.py
@@ -3,44 +3,73 @@
 #
 # Author: Scott Moser <scott.moser@xxxxxxxxxxxxx>
 # Author: Juerg Haefliger <juerg.haefliger@xxxxxx>
+# Author: Chad Smith <chad.smith@xxxxxxxxxxxxx>
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-"""
-Bootcmd
--------
-**Summary:** run commands early in boot process
-
-This module runs arbitrary commands very early in the boot process,
-only slightly after a boothook would run. This is very similar to a
-boothook, but more user friendly. The environment variable ``INSTANCE_ID``
-will be set to the current instance id for all run commands. Commands can be
-specified either as lists or strings. For invocation details, see ``runcmd``.
-
-.. note::
-    bootcmd should only be used for things that could not be done later in the
-    boot process.
-
-**Internal name:** ``cc_bootcmd``
-
-**Module frequency:** per always
-
-**Supported distros:** all
-
-**Config keys**::
-
-    bootcmd:
-        - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
-        - [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
-"""
+"""Bootcmd: run arbitrary commands early in the boot process."""
 
 import os
+from textwrap import dedent
 
+from cloudinit.config.schema import (
+    get_schema_doc, validate_cloudconfig_schema)
 from cloudinit.settings import PER_ALWAYS
+from cloudinit import temp_utils
 from cloudinit import util
 
 frequency = PER_ALWAYS
 
+# The schema definition for each cloud-config module is a strict contract for
+# describing supported configuration parameters for each cloud-config section.
+# It allows cloud-config to validate and alert users to invalid or ignored
+# configuration options before actually attempting to deploy with said
+# configuration.
+
+distros = ['all']
+
+schema = {
+    'id': 'cc_bootcmd',
+    'name': 'Bootcmd',
+    'title': 'Run arbitrary commands early in the boot process',
+    'description': dedent("""\
+        This module runs arbitrary commands very early in the boot process,
+        only slightly after a boothook would run. This is very similar to a
+        boothook, but more user friendly. The environment variable
+        ``INSTANCE_ID`` will be set to the current instance id for all run
+        commands. Commands can be specified either as lists or strings. For
+        invocation details, see ``runcmd``.
+
+        .. note::
+            bootcmd should only be used for things that could not be done later
+            in the boot process."""),
+    'distros': distros,
+    'examples': [dedent("""\
+        bootcmd:
+            - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
+            - [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
+    """)],
+    'frequency': PER_ALWAYS,
+    'type': 'object',
+    'properties': {
+        'bootcmd': {
+            'type': 'array',
+            'items': {
+                'oneOf': [
+                    {'type': 'array', 'items': {'type': 'string'}},
+                    {'type': 'string'}]
+            },
+            'additionalItems': False,  # Reject items that are neither string nor list
+            'additionalProperties': False,
+            'minItems': 1,
+            'required': [],
+            'uniqueItems': True
+        }
+    }
+}
+
+__doc__ = get_schema_doc(schema)  # Supplement python help()
+
 
 def handle(name, cfg, cloud, log, _args):
 
@@ -49,13 +78,14 @@ def handle(name, cfg, cloud, log, _args):
                    " no 'bootcmd' key in configuration"), name)
         return
 
-    with util.ExtendedTemporaryFile(suffix=".sh") as tmpf:
+    validate_cloudconfig_schema(cfg, schema)
+    with temp_utils.ExtendedTemporaryFile(suffix=".sh") as tmpf:
         try:
             content = util.shellify(cfg["bootcmd"])
             tmpf.write(util.encode_text(content))
             tmpf.flush()
-        except Exception:
-            util.logexc(log, "Failed to shellify bootcmd")
+        except Exception as e:
+            util.logexc(log, "Failed to shellify bootcmd: %s", str(e))
             raise
 
         try:
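
Since validate_cloudconfig_schema is non-strict by default (it logs warnings rather than raising), the new bootcmd schema is easiest to exercise in strict mode; a sketch:

    from cloudinit.config.cc_bootcmd import schema
    from cloudinit.config.schema import validate_cloudconfig_schema

    cfg = {'bootcmd': ['echo hi > /tmp/hi.txt',
                       ['cloud-init-per', 'once', 'mymkfs', 'mkfs', '/dev/vdb']]}
    validate_cloudconfig_schema(cfg, schema, strict=True)  # raises on invalid config
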
diff --git a/cloudinit/config/cc_chef.py b/cloudinit/config/cc_chef.py
index 02c70b1..46abedd 100644
--- a/cloudinit/config/cc_chef.py
+++ b/cloudinit/config/cc_chef.py
@@ -58,6 +58,9 @@ file).
       log_level:
       log_location:
       node_name:
+      omnibus_url:
+      omnibus_url_retries:
+      omnibus_version:
       pid_file:
       server_url:
       show_time:
@@ -279,6 +282,31 @@ def run_chef(chef_cfg, log):
     util.subp(cmd, capture=False)
 
 
+def install_chef_from_omnibus(url=None, retries=None, omnibus_version=None):
+    """Install an omnibus unified package from url.
+
+    @param url: URL where blob of chef content may be downloaded. Defaults to
+        OMNIBUS_URL.
+    @param retries: Number of retries to perform when attempting to read url.
+        Defaults to OMNIBUS_URL_RETRIES
+    @param omnibus_version: Optional version string to require for omnibus
+        install.
+    """
+    if url is None:
+        url = OMNIBUS_URL
+    if retries is None:
+        retries = OMNIBUS_URL_RETRIES
+
+    if omnibus_version is None:
+        args = []
+    else:
+        args = ['-v', omnibus_version]
+    content = url_helper.readurl(url=url, retries=retries).contents
+    return util.subp_blob_in_tempfile(
+        blob=content, args=args,
+        basename='chef-omnibus-install', capture=False)
+
+
 def install_chef(cloud, chef_cfg, log):
     # If chef is not installed, we install chef based on 'install_type'
     install_type = util.get_cfg_option_str(chef_cfg, 'install_type',
@@ -297,17 +325,11 @@ def install_chef(cloud, chef_cfg, log):
         # This will install and run the chef-client from packages
         cloud.distro.install_packages(('chef',))
     elif install_type == 'omnibus':
-        # This will install as a omnibus unified package
-        url = util.get_cfg_option_str(chef_cfg, "omnibus_url", OMNIBUS_URL)
-        retries = max(0, util.get_cfg_option_int(chef_cfg,
-                                                 "omnibus_url_retries",
-                                                 default=OMNIBUS_URL_RETRIES))
-        content = url_helper.readurl(url=url, retries=retries).contents
-        with util.tempdir() as tmpd:
-            # Use tmpdir over tmpfile to avoid 'text file busy' on execute
-            tmpf = "%s/chef-omnibus-install" % tmpd
-            util.write_file(tmpf, content, mode=0o700)
-            util.subp([tmpf], capture=False)
+        omnibus_version = util.get_cfg_option_str(chef_cfg, "omnibus_version")
+        install_chef_from_omnibus(
+            url=util.get_cfg_option_str(chef_cfg, "omnibus_url"),
+            retries=util.get_cfg_option_int(chef_cfg, "omnibus_url_retries"),
+            omnibus_version=omnibus_version)
     else:
         log.warn("Unknown chef install type '%s'", install_type)
         run = False
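
A sketch of calling the new helper directly; the pinned version is illustrative, and the call downloads from OMNIBUS_URL, so it needs network access and root:

    from cloudinit.config.cc_chef import install_chef_from_omnibus

    install_chef_from_omnibus(retries=3, omnibus_version='12.3.0')
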
diff --git a/cloudinit/config/cc_landscape.py b/cloudinit/config/cc_landscape.py
index 86b7138..8f9f1ab 100644
--- a/cloudinit/config/cc_landscape.py
+++ b/cloudinit/config/cc_landscape.py
@@ -57,7 +57,7 @@ The following default client config is provided, but can be overridden::
 
 import os
 
-from six import StringIO
+from six import BytesIO
 
 from configobj import ConfigObj
 
@@ -109,7 +109,7 @@ def handle(_name, cfg, cloud, log, _args):
         ls_cloudcfg,
     ]
     merged = merge_together(merge_data)
-    contents = StringIO()
+    contents = BytesIO()
     merged.write(contents)
 
     util.ensure_dir(os.path.dirname(LSC_CLIENT_CFG_FILE))
diff --git a/cloudinit/config/cc_ntp.py b/cloudinit/config/cc_ntp.py
index 31ed64e..15ae1ec 100644
--- a/cloudinit/config/cc_ntp.py
+++ b/cloudinit/config/cc_ntp.py
@@ -4,39 +4,10 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-"""
-NTP
----
-**Summary:** enable and configure ntp
-
-Handle ntp configuration. If ntp is not installed on the system and ntp
-configuration is specified, ntp will be installed. If there is a default ntp
-config file in the image or one is present in the distro's ntp package, it will
-be copied to ``/etc/ntp.conf.dist`` before any changes are made. A list of ntp
-pools and ntp servers can be provided under the ``ntp`` config key. If no ntp
-servers or pools are provided, 4 pools will be used in the format
-``{0-3}.{distro}.pool.ntp.org``.
-
-**Internal name:** ``cc_ntp``
-
-**Module frequency:** per instance
-
-**Supported distros:** centos, debian, fedora, opensuse, ubuntu
-
-**Config keys**::
-
-    ntp:
-        pools:
-            - 0.company.pool.ntp.org
-            - 1.company.pool.ntp.org
-            - ntp.myorg.org
-        servers:
-            - my.ntp.server.local
-            - ntp.ubuntu.com
-            - 192.168.23.2
-"""
+"""NTP: enable and configure ntp"""
 
-from cloudinit.config.schema import validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    get_schema_doc, validate_cloudconfig_schema)
 from cloudinit import log as logging
 from cloudinit.settings import PER_INSTANCE
 from cloudinit import templater
@@ -50,6 +21,7 @@ LOG = logging.getLogger(__name__)
 
 frequency = PER_INSTANCE
 NTP_CONF = '/etc/ntp.conf'
+TIMESYNCD_CONF = '/etc/systemd/timesyncd.conf.d/cloud-init.conf'
 NR_POOL_SERVERS = 4
 distros = ['centos', 'debian', 'fedora', 'opensuse', 'ubuntu']
 
@@ -75,10 +47,13 @@ schema = {
         ``{0-3}.{distro}.pool.ntp.org``."""),
     'distros': distros,
     'examples': [
-        {'ntp': {'pools': ['0.company.pool.ntp.org', '1.company.pool.ntp.org',
-                           'ntp.myorg.org'],
-                 'servers': ['my.ntp.server.local', 'ntp.ubuntu.com',
-                             '192.168.23.2']}}],
+        dedent("""\
+        ntp:
+          pools: [0.int.pool.ntp.org, 1.int.pool.ntp.org, ntp.myorg.org]
+          servers:
+            - ntp.server.local
+            - ntp.ubuntu.com
+            - 192.168.23.2""")],
     'frequency': PER_INSTANCE,
     'type': 'object',
     'properties': {
@@ -116,6 +91,8 @@ schema = {
     }
 }
 
+__doc__ = get_schema_doc(schema)  # Supplement python help()
+
 
 def handle(name, cfg, cloud, log, _args):
     """Enable and configure ntp."""
@@ -132,20 +109,50 @@ def handle(name, cfg, cloud, log, _args):
                             " is a %s %instead"), type_utils.obj_name(ntp_cfg))
 
     validate_cloudconfig_schema(cfg, schema)
+    if ntp_installable():
+        service_name = 'ntp'
+        confpath = NTP_CONF
+        template_name = None
+        packages = ['ntp']
+        check_exe = 'ntpd'
+    else:
+        service_name = 'systemd-timesyncd'
+        confpath = TIMESYNCD_CONF
+        template_name = 'timesyncd.conf'
+        packages = []
+        check_exe = '/lib/systemd/systemd-timesyncd'
+
     rename_ntp_conf()
     # ensure when ntp is installed it has a configuration file
     # to use instead of starting up with packaged defaults
-    write_ntp_config_template(ntp_cfg, cloud)
-    install_ntp(cloud.distro.install_packages, packages=['ntp'],
-                check_exe="ntpd")
-    # if ntp was already installed, it may not have started
+    write_ntp_config_template(ntp_cfg, cloud, confpath, template=template_name)
+    install_ntp(cloud.distro.install_packages, packages=packages,
+                check_exe=check_exe)
+
     try:
-        reload_ntp(systemd=cloud.distro.uses_systemd())
+        reload_ntp(service_name, systemd=cloud.distro.uses_systemd())
     except util.ProcessExecutionError as e:
         LOG.exception("Failed to reload/start ntp service: %s", e)
         raise
 
 
+def ntp_installable():
+    """Check if we can install the ntp package.
+
+    Ubuntu-Core systems do not have an ntp package available, so
+    we always return False. Other systems require a package manager to
+    install the ntp package. If we fail to find one of the known package
+    managers, then we cannot install ntp.
+    """
+    if util.system_is_snappy():
+        return False
+
+    if any(map(util.which, ['apt-get', 'dnf', 'yum', 'zypper'])):
+        return True
+
+    return False
+
+
 def install_ntp(install_func, packages=None, check_exe="ntpd"):
     if util.which(check_exe):
         return
@@ -156,7 +163,7 @@ def install_ntp(install_func, packages=None, check_exe="ntpd"):
 
 
 def rename_ntp_conf(config=None):
-    """Rename any existing ntp.conf file and render from template"""
+    """Rename any existing ntp.conf file"""
     if config is None:  # For testing
         config = NTP_CONF
     if os.path.exists(config):
@@ -171,7 +178,7 @@ def generate_server_names(distro):
     return names
 
 
-def write_ntp_config_template(cfg, cloud):
+def write_ntp_config_template(cfg, cloud, path, template=None):
     servers = cfg.get('servers', [])
     pools = cfg.get('pools', [])
 
@@ -185,19 +192,20 @@ def write_ntp_config_template(cfg, cloud):
         'pools': pools,
     }
 
-    template_fn = cloud.get_template_filename('ntp.conf.%s' %
-                                              (cloud.distro.name))
+    if template is None:
+        template = 'ntp.conf.%s' % cloud.distro.name
+
+    template_fn = cloud.get_template_filename(template)
     if not template_fn:
         template_fn = cloud.get_template_filename('ntp.conf')
         if not template_fn:
             raise RuntimeError(("No template found, "
-                                "not rendering %s"), NTP_CONF)
+                                "not rendering %s"), path)
 
-    templater.render_to_file(template_fn, NTP_CONF, params)
+    templater.render_to_file(template_fn, path, params)
 
 
-def reload_ntp(systemd=False):
-    service = 'ntp'
+def reload_ntp(service, systemd=False):
     if systemd:
         cmd = ['systemctl', 'reload-or-restart', service]
     else:
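
The handle() changes reduce to a two-way dispatch between ntp and systemd-timesyncd; a condensed sketch (assumes a systemd host and privileges to restart services):

    from cloudinit.config import cc_ntp

    if cc_ntp.ntp_installable():
        cc_ntp.reload_ntp('ntp', systemd=True)
    else:
        cc_ntp.reload_ntp('systemd-timesyncd', systemd=True)
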
diff --git a/cloudinit/config/cc_puppet.py b/cloudinit/config/cc_puppet.py
index dc11561..28b1d56 100644
--- a/cloudinit/config/cc_puppet.py
+++ b/cloudinit/config/cc_puppet.py
@@ -15,21 +15,23 @@ This module handles puppet installation and configuration. If the ``puppet``
 key does not exist in global configuration, no action will be taken. If a
 config entry for ``puppet`` is present, then by default the latest version of
 puppet will be installed. If ``install`` is set to ``false``, puppet will not
-be installed. However, this may result in an error if puppet is not already
+be installed. However, this will result in an error if puppet is not already
 present on the system. The version of puppet to be installed can be specified
 under ``version``, and defaults to ``none``, which selects the latest version
 in the repos. If the ``puppet`` config key exists in the config archive, this
 module will attempt to start puppet even if no installation was performed.
 
-Puppet configuration can be specified under the ``conf`` key. The configuration
-is specified as a dictionary which is converted into ``<key>=<value>`` format
-and appended to ``puppet.conf`` under the ``[puppetd]`` section. The
+Puppet configuration can be specified under the ``conf`` key. The
+configuration is specified as a dictionary containing high-level ``<section>``
+keys and lists of ``<key>=<value>`` pairs within each section. Each section
+name and ``<key>=<value>`` pair is written directly to ``puppet.conf``. As
+such, section names should be one of: ``main``, ``master``, ``agent`` or
+``user`` and keys should be valid puppet configuration options. The
 ``certname`` key supports string substitutions for ``%i`` and ``%f``,
 corresponding to the instance id and fqdn of the machine respectively.
-If ``ca_cert`` is present under ``conf``, it will not be written to
-``puppet.conf``, but instead will be used as the puppermaster certificate.
-It should be specified in pem format as a multi-line string (using the ``|``
-yaml notation).
+If ``ca_cert`` is present, it will not be written to ``puppet.conf``, but
+instead will be used as the puppetmaster certificate. It should be specified
+in pem format as a multi-line string (using the ``|`` yaml notation).
 
 **Internal name:** ``cc_puppet``
 
@@ -43,12 +45,13 @@ yaml notation).
         install: <true/false>
         version: <version>
         conf:
-            server: "puppetmaster.example.org"
-            certname: "%i.%f"
-            ca_cert: |
-                -------BEGIN CERTIFICATE-------
-                <cert data>
-                -------END CERTIFICATE-------
+            agent:
+                server: "puppetmaster.example.org"
+                certname: "%i.%f"
+                ca_cert: |
+                    -------BEGIN CERTIFICATE-------
+                    <cert data>
+                    -------END CERTIFICATE-------
 """
 
 from six import StringIO
@@ -127,7 +130,7 @@ def handle(name, cfg, cloud, log, _args):
                 util.write_file(PUPPET_SSL_CERT_PATH, cfg)
                 util.chownbyname(PUPPET_SSL_CERT_PATH, 'puppet', 'root')
             else:
-                # Iterate throug the config items, we'll use ConfigParser.set
+                # Iterate through the config items, we'll use ConfigParser.set
                 # to overwrite or create new items as needed
                 for (o, v) in cfg.items():
                     if o == 'certname':
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index ceee952..f774baa 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -6,31 +6,8 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-"""
-Resizefs
---------
-**Summary:** resize filesystem
+"""Resizefs: cloud-config module which resizes the filesystem"""
 
-Resize a filesystem to use all avaliable space on partition. This module is
-useful along with ``cc_growpart`` and will ensure that if the root partition
-has been resized the root filesystem will be resized along with it. By default,
-``cc_resizefs`` will resize the root partition and will block the boot process
-while the resize command is running. Optionally, the resize operation can be
-performed in the background while cloud-init continues running modules. This
-can be enabled by setting ``resize_rootfs`` to ``true``. This module can be
-disabled altogether by setting ``resize_rootfs`` to ``false``.
-
-**Internal name:** ``cc_resizefs``
-
-**Module frequency:** per always
-
-**Supported distros:** all
-
-**Config keys**::
-
-    resize_rootfs: <true/false/"noblock">
-    resize_rootfs_tmp: <directory>
-"""
 
 import errno
 import getopt
@@ -38,11 +15,47 @@ import os
 import re
 import shlex
 import stat
+from textwrap import dedent
 
+from cloudinit.config.schema import (
+    get_schema_doc, validate_cloudconfig_schema)
 from cloudinit.settings import PER_ALWAYS
 from cloudinit import util
 
+NOBLOCK = "noblock"
+
 frequency = PER_ALWAYS
+distros = ['all']
+
+schema = {
+    'id': 'cc_resizefs',
+    'name': 'Resizefs',
+    'title': 'Resize filesystem',
+    'description': dedent("""\
+        Resize a filesystem to use all available space on the partition. This
+        module is useful along with ``cc_growpart`` and will ensure that if the
+        root partition has been resized the root filesystem will be resized
+        along with it. By default, ``cc_resizefs`` will resize the root
+        partition and will block the boot process while the resize command is
+        running. Optionally, the resize operation can be performed in the
+        background while cloud-init continues running modules. This can be
+        enabled by setting ``resize_rootfs`` to ``true``. This module can be
+        disabled altogether by setting ``resize_rootfs`` to ``false``."""),
+    'distros': distros,
+    'examples': [
+        'resize_rootfs: false  # disable root filesystem resize operation'],
+    'frequency': PER_ALWAYS,
+    'type': 'object',
+    'properties': {
+        'resize_rootfs': {
+            'enum': [True, False, NOBLOCK],
+            'description': dedent("""\
+                Whether to resize the root partition. Default: 'true'""")
+        }
+    }
+}
+
+__doc__ = get_schema_doc(schema)  # Supplement python help()
 
 
 def _resize_btrfs(mount_point, devpth):
@@ -54,7 +67,7 @@ def _resize_ext(mount_point, devpth):
 
 
 def _resize_xfs(mount_point, devpth):
-    return ('xfs_growfs', devpth)
+    return ('xfs_growfs', mount_point)
 
 
 def _resize_ufs(mount_point, devpth):
@@ -131,8 +144,6 @@ RESIZE_FS_PRECHECK_CMDS = {
     'ufs': _can_skip_resize_ufs
 }
 
-NOBLOCK = "noblock"
-
 
 def rootdev_from_cmdline(cmdline):
     found = None
@@ -161,71 +172,77 @@ def can_skip_resize(fs_type, resize_what, devpth):
     return False
 
 
-def handle(name, cfg, _cloud, log, args):
-    if len(args) != 0:
-        resize_root = args[0]
-    else:
-        resize_root = util.get_cfg_option_str(cfg, "resize_rootfs", True)
+def is_device_path_writable_block(devpath, info, log):
+    """Return True if devpath is a writable block device.
 
-    if not util.translate_bool(resize_root, addons=[NOBLOCK]):
-        log.debug("Skipping module named %s, resizing disabled", name)
-        return
-
-    # TODO(harlowja) is the directory ok to be used??
-    resize_root_d = util.get_cfg_option_str(cfg, "resize_rootfs_tmp", "/run")
-    util.ensure_dir(resize_root_d)
-
-    # TODO(harlowja): allow what is to be resized to be configurable??
-    resize_what = "/"
-    result = util.get_mount_info(resize_what, log)
-    if not result:
-        log.warn("Could not determine filesystem type of %s", resize_what)
-        return
-
-    (devpth, fs_type, mount_point) = result
-
-    info = "dev=%s mnt_point=%s path=%s" % (devpth, mount_point, resize_what)
-    log.debug("resize_info: %s" % info)
+    @param devpath: Path to the root device we want to resize.
+    @param info: String representing information about the requested device.
+    @param log: Logger to which logs will be added upon error.
 
+    @returns Boolean True if block device is writable
+    """
     container = util.is_container()
 
     # Ensure the path is a block device.
-    if (devpth == "/dev/root" and not os.path.exists(devpth) and
+    if (devpath == "/dev/root" and not os.path.exists(devpath) and
             not container):
-        devpth = util.rootdev_from_cmdline(util.get_cmdline())
-        if devpth is None:
+        devpath = util.rootdev_from_cmdline(util.get_cmdline())
+        if devpath is None:
             log.warn("Unable to find device '/dev/root'")
-            return
-        log.debug("Converted /dev/root to '%s' per kernel cmdline", devpth)
+            return False
+        log.debug("Converted /dev/root to '%s' per kernel cmdline", devpath)
+
+    if devpath == 'overlayroot':
+        log.debug("Not attempting to resize devpath '%s': %s", devpath, info)
+        return False
 
     try:
-        statret = os.stat(devpth)
+        statret = os.stat(devpath)
     except OSError as exc:
         if container and exc.errno == errno.ENOENT:
             log.debug("Device '%s' did not exist in container. "
-                      "cannot resize: %s", devpth, info)
+                      "cannot resize: %s", devpath, info)
         elif exc.errno == errno.ENOENT:
             log.warn("Device '%s' did not exist. cannot resize: %s",
-                     devpth, info)
+                     devpath, info)
         else:
             raise exc
-        return
-
-    if not os.access(devpth, os.W_OK):
-        if container:
-            log.debug("'%s' not writable in container. cannot resize: %s",
-                      devpth, info)
-        else:
-            log.warn("'%s' not writable. cannot resize: %s", devpth, info)
-        return
+        return False
 
     if not stat.S_ISBLK(statret.st_mode) and not stat.S_ISCHR(statret.st_mode):
         if container:
             log.debug("device '%s' not a block device in container."
-                      " cannot resize: %s" % (devpth, info))
+                      " cannot resize: %s" % (devpath, info))
         else:
             log.warn("device '%s' not a block device. cannot resize: %s" %
-                     (devpth, info))
+                     (devpath, info))
+        return False
+    return True
+
+
+def handle(name, cfg, _cloud, log, args):
+    if len(args) != 0:
+        resize_root = args[0]
+    else:
+        resize_root = util.get_cfg_option_str(cfg, "resize_rootfs", True)
+    validate_cloudconfig_schema(cfg, schema)
+    if not util.translate_bool(resize_root, addons=[NOBLOCK]):
+        log.debug("Skipping module named %s, resizing disabled", name)
+        return
+
+    # TODO(harlowja): allow what is to be resized to be configurable??
+    resize_what = "/"
+    result = util.get_mount_info(resize_what, log)
+    if not result:
+        log.warn("Could not determine filesystem type of %s", resize_what)
+        return
+
+    (devpth, fs_type, mount_point) = result
+
+    info = "dev=%s mnt_point=%s path=%s" % (devpth, mount_point, resize_what)
+    log.debug("resize_info: %s" % info)
+
+    if not is_device_path_writable_block(devpth, info, log):
         return
 
     resizer = None
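
With the precheck factored out of handle(), it can be exercised in isolation; the device path and info string below are placeholders:

    import logging
    from cloudinit.config.cc_resizefs import is_device_path_writable_block

    log = logging.getLogger('resizefs-demo')
    info = 'dev=/dev/vda1 mnt_point=/ path=/'
    if is_device_path_writable_block('/dev/vda1', info, log):
        print('device looks resizable')
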
diff --git a/cloudinit/config/cc_resolv_conf.py b/cloudinit/config/cc_resolv_conf.py
index 2548d1f..9812562 100644
--- a/cloudinit/config/cc_resolv_conf.py
+++ b/cloudinit/config/cc_resolv_conf.py
@@ -55,7 +55,7 @@ LOG = logging.getLogger(__name__)
 
 frequency = PER_INSTANCE
 
-distros = ['fedora', 'rhel', 'sles']
+distros = ['fedora', 'opensuse', 'rhel', 'sles']
 
 
 def generate_resolv_conf(template_fn, params, target_fname="/etc/resolv.conf"):
diff --git a/cloudinit/config/cc_runcmd.py b/cloudinit/config/cc_runcmd.py
index dfa8cb3..449872f 100644
--- a/cloudinit/config/cc_runcmd.py
+++ b/cloudinit/config/cc_runcmd.py
@@ -6,41 +6,70 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-"""
-Runcmd
-------
-**Summary:** run commands
+"""Runcmd: run arbitrary commands at rc.local with output to the console"""
 
-Run arbitrary commands at a rc.local like level with output to the console.
-Each item can be either a list or a string. If the item is a list, it will be
-properly executed as if passed to ``execve()`` (with the first arg as the
-command). If the item is a string, it will be written to a file and interpreted
-using ``sh``.
-
-.. note::
-    all commands must be proper yaml, so you have to quote any characters yaml
-    would eat (':' can be problematic)
-
-**Internal name:** ``cc_runcmd``
+from cloudinit.config.schema import (
+    get_schema_doc, validate_cloudconfig_schema)
+from cloudinit.distros import ALL_DISTROS
+from cloudinit.settings import PER_INSTANCE
+from cloudinit import util
 
-**Module frequency:** per instance
+import os
+from textwrap import dedent
 
-**Supported distros:** all
 
-**Config keys**::
+# The schema definition for each cloud-config module is a strict contract for
+# describing supported configuration parameters for each cloud-config section.
+# It allows cloud-config to validate and alert users to invalid or ignored
+# configuration options before actually attempting to deploy with said
+# configuration.
 
-    runcmd:
-        - [ ls, -l, / ]
-        - [ sh, -xc, "echo $(date) ': hello world!'" ]
-        - [ sh, -c, echo "=========hello world'=========" ]
-        - ls -l /root
-        - [ wget, "http://example.org", -O, /tmp/index.html ]
-"""
+distros = [ALL_DISTROS]
 
+schema = {
+    'id': 'cc_runcmd',
+    'name': 'Runcmd',
+    'title': 'Run arbitrary commands',
+    'description': dedent("""\
+        Run arbitrary commands at an rc.local-like level with output to the
+        console. Each item can be either a list or a string. If the item is a
+        list, it will be properly executed as if passed to ``execve()`` (with
+        the first arg as the command). If the item is a string, it will be
+        written to a file and interpreted using ``sh``.
 
-import os
+        .. note::
+            all commands must be proper yaml, so you have to quote any
+            characters yaml would eat (':' can be problematic)"""),
+    'distros': distros,
+    'examples': [dedent("""\
+        runcmd:
+            - [ ls, -l, / ]
+            - [ sh, -xc, "echo $(date) ': hello world!'" ]
+            - [ sh, -c, echo "=========hello world'=========" ]
+            - ls -l /root
+            - [ wget, "http://example.org", -O, /tmp/index.html ]
+    """)],
+    'frequency': PER_INSTANCE,
+    'type': 'object',
+    'properties': {
+        'runcmd': {
+            'type': 'array',
+            'items': {
+                'oneOf': [
+                    {'type': 'array', 'items': {'type': 'string'}},
+                    {'type': 'string'}]
+            },
+            'additionalItems': False,  # Reject items that are neither string nor list
+            'additionalProperties': False,
+            'minItems': 1,
+            'required': [],
+            'uniqueItems': True
+        }
+    }
+}
 
-from cloudinit import util
+__doc__ = get_schema_doc(schema)  # Supplement python help()
 
 
 def handle(name, cfg, cloud, log, _args):
@@ -49,6 +78,7 @@ def handle(name, cfg, cloud, log, _args):
                    " no 'runcmd' key in configuration"), name)
         return
 
+    validate_cloudconfig_schema(cfg, schema)
     out_fn = os.path.join(cloud.get_ipath('scripts'), "runcmd")
     cmd = cfg["runcmd"]
     try:
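
As the description notes, list items run execve-style while strings go through sh. util.shellify (the same helper bootcmd uses) makes the distinction visible:

    from cloudinit import util

    script = util.shellify([
        ['ls', '-l', '/'],           # a list becomes an argv invocation
        "echo $(date) ': hello'",    # a string is interpreted by sh
    ])
    print(script)  # a '#!/bin/sh' script with one entry per item
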
diff --git a/cloudinit/config/cc_snappy.py b/cloudinit/config/cc_snappy.py
index a9682f1..eecb817 100644
--- a/cloudinit/config/cc_snappy.py
+++ b/cloudinit/config/cc_snappy.py
@@ -63,11 +63,11 @@ is ``auto``. Options are:
 
 from cloudinit import log as logging
 from cloudinit.settings import PER_INSTANCE
+from cloudinit import temp_utils
 from cloudinit import util
 
 import glob
 import os
-import tempfile
 
 LOG = logging.getLogger(__name__)
 
@@ -183,7 +183,7 @@ def render_snap_op(op, name, path=None, cfgfile=None, config=None):
             #      config
             # Note, however, we do not touch config files on disk.
             nested_cfg = {'config': {shortname: config}}
-            (fd, cfg_tmpf) = tempfile.mkstemp()
+            (fd, cfg_tmpf) = temp_utils.mkstemp()
             os.write(fd, util.yaml_dumps(nested_cfg).encode())
             os.close(fd)
             cfgfile = cfg_tmpf
diff --git a/cloudinit/config/cc_ssh_authkey_fingerprints.py b/cloudinit/config/cc_ssh_authkey_fingerprints.py
index 0066e97..35d8c57 100755
--- a/cloudinit/config/cc_ssh_authkey_fingerprints.py
+++ b/cloudinit/config/cc_ssh_authkey_fingerprints.py
@@ -28,7 +28,7 @@ the keys can be specified, but defaults to ``md5``.
 import base64
 import hashlib
 
-from prettytable import PrettyTable
+from cloudinit.simpletable import SimpleTable
 
 from cloudinit.distros import ug_util
 from cloudinit import ssh_util
@@ -74,7 +74,7 @@ def _pprint_key_entries(user, key_fn, key_entries, hash_meth='md5',
         return
     tbl_fields = ['Keytype', 'Fingerprint (%s)' % (hash_meth), 'Options',
                   'Comment']
-    tbl = PrettyTable(tbl_fields)
+    tbl = SimpleTable(tbl_fields)
     for entry in key_entries:
         if _is_printable_key(entry):
             row = []
diff --git a/cloudinit/config/cc_ubuntu_init_switch.py b/cloudinit/config/cc_ubuntu_init_switch.py
deleted file mode 100644
index 5dd2690..0000000
--- a/cloudinit/config/cc_ubuntu_init_switch.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Copyright (C) 2014 Canonical Ltd.
-#
-# Author: Scott Moser <scott.moser@xxxxxxxxxxxxx>
-#
-# This file is part of cloud-init. See LICENSE file for license information.
-
-"""
-Ubuntu Init Switch
-------------------
-**Summary:** reboot system into another init.
-
-This module provides a way for the user to boot with systemd even if the image
-is set to boot with upstart. It should be run as one of the first
-``cloud_init_modules``, and will switch the init system and then issue a
-reboot. The next boot will come up in the target init system and no action
-will be taken. This should be inert on non-ubuntu systems, and also
-exit quickly.
-
-.. note::
-    best effort is made, but it's possible this system will break, and probably
-    won't interact well with any other mechanism you've used to switch the init
-    system.
-
-**Internal name:** ``cc_ubuntu_init_switch``
-
-**Module frequency:** once per instance
-
-**Supported distros:** ubuntu
-
-**Config keys**::
-
-    init_switch:
-      target: systemd (can be 'systemd' or 'upstart')
-      reboot: true (reboot if a change was made, or false to not reboot)
-"""
-
-from cloudinit.distros import ubuntu
-from cloudinit import log as logging
-from cloudinit.settings import PER_INSTANCE
-from cloudinit import util
-
-import os
-import time
-
-frequency = PER_INSTANCE
-REBOOT_CMD = ["/sbin/reboot", "--force"]
-
-DEFAULT_CONFIG = {
-    'init_switch': {'target': None, 'reboot': True}
-}
-
-SWITCH_INIT = """
-#!/bin/sh
-# switch_init: [upstart | systemd]
-
-is_systemd() {
-   [ "$(dpkg-divert --listpackage /sbin/init)" = "systemd-sysv" ]
-}
-debug() { echo "$@" 1>&2; }
-fail() { echo "$@" 1>&2; exit 1; }
-
-if [ "$1" = "systemd" ]; then
-   if is_systemd; then
-      debug "already systemd, nothing to do"
-   else
-      [ -f /lib/systemd/systemd ] || fail "no systemd available";
-      dpkg-divert --package systemd-sysv --divert /sbin/init.diverted \\
-          --rename /sbin/init
-   fi
-   [ -f /sbin/init ] || ln /lib/systemd/systemd /sbin/init
-elif [ "$1" = "upstart" ]; then
-   if is_systemd; then
-      rm -f /sbin/init
-      dpkg-divert --package systemd-sysv --rename --remove /sbin/init
-   else
-      debug "already upstart, nothing to do."
-   fi
-else
-  fail "Error. expect 'upstart' or 'systemd'"
-fi
-"""
-
-distros = ['ubuntu']
-
-
-def handle(name, cfg, cloud, log, args):
-    """Handler method activated by cloud-init."""
-
-    if not isinstance(cloud.distro, ubuntu.Distro):
-        log.debug("%s: distro is '%s', not ubuntu. returning",
-                  name, cloud.distro.__class__)
-        return
-
-    cfg = util.mergemanydict([cfg, DEFAULT_CONFIG])
-    target = cfg['init_switch']['target']
-    reboot = cfg['init_switch']['reboot']
-
-    if len(args) != 0:
-        target = args[0]
-        if len(args) > 1:
-            reboot = util.is_true(args[1])
-
-    if not target:
-        log.debug("%s: target=%s. nothing to do", name, target)
-        return
-
-    if not util.which('dpkg'):
-        log.warn("%s: 'dpkg' not available. Assuming not ubuntu", name)
-        return
-
-    supported = ('upstart', 'systemd')
-    if target not in supported:
-        log.warn("%s: target set to %s, expected one of: %s",
-                 name, target, str(supported))
-
-    if os.path.exists("/run/systemd/system"):
-        current = "systemd"
-    else:
-        current = "upstart"
-
-    if current == target:
-        log.debug("%s: current = target = %s. nothing to do", name, target)
-        return
-
-    try:
-        util.subp(['sh', '-s', target], data=SWITCH_INIT)
-    except util.ProcessExecutionError as e:
-        log.warn("%s: Failed to switch to init '%s'. %s", name, target, e)
-        return
-
-    if util.is_false(reboot):
-        log.info("%s: switched '%s' to '%s'. reboot=false, not rebooting.",
-                 name, current, target)
-        return
-
-    try:
-        log.warn("%s: switched '%s' to '%s'. rebooting.",
-                 name, current, target)
-        logging.flushLoggers(log)
-        _fire_reboot(log, wait_attempts=4, initial_sleep=4)
-    except Exception as e:
-        util.logexc(log, "Requested reboot did not happen!")
-        raise
-
-
-def _fire_reboot(log, wait_attempts=6, initial_sleep=1, backoff=2):
-    util.subp(REBOOT_CMD)
-    start = time.time()
-    wait_time = initial_sleep
-    for _i in range(0, wait_attempts):
-        time.sleep(wait_time)
-        wait_time *= backoff
-        elapsed = time.time() - start
-        log.debug("Rebooted, but still running after %s seconds", int(elapsed))
-    # If we got here, not good
-    elapsed = time.time() - start
-    raise RuntimeError(("Reboot did not happen"
-                        " after %s seconds!") % (int(elapsed)))
-
-# vi: ts=4 expandtab
diff --git a/cloudinit/config/cc_zypper_add_repo.py b/cloudinit/config/cc_zypper_add_repo.py
new file mode 100644
index 0000000..aba2695
--- /dev/null
+++ b/cloudinit/config/cc_zypper_add_repo.py
@@ -0,0 +1,218 @@
+#
+#    Copyright (C) 2017 SUSE LLC.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""zypper_add_repo: Add zypper repositories to the system"""
+
+import configobj
+import os
+from six import string_types
+from textwrap import dedent
+
+from cloudinit.config.schema import get_schema_doc
+from cloudinit import log as logging
+from cloudinit.settings import PER_ALWAYS
+from cloudinit import util
+
+distros = ['opensuse', 'sles']
+
+schema = {
+    'id': 'cc_zypper_add_repo',
+    'name': 'ZypperAddRepo',
+    'title': 'Configure zypper behavior and add zypper repositories',
+    'description': dedent("""\
+        Configure zypper behavior by modifying /etc/zypp/zypp.conf. The
+        configuration writer is "dumb" and will simply append the provided
+        configuration options to the configuration file. Duplicate option
+        settings are resolved by the way the zypp.conf file is parsed. The
+        file is in INI format.
+        Add repositories to the system. No validation is performed on the
+        repository file entries, it is assumed the user is familiar with
+        the zypper repository file format."""),
+    'distros': distros,
+    'examples': [dedent("""\
+        zypper:
+          repos:
+            - id: opensuse-oss
+              name: os-oss
+              baseurl: http://dl.opensuse.org/dist/leap/v/repo/oss/
+              enabled: 1
+              autorefresh: 1
+            - id: opensuse-oss-update
+              name: os-oss-up
+              baseurl: http://dl.opensuse.org/dist/leap/v/update
+              # any setting per
+              # https://en.opensuse.org/openSUSE:Standards_RepoInfo
+              # enable and autorefresh are on by default
+          config:
+            reposdir: /etc/zypp/repos.dir
+            servicesdir: /etc/zypp/services.d
+            download.use_deltarpm: true
+            # any setting in /etc/zypp/zypp.conf
+    """)],
+    'frequency': PER_ALWAYS,
+    'type': 'object',
+    'properties': {
+        'zypper': {
+            'type': 'object',
+            'properties': {
+                'repos': {
+                    'type': 'array',
+                    'items': {
+                        'type': 'object',
+                        'properties': {
+                            'id': {
+                                'type': 'string',
+                                'description': dedent("""\
+                                    The unique id of the repo, used when
+                                     writing
+                                    /etc/zypp/repos.d/<id>.repo.""")
+                            },
+                            'baseurl': {
+                                'type': 'string',
+                                'format': 'uri',   # built-in format type
+                                'description': 'The base repository URL'
+                            }
+                        },
+                        'required': ['id', 'baseurl'],
+                        'additionalProperties': True
+                    },
+                    'minItems': 1
+                },
+                'config': {
+                    'type': 'object',
+                    'description': dedent("""\
+                        Any supported zypp.conf key is written to
+                        /etc/zypp/zypp.conf""")
+                }
+            },
+            'required': [],
+            'minProperties': 1,  # Either config or repo must be provided
+            'additionalProperties': False,  # only repos and config allowed
+        }
+    }
+}
+
+__doc__ = get_schema_doc(schema)  # Supplement python help()
+
+LOG = logging.getLogger(__name__)
+
+
+def _canonicalize_id(repo_id):
+    repo_id = repo_id.replace(" ", "_")
+    return repo_id
+
+
+def _format_repo_value(val):
+    if isinstance(val, bool):
+        # zypp prefers 1/0
+        return 1 if val else 0
+    if isinstance(val, (list, tuple)):
+        return "\n    ".join([_format_repo_value(v) for v in val])
+    if not isinstance(val, string_types):
+        return str(val)
+    return val
+
+
+def _format_repository_config(repo_id, repo_config):
+    to_be = configobj.ConfigObj()
+    to_be[repo_id] = {}
+    # Do basic translation of the items -> values
+    for (k, v) in repo_config.items():
+        # For now assume that people using this know the format
+        # of zypper repos and don't verify keys/values further
+        to_be[repo_id][k] = _format_repo_value(v)
+    lines = to_be.write()
+    return "\n".join(lines)
+
+
+def _write_repos(repos, repo_base_path):
+    """Write the user-provided repo definition files
+    @param repos: A list of repo dictionary objects provided by the user's
+        cloud config.
+    @param repo_base_path: The directory path to which repo definitions are
+        written.
+    """
+
+    if not repos:
+        return
+    valid_repos = {}
+    for index, user_repo_config in enumerate(repos):
+        # Skip on absent required keys
+        missing_keys = set(['id', 'baseurl']).difference(set(user_repo_config))
+        if missing_keys:
+            LOG.warning(
+                "Repo config at index %d is missing required config keys: %s",
+                index, ",".join(missing_keys))
+            continue
+        repo_id = user_repo_config.get('id')
+        canon_repo_id = _canonicalize_id(repo_id)
+        repo_fn_pth = os.path.join(repo_base_path, "%s.repo" % (canon_repo_id))
+        if os.path.exists(repo_fn_pth):
+            LOG.info("Skipping repo %s, file %s already exists!",
+                     repo_id, repo_fn_pth)
+            continue
+        elif repo_id in valid_repos:
+            LOG.info("Skipping repo %s, file %s already pending!",
+                     repo_id, repo_fn_pth)
+            continue
+
+        # Do some basic key formatting
+        repo_config = dict(
+            (k.lower().strip().replace("-", "_"), v)
+            for k, v in user_repo_config.items()
+            if k and k != 'id')
+
+        # Set defaults if not present
+        for field in ['enabled', 'autorefresh']:
+            if field not in repo_config:
+                repo_config[field] = '1'
+
+        valid_repos[repo_id] = (repo_fn_pth, repo_config)
+
+    for (repo_id, repo_data) in valid_repos.items():
+        repo_blob = _format_repository_config(repo_id, repo_data[-1])
+        util.write_file(repo_data[0], repo_blob)
+
+
+def _write_zypp_config(zypper_config):
+    """Write to the default zypp configuration file /etc/zypp/zypp.conf"""
+    if not zypper_config:
+        return
+    zypp_config = '/etc/zypp/zypp.conf'
+    zypp_conf_content = util.load_file(zypp_config)
+    new_settings = ['# Added via cloud.cfg']
+    for setting, value in zypper_config.items():
+        if setting == 'configdir':
+            msg = 'Changing the location of the zypper configuration is '
+            msg += 'not supported, skipping "configdir" setting'
+            LOG.warning(msg)
+            continue
+        if value:
+            new_settings.append('%s=%s' % (setting, value))
+    if len(new_settings) > 1:
+        new_config = zypp_conf_content + '\n'.join(new_settings)
+    else:
+        new_config = zypp_conf_content
+    util.write_file(zypp_config, new_config)
+
+
+def handle(name, cfg, _cloud, log, _args):
+    zypper_section = cfg.get('zypper')
+    if not zypper_section:
+        LOG.debug(("Skipping module named %s,"
+                   " no 'zypper' relevant configuration found"), name)
+        return
+    repos = zypper_section.get('repos')
+    if not repos:
+        LOG.debug(("Skipping module named %s,"
+                   " no 'repos' configuration found"), name)
+        return
+    zypper_config = zypper_section.get('config', {})
+    repo_base_path = zypper_config.get('reposdir', '/etc/zypp/repos.d/')
+
+    _write_zypp_config(zypper_config)
+    _write_repos(repos, repo_base_path)
+
+# vi: ts=4 expandtab
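
As a rough illustration of the repo rendering above, a minimal sketch
(assuming the module lands as cloudinit.config.cc_zypper_add_repo; the repo
id and URL are made up, and key order may vary on older Pythons):

    from cloudinit.config.cc_zypper_add_repo import _format_repository_config

    print(_format_repository_config('opensuse_oss', {
        'baseurl': 'http://download.opensuse.org/tumbleweed/repo/oss/',
        'enabled': '1',
        'autorefresh': '1',
    }))
    # [opensuse_oss]
    # baseurl = http://download.opensuse.org/tumbleweed/repo/oss/
    # enabled = 1
    # autorefresh = 1
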
diff --git a/cloudinit/config/schema.py b/cloudinit/config/schema.py
index 6400f00..bb291ff 100644
--- a/cloudinit/config/schema.py
+++ b/cloudinit/config/schema.py
@@ -3,19 +3,24 @@
 
 from __future__ import print_function
 
-from cloudinit.util import read_file_or_url
+from cloudinit import importer
+from cloudinit.util import find_modules, read_file_or_url
 
 import argparse
+from collections import defaultdict
+from copy import deepcopy
 import logging
 import os
+import re
 import sys
 import yaml
 
+_YAML_MAP = {True: 'true', False: 'false', None: 'null'}
 SCHEMA_UNDEFINED = b'UNDEFINED'
 CLOUD_CONFIG_HEADER = b'#cloud-config'
 SCHEMA_DOC_TMPL = """
 {name}
----
+{title_underbar}
 **Summary:** {title}
 
 {description}
@@ -31,6 +36,8 @@ SCHEMA_DOC_TMPL = """
 {examples}
 """
 SCHEMA_PROPERTY_TMPL = '{prefix}**{prop_name}:** ({type}) {description}'
+SCHEMA_EXAMPLES_HEADER = '\n**Examples**::\n\n'
+SCHEMA_EXAMPLES_SPACER_TEMPLATE = '\n    # --- Example{0} ---'
 
 
 class SchemaValidationError(ValueError):
@@ -83,11 +90,49 @@ def validate_cloudconfig_schema(config, schema, strict=False):
             logging.warning('Invalid config:\n%s', '\n'.join(messages))
 
 
-def validate_cloudconfig_file(config_path, schema):
+def annotated_cloudconfig_file(cloudconfig, original_content, schema_errors):
+    """Return contents of the cloud-config file annotated with schema errors.
+
+    @param cloudconfig: YAML-loaded object from the original_content.
+    @param original_content: The contents of a cloud-config file
+    @param schema_errors: List of tuples from a JSONSchemaValidationError. The
+        tuples consist of (schemapath, error_message).
+    """
+    if not schema_errors:
+        return original_content
+    schemapaths = _schemapath_for_cloudconfig(cloudconfig, original_content)
+    errors_by_line = defaultdict(list)
+    error_count = 1
+    error_footer = []
+    annotated_content = []
+    for path, msg in schema_errors:
+        errors_by_line[schemapaths[path]].append(msg)
+        error_footer.append('# E{0}: {1}'.format(error_count, msg))
+        error_count += 1
+    lines = original_content.decode().split('\n')
+    error_count = 1
+    for line_number, line in enumerate(lines):
+        errors = errors_by_line[line_number + 1]
+        if errors:
+            error_label = ','.join(
+                ['E{0}'.format(count + error_count)
+                 for count in range(0, len(errors))])
+            error_count += len(errors)
+            annotated_content.append(line + '\t\t# ' + error_label)
+        else:
+            annotated_content.append(line)
+    annotated_content.append(
+        '# Errors: -------------\n{0}\n\n'.format('\n'.join(error_footer)))
+    return '\n'.join(annotated_content)
+
+
+def validate_cloudconfig_file(config_path, schema, annotate=False):
     """Validate cloudconfig file adheres to a specific jsonschema.
 
     @param config_path: Path to the yaml cloud-config file to parse.
     @param schema: Dict describing a valid jsonschema to validate against.
+    @param annotate: Boolean set True to print original config file with error
+        annotations on the offending lines.
 
     @raises SchemaValidationError containing any of schema_errors encountered.
     @raises RuntimeError when config_path does not exist.
@@ -108,18 +153,83 @@ def validate_cloudconfig_file(config_path, schema):
             ('format', 'File {0} is not valid yaml. {1}'.format(
                 config_path, str(e))),)
         raise SchemaValidationError(errors)
-    validate_cloudconfig_schema(
-        cloudconfig, schema, strict=True)
+
+    try:
+        validate_cloudconfig_schema(
+            cloudconfig, schema, strict=True)
+    except SchemaValidationError as e:
+        if annotate:
+            print(annotated_cloudconfig_file(
+                cloudconfig, content, e.schema_errors))
+        raise
+
+
+def _schemapath_for_cloudconfig(config, original_content):
+    """Return a dictionary mapping schemapath to original_content line number.
+
+    @param config: The yaml.loaded config dictionary of a cloud-config file.
+    @param original_content: The simple file content of the cloud-config file
+    """
+    # FIXME Doesn't handle multi-line lists or multi-line strings
+    content_lines = original_content.decode().split('\n')
+    schema_line_numbers = {}
+    list_index = 0
+    RE_YAML_INDENT = r'^(\s*)'
+    scopes = []
+    for line_number, line in enumerate(content_lines):
+        indent_depth = len(re.match(RE_YAML_INDENT, line).groups()[0])
+        line = line.strip()
+        if not line or line.startswith('#'):
+            continue
+        if scopes:
+            previous_depth, path_prefix = scopes[-1]
+        else:
+            previous_depth = -1
+            path_prefix = ''
+        if line.startswith('- '):
+            key = str(list_index)
+            value = line[1:]
+            list_index += 1
+        else:
+            list_index = 0
+            key, value = line.split(':', 1)
+        while indent_depth <= previous_depth:
+            if scopes:
+                previous_depth, path_prefix = scopes.pop()
+            else:
+                previous_depth = -1
+                path_prefix = ''
+        if path_prefix:
+            key = path_prefix + '.' + key
+        scopes.append((indent_depth, key))
+        if value:
+            value = value.strip()
+            if value.startswith('['):
+                scopes.append((indent_depth + 2, key + '.0'))
+                for inner_list_index in range(0, len(yaml.safe_load(value))):
+                    list_key = key + '.' + str(inner_list_index)
+                    schema_line_numbers[list_key] = line_number + 1
+        schema_line_numbers[key] = line_number + 1
+    return schema_line_numbers
 
 
 def _get_property_type(property_dict):
     """Return a string representing a property type from a given jsonschema."""
     property_type = property_dict.get('type', SCHEMA_UNDEFINED)
+    if property_type == SCHEMA_UNDEFINED and property_dict.get('enum'):
+        property_type = [
+            str(_YAML_MAP.get(k, k)) for k in property_dict['enum']]
     if isinstance(property_type, list):
         property_type = '/'.join(property_type)
-    item_type = property_dict.get('items', {}).get('type')
-    if item_type:
-        property_type = '{0} of {1}'.format(property_type, item_type)
+    items = property_dict.get('items', {})
+    sub_property_type = items.get('type', '')
+    # Collect each item type
+    for sub_item in items.get('oneOf', {}):
+        if sub_property_type:
+            sub_property_type += '/'
+        sub_property_type += '(' + _get_property_type(sub_item) + ')'
+    if sub_property_type:
+        return '{0} of {1}'.format(property_type, sub_property_type)
     return property_type
 
 
@@ -146,12 +256,14 @@ def _get_schema_examples(schema, prefix=''):
     examples = schema.get('examples')
     if not examples:
         return ''
-    rst_content = '\n**Examples**::\n\n'
-    for example in examples:
-        example_yaml = yaml.dump(example, default_flow_style=False)
+    rst_content = SCHEMA_EXAMPLES_HEADER
+    for count, example in enumerate(examples):
         # Python2.6 is missing textwrapper.indent
-        lines = example_yaml.split('\n')
+        lines = example.split('\n')
         indented_lines = ['    {0}'.format(line) for line in lines]
+        if rst_content != SCHEMA_EXAMPLES_HEADER:
+            indented_lines.insert(
+                0, SCHEMA_EXAMPLES_SPACER_TEMPLATE.format(count + 1))
         rst_content += '\n'.join(indented_lines)
     return rst_content
 
@@ -162,61 +274,87 @@ def get_schema_doc(schema):
     @param schema: Dict of jsonschema to render.
     @raise KeyError: If schema lacks an expected key.
     """
-    schema['property_doc'] = _get_property_doc(schema)
-    schema['examples'] = _get_schema_examples(schema)
-    schema['distros'] = ', '.join(schema['distros'])
-    return SCHEMA_DOC_TMPL.format(**schema)
-
-
-def get_schema(section_key=None):
-    """Return a dict of jsonschema defined in any cc_* module.
-
-    @param: section_key: Optionally limit schema to a specific top-level key.
-    """
-    # TODO use util.find_modules in subsequent branch
-    from cloudinit.config.cc_ntp import schema
-    return schema
+    schema_copy = deepcopy(schema)
+    schema_copy['property_doc'] = _get_property_doc(schema)
+    schema_copy['examples'] = _get_schema_examples(schema)
+    schema_copy['distros'] = ', '.join(schema['distros'])
+    # Need an underbar of the same length as the name
+    schema_copy['title_underbar'] = re.sub(r'.', '-', schema['name'])
+    return SCHEMA_DOC_TMPL.format(**schema_copy)
+
+
+FULL_SCHEMA = None
+
+
+def get_schema():
+    """Return jsonschema coalesced from all cc_* cloud-config modules."""
+    global FULL_SCHEMA
+    if FULL_SCHEMA:
+        return FULL_SCHEMA
+    full_schema = {
+        '$schema': 'http://json-schema.org/draft-04/schema#',
+        'id': 'cloud-config-schema', 'allOf': []}
+
+    configs_dir = os.path.dirname(os.path.abspath(__file__))
+    potential_handlers = find_modules(configs_dir)
+    for (fname, mod_name) in potential_handlers.items():
+        mod_locs, looked_locs = importer.find_module(
+            mod_name, ['cloudinit.config'], ['schema'])
+        if mod_locs:
+            mod = importer.import_module(mod_locs[0])
+            full_schema['allOf'].append(mod.schema)
+    FULL_SCHEMA = full_schema
+    return full_schema
 
 
 def error(message):
     print(message, file=sys.stderr)
-    return 1
+    sys.exit(1)
 
 
-def get_parser():
+def get_parser(parser=None):
     """Return a parser for supported cmdline arguments."""
-    parser = argparse.ArgumentParser()
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog='cloudconfig-schema',
+            description='Validate cloud-config files or document schema')
     parser.add_argument('-c', '--config-file',
                         help='Path of the cloud-config yaml file to validate')
     parser.add_argument('-d', '--doc', action="store_true", default=False,
                         help='Print schema documentation')
-    parser.add_argument('-k', '--key',
-                        help='Limit validation or docs to a section key')
+    parser.add_argument('--annotate', action="store_true", default=False,
+                        help='Annotate existing cloud-config file with errors')
     return parser
 
 
-def main():
-    """Tool to validate schema of a cloud-config file or print schema docs."""
-    parser = get_parser()
-    args = parser.parse_args()
+def handle_schema_args(name, args):
+    """Handle provided schema args and perform the appropriate actions."""
     exclusive_args = [args.config_file, args.doc]
     if not any(exclusive_args) or all(exclusive_args):
-        return error('Expected either --config-file argument or --doc')
-
-    schema = get_schema()
+        error('Expected either --config-file argument or --doc')
+    full_schema = get_schema()
     if args.config_file:
         try:
-            validate_cloudconfig_file(args.config_file, schema)
+            validate_cloudconfig_file(
+                args.config_file, full_schema, args.annotate)
         except (SchemaValidationError, RuntimeError) as e:
-            return error(str(e))
-        print("Valid cloud-config file {0}".format(args.config_file))
+            if not args.annotate:
+                error(str(e))
+        else:
+            print("Valid cloud-config file {0}".format(args.config_file))
     if args.doc:
-        print(get_schema_doc(schema))
+        for subschema in full_schema['allOf']:
+            print(get_schema_doc(subschema))
+
+
+def main():
+    """Tool to validate schema of a cloud-config file or print schema docs."""
+    parser = get_parser()
+    handle_schema_args('cloudconfig-schema', parser.parse_args())
     return 0
 
 
 if __name__ == '__main__':
     sys.exit(main())
 
-
 # vi: ts=4 expandtab
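
The new --annotate flow can also be driven from Python; a minimal sketch,
assuming a local user-data file (the path is illustrative):

    from cloudinit.config.schema import (
        SchemaValidationError, get_schema, validate_cloudconfig_file)

    try:
        validate_cloudconfig_file('my-user-data.yaml', get_schema(),
                                  annotate=True)
    except SchemaValidationError:
        pass  # the annotated file content was already printed
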
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index 1fd48a7..d5becd1 100755
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -30,12 +30,16 @@ from cloudinit import util
 from cloudinit.distros.parsers import hosts
 
 
+# Used when a cloud-config module can be run on all cloud-init distributions.
+# The value 'all' is surfaced in module documentation for distro support.
+ALL_DISTROS = 'all'
+
 OSFAMILIES = {
     'debian': ['debian', 'ubuntu'],
     'redhat': ['centos', 'fedora', 'rhel'],
     'gentoo': ['gentoo'],
     'freebsd': ['freebsd'],
-    'suse': ['sles'],
+    'suse': ['opensuse', 'sles'],
     'arch': ['arch'],
 }
 
@@ -188,6 +192,9 @@ class Distro(object):
     def _get_localhost_ip(self):
         return "127.0.0.1"
 
+    def get_locale(self):
+        raise NotImplementedError()
+
     @abc.abstractmethod
     def _read_hostname(self, filename, default=None):
         raise NotImplementedError()
diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py
index b4c0ba7..f87a343 100644
--- a/cloudinit/distros/arch.py
+++ b/cloudinit/distros/arch.py
@@ -14,6 +14,8 @@ from cloudinit.distros.parsers.hostname import HostnameConf
 
 from cloudinit.settings import PER_INSTANCE
 
+import os
+
 LOG = logging.getLogger(__name__)
 
 
@@ -52,31 +54,10 @@ class Distro(distros.Distro):
         entries = net_util.translate_network(settings)
         LOG.debug("Translated ubuntu style network settings %s into %s",
                   settings, entries)
-        dev_names = entries.keys()
-        # Format for netctl
-        for (dev, info) in entries.items():
-            nameservers = []
-            net_fn = self.network_conf_dir + dev
-            net_cfg = {
-                'Connection': 'ethernet',
-                'Interface': dev,
-                'IP': info.get('bootproto'),
-                'Address': "('%s/%s')" % (info.get('address'),
-                                          info.get('netmask')),
-                'Gateway': info.get('gateway'),
-                'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '')
-            }
-            util.write_file(net_fn, convert_netctl(net_cfg))
-            if info.get('auto'):
-                self._enable_interface(dev)
-            if 'dns-nameservers' in info:
-                nameservers.extend(info['dns-nameservers'])
-
-        if nameservers:
-            util.write_file(self.resolve_conf_fn,
-                            convert_resolv_conf(nameservers))
-
-        return dev_names
+        return _render_network(
+            entries, resolv_conf=self.resolve_conf_fn,
+            conf_dir=self.network_conf_dir,
+            enable_func=self._enable_interface)
 
     def _enable_interface(self, device_name):
         cmd = ['netctl', 'reenable', device_name]
@@ -173,13 +154,60 @@ class Distro(distros.Distro):
                          ["-y"], freq=PER_INSTANCE)
 
 
+def _render_network(entries, target="/", conf_dir="etc/netctl",
+                    resolv_conf="etc/resolv.conf", enable_func=None):
+    """Render the translate_network format into netctl files in target.
+    Paths will be rendered under target.
+    """
+
+    devs = []
+    nameservers = []
+    resolv_conf = util.target_path(target, resolv_conf)
+    conf_dir = util.target_path(target, conf_dir)
+
+    for (dev, info) in entries.items():
+        if dev == 'lo':
+            # no configuration should be rendered for 'lo'
+            continue
+        devs.append(dev)
+        net_fn = os.path.join(conf_dir, dev)
+        net_cfg = {
+            'Connection': 'ethernet',
+            'Interface': dev,
+            'IP': info.get('bootproto'),
+            'Address': "%s/%s" % (info.get('address'),
+                                  info.get('netmask')),
+            'Gateway': info.get('gateway'),
+            'DNS': info.get('dns-nameservers', []),
+        }
+        util.write_file(net_fn, convert_netctl(net_cfg))
+        if enable_func and info.get('auto'):
+            enable_func(dev)
+        if 'dns-nameservers' in info:
+            nameservers.extend(info['dns-nameservers'])
+
+    if nameservers:
+        util.write_file(resolv_conf,
+                        convert_resolv_conf(nameservers))
+    return devs
+
+
 def convert_netctl(settings):
-    """Returns a settings string formatted for netctl."""
-    result = ''
-    if isinstance(settings, dict):
-        for k, v in settings.items():
-            result = result + '%s=%s\n' % (k, v)
-        return result
+    """Given a dictionary, returns a string in netctl profile format.
+
+    netctl profile is described at:
+    https://git.archlinux.org/netctl.git/tree/docs/netctl.profile.5.txt
+
+    Note that the 'Special Quoting Rules' are not handled here."""
+    result = []
+    for key in sorted(settings):
+        val = settings[key]
+        if val is None:
+            val = ""
+        elif isinstance(val, (tuple, list)):
+            val = "(" + ' '.join("'%s'" % v for v in val) + ")"
+        result.append("%s=%s\n" % (key, val))
+    return ''.join(result)
 
 
 def convert_resolv_conf(settings):
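
For reference, the rewritten convert_netctl renders a dict into a sorted
netctl profile; a small sketch with illustrative values:

    from cloudinit.distros.arch import convert_netctl

    print(convert_netctl({
        'Connection': 'ethernet',
        'Interface': 'eth0',
        'IP': 'static',
        'Address': '192.168.1.5/24',
        'DNS': ['192.168.1.1'],
    }))
    # Address=192.168.1.5/24
    # Connection=ethernet
    # DNS=('192.168.1.1')
    # IP=static
    # Interface=eth0
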
diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
index abfb81f..33cc0bf 100644
--- a/cloudinit/distros/debian.py
+++ b/cloudinit/distros/debian.py
@@ -61,11 +61,49 @@ class Distro(distros.Distro):
         # should only happen say once per instance...)
         self._runner = helpers.Runners(paths)
         self.osfamily = 'debian'
+        self.default_locale = 'en_US.UTF-8'
+        self.system_locale = None
 
-    def apply_locale(self, locale, out_fn=None):
+    def get_locale(self):
+        """Return the system locale if set, else the distro default locale."""
+
+        # read system locale value
+        if not self.system_locale:
+            self.system_locale = read_system_locale()
+
+        # Return system_locale setting if valid, else use default locale
+        return (self.system_locale if self.system_locale else
+                self.default_locale)
+
+    def apply_locale(self, locale, out_fn=None, keyname='LANG'):
+        """Apply the specified locale to the system, regenerating it if
+        the specified locale differs from the system default."""
         if not out_fn:
             out_fn = LOCALE_CONF_FN
-        apply_locale(locale, out_fn)
+
+        if not locale:
+            raise ValueError('Failed to provide locale value.')
+
+        # Only call locale regeneration if needed
+        # Update system locale config with specified locale if needed
+        distro_locale = self.get_locale()
+        conf_fn_exists = os.path.exists(out_fn)
+        sys_locale_unset = False if self.system_locale else True
+        need_regen = (locale.lower() != distro_locale.lower() or
+                      not conf_fn_exists or sys_locale_unset)
+        need_conf = not conf_fn_exists or need_regen or sys_locale_unset
+
+        if need_regen:
+            regenerate_locale(locale, out_fn, keyname=keyname)
+        else:
+            LOG.debug(
+                "System has '%s=%s' requested '%s', skipping regeneration.",
+                keyname, self.system_locale, locale)
+
+        if need_conf:
+            update_locale_conf(locale, out_fn, keyname=keyname)
+            # once we've updated the system config, invalidate cache
+            self.system_locale = None
 
     def install_packages(self, pkglist):
         self.update_package_sources()
@@ -218,37 +256,47 @@ def _maybe_remove_legacy_eth0(path="/etc/network/interfaces.d/eth0.cfg"):
     LOG.warning(msg)
 
 
-def apply_locale(locale, sys_path=LOCALE_CONF_FN, keyname='LANG'):
-    """Apply the locale.
-
-    Run locale-gen for the provided locale and set the default
-    system variable `keyname` appropriately in the provided `sys_path`.
-
-    If sys_path indicates that `keyname` is already set to `locale`
-    then no changes will be made and locale-gen not called.
-    This allows images built with a locale already generated to not re-run
-    locale-gen which can be very heavy.
-    """
-    if not locale:
-        raise ValueError('Failed to provide locale value.')
-
+def read_system_locale(sys_path=LOCALE_CONF_FN, keyname='LANG'):
+    """Read system default locale setting, if present"""
+    sys_val = ""
     if not sys_path:
         raise ValueError('Invalid path: %s' % sys_path)
 
     if os.path.exists(sys_path):
         locale_content = util.load_file(sys_path)
-        # if LANG isn't present, regen
         sys_defaults = util.load_shell_content(locale_content)
         sys_val = sys_defaults.get(keyname, "")
-        if sys_val.lower() == locale.lower():
-            LOG.debug(
-                "System has '%s=%s' requested '%s', skipping regeneration.",
-                keyname, sys_val, locale)
-            return
 
-    util.subp(['locale-gen', locale], capture=False)
+    return sys_val
+
+
+def update_locale_conf(locale, sys_path, keyname='LANG'):
+    """Update system locale config"""
+    LOG.debug('Updating %s with locale setting %s=%s',
+              sys_path, keyname, locale)
     util.subp(
         ['update-locale', '--locale-file=' + sys_path,
          '%s=%s' % (keyname, locale)], capture=False)
 
+
+def regenerate_locale(locale, sys_path, keyname='LANG'):
+    """Run locale-gen for the provided locale.
+
+    Locales which are always available (C, C.UTF-8, POSIX) are skipped,
+    as they do not require regeneration.
+    """
+    # special case for locales which do not require regen
+    # % locale -a
+    # C
+    # C.UTF-8
+    # POSIX
+    if locale.lower() in ['c', 'c.utf-8', 'posix']:
+        LOG.debug('%s=%s does not require regeneration', keyname, locale)
+        return
+
+    # finally, trigger regeneration
+    LOG.debug('Generating locales for %s', locale)
+    util.subp(['locale-gen', locale], capture=False)
+
+
 # vi: ts=4 expandtab
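
A short sketch of the locale split above, assuming a Debian-family system
(the locale values and conf path are illustrative):

    from cloudinit.distros.debian import read_system_locale, regenerate_locale

    sys_locale = read_system_locale()  # e.g. 'en_US.UTF-8' read from LANG
    # C, C.UTF-8 and POSIX are always available, so this only logs:
    regenerate_locale('C.UTF-8', '/etc/default/locale')
    # Any other locale shells out to locale-gen:
    regenerate_locale('de_DE.UTF-8', '/etc/default/locale')
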
diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
new file mode 100644
index 0000000..a219e9f
--- /dev/null
+++ b/cloudinit/distros/opensuse.py
@@ -0,0 +1,212 @@
+#    Copyright (C) 2017 SUSE LLC
+#    Copyright (C) 2013 Hewlett-Packard Development Company, L.P.
+#
+#    Author: Robert Schweikert <rjschwei@xxxxxxxx>
+#    Author: Juerg Haefliger <juerg.haefliger@xxxxxx>
+#
+#    Leaning very heavily on the RHEL and Debian implementation
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit import distros
+
+from cloudinit.distros.parsers.hostname import HostnameConf
+
+from cloudinit import helpers
+from cloudinit import log as logging
+from cloudinit import util
+
+from cloudinit.distros import net_util
+from cloudinit.distros import rhel_util as rhutil
+from cloudinit.settings import PER_INSTANCE
+
+LOG = logging.getLogger(__name__)
+
+
+class Distro(distros.Distro):
+    clock_conf_fn = '/etc/sysconfig/clock'
+    hostname_conf_fn = '/etc/HOSTNAME'
+    init_cmd = ['service']
+    locale_conf_fn = '/etc/sysconfig/language'
+    network_conf_fn = '/etc/sysconfig/network'
+    network_script_tpl = '/etc/sysconfig/network/ifcfg-%s'
+    resolve_conf_fn = '/etc/resolv.conf'
+    route_conf_tpl = '/etc/sysconfig/network/ifroute-%s'
+    systemd_hostname_conf_fn = '/etc/hostname'
+    systemd_locale_conf_fn = '/etc/locale.conf'
+    tz_local_fn = '/etc/localtime'
+
+    def __init__(self, name, cfg, paths):
+        distros.Distro.__init__(self, name, cfg, paths)
+        self._runner = helpers.Runners(paths)
+        self.osfamily = 'suse'
+        cfg['ssh_svcname'] = 'sshd'
+        if self.uses_systemd():
+            self.init_cmd = ['systemctl']
+            cfg['ssh_svcname'] = 'sshd.service'
+
+    def apply_locale(self, locale, out_fn=None):
+        if self.uses_systemd():
+            if not out_fn:
+                out_fn = self.systemd_locale_conf_fn
+            locale_cfg = {'LANG': locale}
+        else:
+            if not out_fn:
+                out_fn = self.locale_conf_fn
+            locale_cfg = {'RC_LANG': locale}
+        rhutil.update_sysconfig_file(out_fn, locale_cfg)
+
+    def install_packages(self, pkglist):
+        self.package_command(
+            'install',
+            args='--auto-agree-with-licenses',
+            pkgs=pkglist
+        )
+
+    def package_command(self, command, args=None, pkgs=None):
+        if pkgs is None:
+            pkgs = []
+
+        cmd = ['zypper']
+        # No user interaction possible, enable non-interactive mode
+        cmd.append('--non-interactive')
+
+        # Command is the operation, such as install
+        if command == 'upgrade':
+            command = 'update'
+        cmd.append(command)
+
+        # args are the arguments to the command, not global options
+        if args and isinstance(args, str):
+            cmd.append(args)
+        elif args and isinstance(args, list):
+            cmd.extend(args)
+
+        pkglist = util.expand_package_list('%s-%s', pkgs)
+        cmd.extend(pkglist)
+
+        # Allow the output of this to flow outwards (ie not be captured)
+        util.subp(cmd, capture=False)
+
+    def set_timezone(self, tz):
+        tz_file = self._find_tz_file(tz)
+        if self.uses_systemd():
+            # Currently, timedatectl complains if invoked during startup
+            # so for compatibility, create the link manually.
+            util.del_file(self.tz_local_fn)
+            util.sym_link(tz_file, self.tz_local_fn)
+        else:
+            # Adjust the sysconfig clock zone setting
+            clock_cfg = {
+                'TIMEZONE': str(tz),
+            }
+            rhutil.update_sysconfig_file(self.clock_conf_fn, clock_cfg)
+            # This ensures that the correct tz will be used for the system
+            util.copy(tz_file, self.tz_local_fn)
+
+    def update_package_sources(self):
+        self._runner.run("update-sources", self.package_command,
+                         ['refresh'], freq=PER_INSTANCE)
+
+    def _bring_up_interfaces(self, device_names):
+        if device_names and 'all' in device_names:
+            raise RuntimeError(('Distro %s can not translate '
+                                'the device name "all"') % (self.name))
+        return distros.Distro._bring_up_interfaces(self, device_names)
+
+    def _read_hostname(self, filename, default=None):
+        if self.uses_systemd() and filename.endswith('/previous-hostname'):
+            return util.load_file(filename).strip()
+        elif self.uses_systemd():
+            (out, _err) = util.subp(['hostname'])
+            if len(out):
+                return out
+            else:
+                return default
+        else:
+            try:
+                conf = self._read_hostname_conf(filename)
+                hostname = conf.hostname
+            except IOError:
+                pass
+            if not hostname:
+                return default
+            return hostname
+
+    def _read_hostname_conf(self, filename):
+        conf = HostnameConf(util.load_file(filename))
+        conf.parse()
+        return conf
+
+    def _read_system_hostname(self):
+        if self.uses_systemd():
+            host_fn = self.systemd_hostname_conf_fn
+        else:
+            host_fn = self.hostname_conf_fn
+        return (host_fn, self._read_hostname(host_fn))
+
+    def _write_hostname(self, hostname, out_fn):
+        if self.uses_systemd() and out_fn.endswith('/previous-hostname'):
+            util.write_file(out_fn, hostname)
+        elif self.uses_systemd():
+            util.subp(['hostnamectl', 'set-hostname', str(hostname)])
+        else:
+            conf = None
+            try:
+                # Try to update the previous one
+                # so lets see if we can read it first.
+                conf = self._read_hostname_conf(out_fn)
+            except IOError:
+                pass
+            if not conf:
+                conf = HostnameConf('')
+            conf.set_hostname(hostname)
+            util.write_file(out_fn, str(conf), 0o644)
+
+    def _write_network(self, settings):
+        # Convert debian settings to ifcfg format
+        entries = net_util.translate_network(settings)
+        LOG.debug("Translated ubuntu style network settings %s into %s",
+                  settings, entries)
+        # Make the intermediate format as the suse format...
+        nameservers = []
+        searchservers = []
+        dev_names = entries.keys()
+        for (dev, info) in entries.items():
+            net_fn = self.network_script_tpl % (dev)
+            route_fn = self.route_conf_tpl % (dev)
+            mode = None
+            if info.get('auto', None):
+                mode = 'auto'
+            else:
+                mode = 'manual'
+            bootproto = info.get('bootproto', None)
+            gateway = info.get('gateway', None)
+            net_cfg = {
+                'BOOTPROTO': bootproto,
+                'BROADCAST': info.get('broadcast'),
+                'GATEWAY': gateway,
+                'IPADDR': info.get('address'),
+                'LLADDR': info.get('hwaddress'),
+                'NETMASK': info.get('netmask'),
+                'STARTMODE': mode,
+                'USERCONTROL': 'no'
+            }
+            if dev != 'lo':
+                net_cfg['ETHTOOL_OPTIONS'] = ''
+            else:
+                net_cfg['FIREWALL'] = 'no'
+            rhutil.update_sysconfig_file(net_fn, net_cfg, True)
+            if gateway and bootproto == 'static':
+                default_route = 'default    %s' % gateway
+                util.write_file(route_fn, default_route, 0o644)
+            if 'dns-nameservers' in info:
+                nameservers.extend(info['dns-nameservers'])
+            if 'dns-search' in info:
+                searchservers.extend(info['dns-search'])
+        if nameservers or searchservers:
+            rhutil.update_resolve_conf_file(self.resolve_conf_fn,
+                                            nameservers, searchservers)
+        return dev_names
+
+# vi: ts=4 expandtab
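
To make the zypper plumbing above concrete, a hedged sketch (distro is
assumed to be an instance of this class; package names are illustrative):

    distro.package_command('install', args='--auto-agree-with-licenses',
                           pkgs=['vim'])
    # runs: zypper --non-interactive install --auto-agree-with-licenses vim
    distro.package_command('upgrade')
    # 'upgrade' is first mapped to zypper's 'update' subcommand
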
diff --git a/cloudinit/distros/sles.py b/cloudinit/distros/sles.py
index dbec2ed..6e336cb 100644
--- a/cloudinit/distros/sles.py
+++ b/cloudinit/distros/sles.py
@@ -1,167 +1,17 @@
-# Copyright (C) 2013 Hewlett-Packard Development Company, L.P.
+#    Copyright (C) 2017 SUSE LLC
 #
-# Author: Juerg Haefliger <juerg.haefliger@xxxxxx>
+#    Author: Robert Schweikert <rjschwei@xxxxxxxx>
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit import distros
+from cloudinit.distros import opensuse
 
-from cloudinit.distros.parsers.hostname import HostnameConf
-
-from cloudinit import helpers
 from cloudinit import log as logging
-from cloudinit import util
-
-from cloudinit.distros import net_util
-from cloudinit.distros import rhel_util
-from cloudinit.settings import PER_INSTANCE
 
 LOG = logging.getLogger(__name__)
 
 
-class Distro(distros.Distro):
-    clock_conf_fn = '/etc/sysconfig/clock'
-    locale_conf_fn = '/etc/sysconfig/language'
-    network_conf_fn = '/etc/sysconfig/network'
-    hostname_conf_fn = '/etc/HOSTNAME'
-    network_script_tpl = '/etc/sysconfig/network/ifcfg-%s'
-    resolve_conf_fn = '/etc/resolv.conf'
-    tz_local_fn = '/etc/localtime'
-
-    def __init__(self, name, cfg, paths):
-        distros.Distro.__init__(self, name, cfg, paths)
-        # This will be used to restrict certain
-        # calls from repeatly happening (when they
-        # should only happen say once per instance...)
-        self._runner = helpers.Runners(paths)
-        self.osfamily = 'suse'
-
-    def install_packages(self, pkglist):
-        self.package_command('install', args='-l', pkgs=pkglist)
-
-    def _write_network(self, settings):
-        # Convert debian settings to ifcfg format
-        entries = net_util.translate_network(settings)
-        LOG.debug("Translated ubuntu style network settings %s into %s",
-                  settings, entries)
-        # Make the intermediate format as the suse format...
-        nameservers = []
-        searchservers = []
-        dev_names = entries.keys()
-        for (dev, info) in entries.items():
-            net_fn = self.network_script_tpl % (dev)
-            mode = info.get('auto')
-            if mode and mode.lower() == 'true':
-                mode = 'auto'
-            else:
-                mode = 'manual'
-            net_cfg = {
-                'BOOTPROTO': info.get('bootproto'),
-                'BROADCAST': info.get('broadcast'),
-                'GATEWAY': info.get('gateway'),
-                'IPADDR': info.get('address'),
-                'LLADDR': info.get('hwaddress'),
-                'NETMASK': info.get('netmask'),
-                'STARTMODE': mode,
-                'USERCONTROL': 'no'
-            }
-            if dev != 'lo':
-                net_cfg['ETHERDEVICE'] = dev
-                net_cfg['ETHTOOL_OPTIONS'] = ''
-            else:
-                net_cfg['FIREWALL'] = 'no'
-            rhel_util.update_sysconfig_file(net_fn, net_cfg, True)
-            if 'dns-nameservers' in info:
-                nameservers.extend(info['dns-nameservers'])
-            if 'dns-search' in info:
-                searchservers.extend(info['dns-search'])
-        if nameservers or searchservers:
-            rhel_util.update_resolve_conf_file(self.resolve_conf_fn,
-                                               nameservers, searchservers)
-        return dev_names
-
-    def apply_locale(self, locale, out_fn=None):
-        if not out_fn:
-            out_fn = self.locale_conf_fn
-        locale_cfg = {
-            'RC_LANG': locale,
-        }
-        rhel_util.update_sysconfig_file(out_fn, locale_cfg)
-
-    def _write_hostname(self, hostname, out_fn):
-        conf = None
-        try:
-            # Try to update the previous one
-            # so lets see if we can read it first.
-            conf = self._read_hostname_conf(out_fn)
-        except IOError:
-            pass
-        if not conf:
-            conf = HostnameConf('')
-        conf.set_hostname(hostname)
-        util.write_file(out_fn, str(conf), 0o644)
-
-    def _read_system_hostname(self):
-        host_fn = self.hostname_conf_fn
-        return (host_fn, self._read_hostname(host_fn))
-
-    def _read_hostname_conf(self, filename):
-        conf = HostnameConf(util.load_file(filename))
-        conf.parse()
-        return conf
-
-    def _read_hostname(self, filename, default=None):
-        hostname = None
-        try:
-            conf = self._read_hostname_conf(filename)
-            hostname = conf.hostname
-        except IOError:
-            pass
-        if not hostname:
-            return default
-        return hostname
-
-    def _bring_up_interfaces(self, device_names):
-        if device_names and 'all' in device_names:
-            raise RuntimeError(('Distro %s can not translate '
-                                'the device name "all"') % (self.name))
-        return distros.Distro._bring_up_interfaces(self, device_names)
-
-    def set_timezone(self, tz):
-        tz_file = self._find_tz_file(tz)
-        # Adjust the sysconfig clock zone setting
-        clock_cfg = {
-            'TIMEZONE': str(tz),
-        }
-        rhel_util.update_sysconfig_file(self.clock_conf_fn, clock_cfg)
-        # This ensures that the correct tz will be used for the system
-        util.copy(tz_file, self.tz_local_fn)
-
-    def package_command(self, command, args=None, pkgs=None):
-        if pkgs is None:
-            pkgs = []
-
-        cmd = ['zypper']
-        # No user interaction possible, enable non-interactive mode
-        cmd.append('--non-interactive')
-
-        # Comand is the operation, such as install
-        cmd.append(command)
-
-        # args are the arguments to the command, not global options
-        if args and isinstance(args, str):
-            cmd.append(args)
-        elif args and isinstance(args, list):
-            cmd.extend(args)
-
-        pkglist = util.expand_package_list('%s-%s', pkgs)
-        cmd.extend(pkglist)
-
-        # Allow the output of this to flow outwards (ie not be captured)
-        util.subp(cmd, capture=False)
-
-    def update_package_sources(self):
-        self._runner.run("update-sources", self.package_command,
-                         ['refresh'], freq=PER_INSTANCE)
+class Distro(opensuse.Distro):
+    pass
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/helpers.py b/cloudinit/helpers.py
index f01021a..1979cd9 100644
--- a/cloudinit/helpers.py
+++ b/cloudinit/helpers.py
@@ -13,7 +13,7 @@ from time import time
 import contextlib
 import os
 
-import six
+from six import StringIO
 from six.moves.configparser import (
     NoSectionError, NoOptionError, RawConfigParser)
 
@@ -441,12 +441,12 @@ class DefaultingConfigParser(RawConfigParser):
 
     def stringify(self, header=None):
         contents = ''
-        with six.StringIO() as outputstream:
-            self.write(outputstream)
-            outputstream.flush()
-            contents = outputstream.getvalue()
-            if header:
-                contents = "\n".join([header, contents])
+        outputstream = StringIO()
+        self.write(outputstream)
+        outputstream.flush()
+        contents = outputstream.getvalue()
+        if header:
+            contents = '\n'.join([header, contents, ''])
         return contents
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/log.py b/cloudinit/log.py
index 3861709..1d75c9f 100644
--- a/cloudinit/log.py
+++ b/cloudinit/log.py
@@ -19,6 +19,8 @@ import sys
 import six
 from six import StringIO
 
+import time
+
 # Logging levels for easy access
 CRITICAL = logging.CRITICAL
 FATAL = logging.FATAL
@@ -32,6 +34,9 @@ NOTSET = logging.NOTSET
 # Default basic format
 DEF_CON_FORMAT = '%(asctime)s - %(filename)s[%(levelname)s]: %(message)s'
 
+# Always format logging timestamps as UTC time
+logging.Formatter.converter = time.gmtime
+
 
 def setupBasicLogging(level=DEBUG):
     root = logging.getLogger()
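
A minimal standalone demonstration of the UTC change above, using only the
stdlib:

    import logging
    import time

    logging.Formatter.converter = time.gmtime  # render asctime in UTC
    logging.basicConfig(
        format='%(asctime)s - %(filename)s[%(levelname)s]: %(message)s')
    logging.warning('timestamps are now UTC')
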
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 46cb9c8..a1b0db1 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -175,13 +175,8 @@ def is_disabled_cfg(cfg):
     return cfg.get('config') == "disabled"
 
 
-def generate_fallback_config(blacklist_drivers=None, config_driver=None):
-    """Determine which attached net dev is most likely to have a connection and
-       generate network state to run dhcp on that interface"""
-
-    if not config_driver:
-        config_driver = False
-
+def find_fallback_nic(blacklist_drivers=None):
+    """Return the name of the 'fallback' network device."""
     if not blacklist_drivers:
         blacklist_drivers = []
 
@@ -233,15 +228,24 @@ def generate_fallback_config(blacklist_drivers=None, config_driver=None):
     if DEFAULT_PRIMARY_INTERFACE in names:
         names.remove(DEFAULT_PRIMARY_INTERFACE)
         names.insert(0, DEFAULT_PRIMARY_INTERFACE)
-    target_name = None
-    target_mac = None
+
+    # pick the first that has a mac-address
     for name in names:
-        mac = read_sys_net_safe(name, 'address')
-        if mac:
-            target_name = name
-            target_mac = mac
-            break
-    if target_mac and target_name:
+        if read_sys_net_safe(name, 'address'):
+            return name
+    return None
+
+
+def generate_fallback_config(blacklist_drivers=None, config_driver=None):
+    """Determine which attached net dev is most likely to have a connection and
+       generate network state to run dhcp on that interface"""
+
+    if not config_driver:
+        config_driver = False
+
+    target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers)
+    if target_name:
+        target_mac = read_sys_net_safe(target_name, 'address')
         nconf = {'config': [], 'version': 1}
         cfg = {'type': 'physical', 'name': target_name,
                'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]}
@@ -511,21 +515,7 @@ def get_interfaces_by_mac():
 
     Bridges and any devices that have a 'stolen' mac are excluded."""
     ret = {}
-    devs = get_devicelist()
-    empty_mac = '00:00:00:00:00:00'
-    for name in devs:
-        if not interface_has_own_mac(name):
-            continue
-        if is_bridge(name):
-            continue
-        if is_vlan(name):
-            continue
-        mac = get_interface_mac(name)
-        # some devices may not have a mac (tun0)
-        if not mac:
-            continue
-        if mac == empty_mac and name != 'lo':
-            continue
+    for name, mac, _driver, _devid in get_interfaces():
         if mac in ret:
             raise RuntimeError(
                 "duplicate mac found! both '%s' and '%s' have mac '%s'" %
@@ -599,6 +589,7 @@ class EphemeralIPv4Network(object):
             self._bringup_router()
 
     def __exit__(self, excp_type, excp_value, excp_traceback):
+        """Teardown anything we set up."""
         for cmd in self.cleanup_cmds:
             util.subp(cmd, capture=True)
 
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
new file mode 100644
index 0000000..0cba703
--- /dev/null
+++ b/cloudinit/net/dhcp.py
@@ -0,0 +1,163 @@
+# Copyright (C) 2017 Canonical Ltd.
+#
+# Author: Chad Smith <chad.smith@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import configobj
+import logging
+import os
+import re
+
+from cloudinit.net import find_fallback_nic, get_devicelist
+from cloudinit import temp_utils
+from cloudinit import util
+from six import StringIO
+
+LOG = logging.getLogger(__name__)
+
+NETWORKD_LEASES_DIR = '/run/systemd/netif/leases'
+
+
+class InvalidDHCPLeaseFileError(Exception):
+    """Raised when parsing an empty or invalid dhcp.leases file.
+
+    Current uses are DataSourceAzure and DataSourceEc2 during ephemeral
+    boot to scrape metadata.
+    """
+    pass
+
+
+def maybe_perform_dhcp_discovery(nic=None):
+    """Perform dhcp discovery if nic valid and dhclient command exists.
+
+    If the nic is invalid or undiscoverable or dhclient command is not found,
+    skip dhcp_discovery and return an empty dict.
+
+    @param nic: Name of the network interface we want to run dhclient on.
+    @return: A dict of dhcp options from the dhclient discovery if run,
+        otherwise an empty dict is returned.
+    """
+    if nic is None:
+        nic = find_fallback_nic()
+        if nic is None:
+            LOG.debug(
+                'Skip dhcp_discovery: Unable to find fallback nic.')
+            return {}
+    elif nic not in get_devicelist():
+        LOG.debug(
+            'Skip dhcp_discovery: nic %s not found in get_devicelist.', nic)
+        return {}
+    dhclient_path = util.which('dhclient')
+    if not dhclient_path:
+        LOG.debug('Skip dhclient configuration: No dhclient command found.')
+        return {}
+    with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir:
+        # Use /var/tmp because /run/cloud-init/tmp is mounted noexec
+        return dhcp_discovery(dhclient_path, nic, tdir)
+
+
+def parse_dhcp_lease_file(lease_file):
+    """Parse the given dhcp lease file for the most recent lease.
+
+    Return a dict of dhcp options as key value pairs for the most recent lease
+    block.
+
+    @raises: InvalidDHCPLeaseFileError on empty or unparseable lease file
+        content.
+    """
+    lease_regex = re.compile(r"lease {(?P<lease>[^}]*)}\n")
+    dhcp_leases = []
+    lease_content = util.load_file(lease_file)
+    if len(lease_content) == 0:
+        raise InvalidDHCPLeaseFileError(
+            'Cannot parse empty dhcp lease file {0}'.format(lease_file))
+    for lease in lease_regex.findall(lease_content):
+        lease_options = []
+        for line in lease.split(';'):
+            # Strip newlines, double-quotes and option prefix
+            line = line.strip().replace('"', '').replace('option ', '')
+            if not line:
+                continue
+            lease_options.append(line.split(' ', 1))
+        dhcp_leases.append(dict(lease_options))
+    if not dhcp_leases:
+        raise InvalidDHCPLeaseFileError(
+            'Cannot parse dhcp lease file {0}. No leases found'.format(
+                lease_file))
+    return dhcp_leases
+
+
+def dhcp_discovery(dhclient_cmd_path, interface, cleandir):
+    """Run dhclient on the interface without scripts or filesystem artifacts.
+
+    @param dhclient_cmd_path: Full path to the dhclient used.
+    @param interface: Name of the network interface on which to run dhclient.
+    @param cleandir: The directory from which to run dhclient as well as store
+        dhcp leases.
+
+    @return: A dict of dhcp options parsed from the dhcp.leases file or empty
+        dict.
+    """
+    LOG.debug('Performing a dhcp discovery on %s', interface)
+
+    # XXX We copy dhclient out of /sbin/dhclient to avoid dealing with strict
+    # AppArmor profiles which disallow running dhclient -sf <our-script-file>.
+    # We want to avoid running /sbin/dhclient-script because of side-effects
+    # in /etc/resolv.conf and any other vendor-specific scripts in
+    # /etc/dhcp/dhclient*hooks.d.
+    sandbox_dhclient_cmd = os.path.join(cleandir, 'dhclient')
+    util.copy(dhclient_cmd_path, sandbox_dhclient_cmd)
+    pid_file = os.path.join(cleandir, 'dhclient.pid')
+    lease_file = os.path.join(cleandir, 'dhcp.leases')
+
+    # ISC dhclient needs the interface up to send initial discovery packets.
+    # Generally dhclient relies on dhclient-script PREINIT action to bring the
+    # link up before attempting discovery. Since we are using -sf /bin/true,
+    # we need to do that "link up" ourselves first.
+    util.subp(['ip', 'link', 'set', 'dev', interface, 'up'], capture=True)
+    cmd = [sandbox_dhclient_cmd, '-1', '-v', '-lf', lease_file,
+           '-pf', pid_file, interface, '-sf', '/bin/true']
+    util.subp(cmd, capture=True)
+    return parse_dhcp_lease_file(lease_file)
+
+
+def networkd_parse_lease(content):
+    """Parse systemd lease file content as found in /run/systemd/netif/leases/
+
+    Parse this (almost) ini style file even though it says:
+      # This is private data. Do not parse.
+
+    Simply return a dictionary of key/values."""
+
+    return dict(configobj.ConfigObj(StringIO(content), list_values=False))
+
+
+def networkd_load_leases(leases_d=None):
+    """Return a dictionary of dictionaries representing each lease
+    found in leases_d.
+
+    The top level key will be the filename, which is typically the ifindex."""
+
+    if leases_d is None:
+        leases_d = NETWORKD_LEASES_DIR
+
+    ret = {}
+    if not os.path.isdir(leases_d):
+        return ret
+    for lfile in os.listdir(leases_d):
+        ret[lfile] = networkd_parse_lease(
+            util.load_file(os.path.join(leases_d, lfile)))
+    return ret
+
+
+def networkd_get_option_from_leases(keyname, leases_d=None):
+    if leases_d is None:
+        leases_d = NETWORKD_LEASES_DIR
+    leases = networkd_load_leases(leases_d=leases_d)
+    for ifindex, data in sorted(leases.items()):
+        if data.get(keyname):
+            return data[keyname]
+    return None
+
+# vi: ts=4 expandtab
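
A self-contained sketch of the lease parsing above; the lease content and
file path are made up:

    from cloudinit.net.dhcp import parse_dhcp_lease_file
    from cloudinit import util

    util.write_file('/tmp/dhcp.leases', '''\
    lease {
      interface "eth0";
      fixed-address 192.168.2.74;
      option subnet-mask 255.255.255.0;
      option routers 192.168.2.1;
    }
    ''')
    print(parse_dhcp_lease_file('/tmp/dhcp.leases'))
    # [{'interface': 'eth0', 'fixed-address': '192.168.2.74',
    #   'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}]
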
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index bb80ec0..c6a71d1 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -95,6 +95,9 @@ def _iface_add_attrs(iface, index):
         ignore_map.append('mac_address')
 
     for key, value in iface.items():
+        # convert bool to string for eni
+        if type(value) == bool:
+            value = 'on' if iface[key] else 'off'
         if not value or key in ignore_map:
             continue
         if key in multiline_keys:
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index 9f35b72..d3788af 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -4,7 +4,7 @@ import copy
 import os
 
 from . import renderer
-from .network_state import subnet_is_ipv6
+from .network_state import subnet_is_ipv6, NET_CONFIG_TO_V2
 
 from cloudinit import log as logging
 from cloudinit import util
@@ -27,31 +27,6 @@ network:
 """
 
 LOG = logging.getLogger(__name__)
-NET_CONFIG_TO_V2 = {
-    'bond': {'bond-ad-select': 'ad-select',
-             'bond-arp-interval': 'arp-interval',
-             'bond-arp-ip-target': 'arp-ip-target',
-             'bond-arp-validate': 'arp-validate',
-             'bond-downdelay': 'down-delay',
-             'bond-fail-over-mac': 'fail-over-mac-policy',
-             'bond-lacp-rate': 'lacp-rate',
-             'bond-miimon': 'mii-monitor-interval',
-             'bond-min-links': 'min-links',
-             'bond-mode': 'mode',
-             'bond-num-grat-arp': 'gratuitious-arp',
-             'bond-primary-reselect': 'primary-reselect-policy',
-             'bond-updelay': 'up-delay',
-             'bond-xmit-hash-policy': 'transmit-hash-policy'},
-    'bridge': {'bridge_ageing': 'ageing-time',
-               'bridge_bridgeprio': 'priority',
-               'bridge_fd': 'forward-delay',
-               'bridge_gcint': None,
-               'bridge_hello': 'hello-time',
-               'bridge_maxage': 'max-age',
-               'bridge_maxwait': None,
-               'bridge_pathcost': 'path-cost',
-               'bridge_portprio': None,
-               'bridge_waitport': None}}
 
 
 def _get_params_dict_by_match(config, match):
@@ -247,6 +222,14 @@ class Renderer(renderer.Renderer):
             util.subp(cmd, capture=True)
 
     def _render_content(self, network_state):
+
+        # if content already in netplan format, pass it back
+        if network_state.version == 2:
+            LOG.debug('V2 to V2 passthrough')
+            return util.yaml_dumps({'network': network_state.config},
+                                   explicit_start=False,
+                                   explicit_end=False)
+
         ethernets = {}
         wifis = {}
         bridges = {}
@@ -261,9 +244,9 @@ class Renderer(renderer.Renderer):
 
         for config in network_state.iter_interfaces():
             ifname = config.get('name')
-            # filter None entries up front so we can do simple if key in dict
+            # filter None (but not False) entries up front
             ifcfg = dict((key, value) for (key, value) in config.items()
-                         if value)
+                         if value is not None)
 
             if_type = ifcfg.get('type')
             if if_type == 'physical':
@@ -335,6 +318,7 @@ class Renderer(renderer.Renderer):
                             (port, cost) = costval.split()
                             newvalue[port] = int(cost)
                         br_config.update({newname: newvalue})
+
                 if len(br_config) > 0:
                     bridge.update({'parameters': br_config})
                 _extract_addresses(ifcfg, bridge)
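
Worth noting on the V2 passthrough above: netplan-format input is re-emitted
rather than converted to v1 and back. A hedged sketch (config illustrative;
exact YAML layout comes from util.yaml_dumps):

    from cloudinit.net.netplan import Renderer
    from cloudinit.net.network_state import parse_net_config_data

    v2_cfg = {'version': 2, 'ethernets': {'eth0': {'dhcp4': True}}}
    state = parse_net_config_data(v2_cfg)
    print(Renderer()._render_content(state))
    # network:
    #     version: 2
    #     ethernets:
    #         eth0:
    #             dhcp4: true
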
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index 87a7222..0e830ee 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -23,6 +23,34 @@ NETWORK_V2_KEY_FILTER = [
     'match', 'mtu', 'nameservers', 'renderer', 'set-name', 'wakeonlan'
 ]
 
+NET_CONFIG_TO_V2 = {
+    'bond': {'bond-ad-select': 'ad-select',
+             'bond-arp-interval': 'arp-interval',
+             'bond-arp-ip-target': 'arp-ip-target',
+             'bond-arp-validate': 'arp-validate',
+             'bond-downdelay': 'down-delay',
+             'bond-fail-over-mac': 'fail-over-mac-policy',
+             'bond-lacp-rate': 'lacp-rate',
+             'bond-miimon': 'mii-monitor-interval',
+             'bond-min-links': 'min-links',
+             'bond-mode': 'mode',
+             'bond-num-grat-arp': 'gratuitious-arp',
+             'bond-primary': 'primary',
+             'bond-primary-reselect': 'primary-reselect-policy',
+             'bond-updelay': 'up-delay',
+             'bond-xmit-hash-policy': 'transmit-hash-policy'},
+    'bridge': {'bridge_ageing': 'ageing-time',
+               'bridge_bridgeprio': 'priority',
+               'bridge_fd': 'forward-delay',
+               'bridge_gcint': None,
+               'bridge_hello': 'hello-time',
+               'bridge_maxage': 'max-age',
+               'bridge_maxwait': None,
+               'bridge_pathcost': 'path-cost',
+               'bridge_portprio': None,
+               'bridge_stp': 'stp',
+               'bridge_waitport': None}}
+
 
 def parse_net_config_data(net_config, skip_broken=True):
     """Parses the config, returns NetworkState object
@@ -120,6 +148,10 @@ class NetworkState(object):
         self.use_ipv6 = network_state.get('use_ipv6', False)
 
     @property
+    def config(self):
+        return self._network_state['config']
+
+    @property
     def version(self):
         return self._version
 
@@ -166,12 +198,14 @@ class NetworkStateInterpreter(object):
             'search': [],
         },
         'use_ipv6': False,
+        'config': None,
     }
 
     def __init__(self, version=NETWORK_STATE_VERSION, config=None):
         self._version = version
         self._config = config
         self._network_state = copy.deepcopy(self.initial_network_state)
+        self._network_state['config'] = config
         self._parsed = False
 
     @property
@@ -432,6 +466,18 @@ class NetworkStateInterpreter(object):
         for param, val in command.get('params', {}).items():
             iface.update({param: val})
 
+        # convert value to boolean
+        bridge_stp = iface.get('bridge_stp')
+        if bridge_stp is not None and type(bridge_stp) != bool:
+            if bridge_stp in ['on', '1', 1]:
+                bridge_stp = True
+            elif bridge_stp in ['off', '0', 0]:
+                bridge_stp = False
+            else:
+                raise ValueError("Cannot convert bridge_stp value "
+                                 "(%s) to boolean" % bridge_stp)
+            iface.update({'bridge_stp': bridge_stp})
+
         interfaces.update({iface['name']: iface})
 
     @ensure_command_keys(['address'])
@@ -460,12 +506,15 @@ class NetworkStateInterpreter(object):
         v2_command = {
           bond0: {
             'interfaces': ['interface0', 'interface1'],
-            'miimon': 100,
-            'mode': '802.3ad',
-            'xmit_hash_policy': 'layer3+4'},
+            'parameters': {
+               'mii-monitor-interval': 100,
+               'mode': '802.3ad',
+               'xmit_hash_policy': 'layer3+4'}},
           bond1: {
             'bond-slaves': ['interface2', 'interface7'],
-            'mode': 1
+            'parameters': {
+                'mode': 1,
+            }
           }
         }
 
@@ -489,8 +538,8 @@ class NetworkStateInterpreter(object):
         v2_command = {
           br0: {
             'interfaces': ['interface0', 'interface1'],
-            'fd': 0,
-            'stp': 'off',
+            'forward-delay': 0,
+            'stp': False,
             'maxwait': 0,
           }
         }
@@ -554,6 +603,7 @@ class NetworkStateInterpreter(object):
             if not mac_address:
                 LOG.debug('NetworkState Version2: missing "macaddress" info '
                           'in config entry: %s: %s', eth, str(cfg))
+            phy_cmd.update({'mac_address': mac_address})
 
             for key in ['mtu', 'match', 'wakeonlan']:
                 if key in cfg:
@@ -598,8 +648,8 @@ class NetworkStateInterpreter(object):
             self.handle_vlan(vlan_cmd)
 
     def handle_wifis(self, command):
-        raise NotImplementedError("NetworkState V2: "
-                                  "Skipping wifi configuration")
+        LOG.warning('Wifi configuration is only available to distros with '
+                    'netplan rendering support.')
 
     def _v2_common(self, cfg):
         LOG.debug('v2_common: handling config:\n%s', cfg)
@@ -616,6 +666,11 @@ class NetworkStateInterpreter(object):
 
     def _handle_bond_bridge(self, command, cmd_type=None):
         """Common handler for bond and bridge types"""
+
+        # inverse mapping for v2 keynames to v1 keynames
+        v2key_to_v1 = dict((v, k) for k, v in
+                           NET_CONFIG_TO_V2.get(cmd_type).items())
+
         for item_name, item_cfg in command.items():
             item_params = dict((key, value) for (key, value) in
                                item_cfg.items() if key not in
@@ -624,14 +679,20 @@ class NetworkStateInterpreter(object):
                 'type': cmd_type,
                 'name': item_name,
                 cmd_type + '_interfaces': item_cfg.get('interfaces'),
-                'params': item_params,
+                'params': dict((v2key_to_v1[k], v) for k, v in
+                               item_params.get('parameters', {}).items())
             }
             subnets = self._v2_to_v1_ipcfg(item_cfg)
             if len(subnets) > 0:
                 v1_cmd.update({'subnets': subnets})
 
-            LOG.debug('v2(%ss) -> v1(%s):\n%s', cmd_type, cmd_type, v1_cmd)
-            self.handle_bridge(v1_cmd)
+            LOG.debug('v2(%s) -> v1(%s):\n%s', cmd_type, cmd_type, v1_cmd)
+            if cmd_type == "bridge":
+                self.handle_bridge(v1_cmd)
+            elif cmd_type == "bond":
+                self.handle_bond(v1_cmd)
+            else:
+                raise ValueError('Unknown command type: %s' % cmd_type)
 
     def _v2_to_v1_ipcfg(self, cfg):
         """Common ipconfig extraction from v2 to v1 subnets array."""
@@ -651,12 +712,6 @@ class NetworkStateInterpreter(object):
                 'address': address,
             }
 
-            routes = []
-            for route in cfg.get('routes', []):
-                routes.append(_normalize_route(
-                    {'address': route.get('to'), 'gateway': route.get('via')}))
-            subnet['routes'] = routes
-
             if ":" in address:
                 if 'gateway6' in cfg and gateway6 is None:
                     gateway6 = cfg.get('gateway6')
@@ -667,6 +722,17 @@ class NetworkStateInterpreter(object):
                     subnet.update({'gateway': gateway4})
 
             subnets.append(subnet)
+
+        routes = []
+        for route in cfg.get('routes', []):
+            routes.append(_normalize_route(
+                {'destination': route.get('to'), 'gateway': route.get('via')}))
+
+        # v2 routes are bound to the interface, in v1 we add them under
+        # the first subnet since there isn't an equivalent interface level.
+        if len(subnets) and len(routes):
+            subnets[0]['routes'] = routes
+
         return subnets
 
 
@@ -721,7 +787,7 @@ def _normalize_net_keys(network, address_keys=()):
     elif netmask:
         prefix = mask_to_net_prefix(netmask)
     elif 'prefix' in net:
-        prefix = int(prefix)
+        prefix = int(net['prefix'])
     else:
         prefix = 64 if ipv6 else 24
 
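
For illustration, a minimal standalone sketch of the v2 -> v1 parameter
renaming that _handle_bond_bridge performs above (the inlined dict stands
in for NET_CONFIG_TO_V2['bond']; the sample values are hypothetical):

    BOND_V1_TO_V2 = {
        'bond-mode': 'mode',
        'bond-miimon': 'mii-monitor-interval',
        'bond-xmit-hash-policy': 'transmit-hash-policy',
    }
    # Invert the table: v2 keyname -> v1 keyname.
    v2key_to_v1 = dict((v, k) for k, v in BOND_V1_TO_V2.items())

    # A v2 'parameters' block as netplan would express it.
    v2_params = {'mode': '802.3ad', 'mii-monitor-interval': 100,
                 'transmit-hash-policy': 'layer3+4'}

    # Rename back to the v1 'params' names consumed by handle_bond().
    v1_params = dict((v2key_to_v1[k], v) for k, v in v2_params.items())
    assert v1_params == {'bond-mode': '802.3ad', 'bond-miimon': 100,
                         'bond-xmit-hash-policy': 'layer3+4'}
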
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index a550f97..f572796 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -484,7 +484,11 @@ class Renderer(renderer.Renderer):
             content.add_nameserver(nameserver)
         for searchdomain in network_state.dns_searchdomains:
             content.add_search_domain(searchdomain)
-        return "\n".join([_make_header(';'), str(content)])
+        header = _make_header(';')
+        content_str = str(content)
+        if not content_str.startswith(header):
+            content_str = header + '\n' + content_str
+        return content_str
 
     @staticmethod
     def _render_networkmanager_conf(network_state):
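
The resolv.conf change above makes header insertion idempotent. A short
sketch of the guard, with a hypothetical header string and helper name:

    def prepend_header_once(content_str, header):
        # Only prepend when the rendered content does not already
        # begin with the header, so re-rendering cannot duplicate it.
        if not content_str.startswith(header):
            content_str = header + '\n' + content_str
        return content_str

    once = prepend_header_once('nameserver 10.0.0.2', '; Created by cloud-init')
    assert prepend_header_once(once, '; Created by cloud-init') == once
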
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
new file mode 100644
index 0000000..1c1f504
--- /dev/null
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -0,0 +1,260 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import mock
+import os
+from textwrap import dedent
+
+from cloudinit.net.dhcp import (
+    InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
+    parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
+from cloudinit.util import ensure_file, write_file
+from cloudinit.tests.helpers import CiTestCase, wrap_and_call, populate_dir
+
+
+class TestParseDHCPLeasesFile(CiTestCase):
+
+    def test_parse_empty_lease_file_errors(self):
+        """parse_dhcp_lease_file errors when file content is empty."""
+        empty_file = self.tmp_path('leases')
+        ensure_file(empty_file)
+        with self.assertRaises(InvalidDHCPLeaseFileError) as context_manager:
+            parse_dhcp_lease_file(empty_file)
+        error = context_manager.exception
+        self.assertIn('Cannot parse empty dhcp lease file', str(error))
+
+    def test_parse_malformed_lease_file_content_errors(self):
+        """parse_dhcp_lease_file errors when file content isn't dhcp leases."""
+        non_lease_file = self.tmp_path('leases')
+        write_file(non_lease_file, 'hi mom.')
+        with self.assertRaises(InvalidDHCPLeaseFileError) as context_manager:
+            parse_dhcp_lease_file(non_lease_file)
+        error = context_manager.exception
+        self.assertIn('Cannot parse dhcp lease file', str(error))
+
+    def test_parse_multiple_leases(self):
+        """parse_dhcp_lease_file returns a list of all leases within."""
+        lease_file = self.tmp_path('leases')
+        content = dedent("""
+            lease {
+              interface "wlp3s0";
+              fixed-address 192.168.2.74;
+              option subnet-mask 255.255.255.0;
+              option routers 192.168.2.1;
+              renew 4 2017/07/27 18:02:30;
+              expire 5 2017/07/28 07:08:15;
+            }
+            lease {
+              interface "wlp3s0";
+              fixed-address 192.168.2.74;
+              option subnet-mask 255.255.255.0;
+              option routers 192.168.2.1;
+            }
+        """)
+        expected = [
+            {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
+             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
+             'renew': '4 2017/07/27 18:02:30',
+             'expire': '5 2017/07/28 07:08:15'},
+            {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
+             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}]
+        write_file(lease_file, content)
+        self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
+
+
+class TestDHCPDiscoveryClean(CiTestCase):
+    with_logs = True
+
+    @mock.patch('cloudinit.net.dhcp.find_fallback_nic')
+    def test_no_fallback_nic_found(self, m_fallback_nic):
+        """Log and do nothing when nic is absent and no fallback is found."""
+        m_fallback_nic.return_value = None  # No fallback nic found
+        self.assertEqual({}, maybe_perform_dhcp_discovery())
+        self.assertIn(
+            'Skip dhcp_discovery: Unable to find fallback nic.',
+            self.logs.getvalue())
+
+    def test_provided_nic_does_not_exist(self):
+        """When the provided nic doesn't exist, log a message and no-op."""
+        self.assertEqual({}, maybe_perform_dhcp_discovery('idontexist'))
+        self.assertIn(
+            'Skip dhcp_discovery: nic idontexist not found in get_devicelist.',
+            self.logs.getvalue())
+
+    @mock.patch('cloudinit.net.dhcp.util.which')
+    @mock.patch('cloudinit.net.dhcp.find_fallback_nic')
+    def test_absent_dhclient_command(self, m_fallback, m_which):
+        """When dhclient doesn't exist in the OS, log the issue and no-op."""
+        m_fallback.return_value = 'eth9'
+        m_which.return_value = None  # dhclient isn't found
+        self.assertEqual({}, maybe_perform_dhcp_discovery())
+        self.assertIn(
+            'Skip dhclient configuration: No dhclient command found.',
+            self.logs.getvalue())
+
+    @mock.patch('cloudinit.temp_utils.os.getuid')
+    @mock.patch('cloudinit.net.dhcp.dhcp_discovery')
+    @mock.patch('cloudinit.net.dhcp.util.which')
+    @mock.patch('cloudinit.net.dhcp.find_fallback_nic')
+    def test_dhclient_run_with_tmpdir(self, m_fback, m_which, m_dhcp, m_uid):
+        """maybe_perform_dhcp_discovery passes tmpdir to dhcp_discovery."""
+        m_uid.return_value = 0  # Fake root user for tmpdir
+        m_fback.return_value = 'eth9'
+        m_which.return_value = '/sbin/dhclient'
+        m_dhcp.return_value = {'address': '192.168.2.2'}
+        retval = wrap_and_call(
+            'cloudinit.temp_utils',
+            {'_TMPDIR': {'new': None},
+             'os.getuid': 0},
+            maybe_perform_dhcp_discovery)
+        self.assertEqual({'address': '192.168.2.2'}, retval)
+        self.assertEqual(
+            1, m_dhcp.call_count, 'dhcp_discovery not called once')
+        call = m_dhcp.call_args_list[0]
+        self.assertEqual('/sbin/dhclient', call[0][0])
+        self.assertEqual('eth9', call[0][1])
+        self.assertIn('/var/tmp/cloud-init/cloud-init-dhcp-', call[0][2])
+
+    @mock.patch('cloudinit.net.dhcp.util.subp')
+    def test_dhcp_discovery_run_in_sandbox(self, m_subp):
+        """dhcp_discovery brings up the interface and runs dhclient.
+
+        It also returns the parsed dhcp.leases file generated in the sandbox.
+        """
+        tmpdir = self.tmp_dir()
+        dhclient_script = os.path.join(tmpdir, 'dhclient.orig')
+        script_content = '#!/bin/bash\necho fake-dhclient'
+        write_file(dhclient_script, script_content, mode=0o755)
+        lease_content = dedent("""
+            lease {
+              interface "eth9";
+              fixed-address 192.168.2.74;
+              option subnet-mask 255.255.255.0;
+              option routers 192.168.2.1;
+            }
+        """)
+        lease_file = os.path.join(tmpdir, 'dhcp.leases')
+        write_file(lease_file, lease_content)
+        self.assertItemsEqual(
+            [{'interface': 'eth9', 'fixed-address': '192.168.2.74',
+              'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}],
+            dhcp_discovery(dhclient_script, 'eth9', tmpdir))
+        # dhclient script got copied
+        with open(os.path.join(tmpdir, 'dhclient')) as stream:
+            self.assertEqual(script_content, stream.read())
+        # Interface was brought up before dhclient called from sandbox
+        m_subp.assert_has_calls([
+            mock.call(
+                ['ip', 'link', 'set', 'dev', 'eth9', 'up'], capture=True),
+            mock.call(
+                [os.path.join(tmpdir, 'dhclient'), '-1', '-v', '-lf',
+                 lease_file, '-pf', os.path.join(tmpdir, 'dhclient.pid'),
+                 'eth9', '-sf', '/bin/true'], capture=True)])
+
+
+class TestSystemdParseLeases(CiTestCase):
+
+    lxd_lease = dedent("""\
+    # This is private data. Do not parse.
+    ADDRESS=10.75.205.242
+    NETMASK=255.255.255.0
+    ROUTER=10.75.205.1
+    SERVER_ADDRESS=10.75.205.1
+    NEXT_SERVER=10.75.205.1
+    BROADCAST=10.75.205.255
+    T1=1580
+    T2=2930
+    LIFETIME=3600
+    DNS=10.75.205.1
+    DOMAINNAME=lxd
+    HOSTNAME=a1
+    CLIENTID=ffe617693400020000ab110c65a6a0866931c2
+    """)
+
+    lxd_parsed = {
+        'ADDRESS': '10.75.205.242',
+        'NETMASK': '255.255.255.0',
+        'ROUTER': '10.75.205.1',
+        'SERVER_ADDRESS': '10.75.205.1',
+        'NEXT_SERVER': '10.75.205.1',
+        'BROADCAST': '10.75.205.255',
+        'T1': '1580',
+        'T2': '2930',
+        'LIFETIME': '3600',
+        'DNS': '10.75.205.1',
+        'DOMAINNAME': 'lxd',
+        'HOSTNAME': 'a1',
+        'CLIENTID': 'ffe617693400020000ab110c65a6a0866931c2',
+    }
+
+    azure_lease = dedent("""\
+    # This is private data. Do not parse.
+    ADDRESS=10.132.0.5
+    NETMASK=255.255.255.255
+    ROUTER=10.132.0.1
+    SERVER_ADDRESS=169.254.169.254
+    NEXT_SERVER=10.132.0.1
+    MTU=1460
+    T1=43200
+    T2=75600
+    LIFETIME=86400
+    DNS=169.254.169.254
+    NTP=169.254.169.254
+    DOMAINNAME=c.ubuntu-foundations.internal
+    DOMAIN_SEARCH_LIST=c.ubuntu-foundations.internal google.internal
+    HOSTNAME=tribaal-test-171002-1349.c.ubuntu-foundations.internal
+    ROUTES=10.132.0.1/32,0.0.0.0 0.0.0.0/0,10.132.0.1
+    CLIENTID=ff405663a200020000ab11332859494d7a8b4c
+    OPTION_245=624c3620
+    """)
+
+    azure_parsed = {
+        'ADDRESS': '10.132.0.5',
+        'NETMASK': '255.255.255.255',
+        'ROUTER': '10.132.0.1',
+        'SERVER_ADDRESS': '169.254.169.254',
+        'NEXT_SERVER': '10.132.0.1',
+        'MTU': '1460',
+        'T1': '43200',
+        'T2': '75600',
+        'LIFETIME': '86400',
+        'DNS': '169.254.169.254',
+        'NTP': '169.254.169.254',
+        'DOMAINNAME': 'c.ubuntu-foundations.internal',
+        'DOMAIN_SEARCH_LIST': 'c.ubuntu-foundations.internal google.internal',
+        'HOSTNAME': 'tribaal-test-171002-1349.c.ubuntu-foundations.internal',
+        'ROUTES': '10.132.0.1/32,0.0.0.0 0.0.0.0/0,10.132.0.1',
+        'CLIENTID': 'ff405663a200020000ab11332859494d7a8b4c',
+        'OPTION_245': '624c3620'}
+
+    def setUp(self):
+        super(TestSystemdParseLeases, self).setUp()
+        self.lease_d = self.tmp_dir()
+
+    def test_no_leases_returns_empty_dict(self):
+        """A leases dir with no lease files should return empty dictionary."""
+        self.assertEqual({}, networkd_load_leases(self.lease_d))
+
+    def test_no_leases_dir_returns_empty_dict(self):
+        """A non-existing leases dir should return empty dict."""
+        enodir = os.path.join(self.lease_d, 'does-not-exist')
+        self.assertEqual({}, networkd_load_leases(enodir))
+
+    def test_single_leases_file(self):
+        """A leases dir with one leases file."""
+        populate_dir(self.lease_d, {'2': self.lxd_lease})
+        self.assertEqual(
+            {'2': self.lxd_parsed}, networkd_load_leases(self.lease_d))
+
+    def test_single_azure_leases_file(self):
+        """On Azure, option 245 should be present, verify it specifically."""
+        populate_dir(self.lease_d, {'1': self.azure_lease})
+        self.assertEqual(
+            {'1': self.azure_parsed}, networkd_load_leases(self.lease_d))
+
+    def test_multiple_files(self):
+        """Multiple leases files on azure with one found return that value."""
+        self.maxDiff = None
+        populate_dir(self.lease_d, {'1': self.azure_lease,
+                                    '9': self.lxd_lease})
+        self.assertEqual({'1': self.azure_parsed, '9': self.lxd_parsed},
+                         networkd_load_leases(self.lease_d))
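
A sketch of the calling convention these tests pin down, assuming this
branch is installed (a real run also needs root and a dhclient binary):
discovery returns {} on failure, otherwise a list of lease dicts with
the freshest lease last.

    from cloudinit.net.dhcp import maybe_perform_dhcp_discovery

    leases = maybe_perform_dhcp_discovery('eth9')
    if leases:
        lease = leases[-1]
        print(lease.get('fixed-address'), lease.get('routers'))
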
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 272a6eb..8cb4114 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -7,7 +7,7 @@ import os
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
-from tests.unittests.helpers import CiTestCase
+from cloudinit.tests.helpers import CiTestCase
 
 
 class TestSysDevPath(CiTestCase):
@@ -414,7 +414,7 @@ class TestEphemeralIPV4Network(CiTestCase):
             self.assertIn('Cannot init network on', str(error))
             self.assertEqual(0, m_subp.call_count)
 
-    def test_ephemeral_ipv4_network_errors_invalid_mask(self, m_subp):
+    def test_ephemeral_ipv4_network_errors_invalid_mask_prefix(self, m_subp):
         """Raise an error when prefix_or_mask is not a netmask or prefix."""
         params = {
             'interface': 'eth0', 'ip': '192.168.2.2',
diff --git a/cloudinit/netinfo.py b/cloudinit/netinfo.py
index 39c79de..8f99d99 100644
--- a/cloudinit/netinfo.py
+++ b/cloudinit/netinfo.py
@@ -13,7 +13,7 @@ import re
 from cloudinit import log as logging
 from cloudinit import util
 
-from prettytable import PrettyTable
+from cloudinit.simpletable import SimpleTable
 
 LOG = logging.getLogger()
 
@@ -170,7 +170,7 @@ def netdev_pformat():
         lines.append(util.center("Net device info failed", '!', 80))
     else:
         fields = ['Device', 'Up', 'Address', 'Mask', 'Scope', 'Hw-Address']
-        tbl = PrettyTable(fields)
+        tbl = SimpleTable(fields)
         for (dev, d) in netdev.items():
             tbl.add_row([dev, d["up"], d["addr"], d["mask"], ".", d["hwaddr"]])
             if d.get('addr6'):
@@ -194,7 +194,7 @@ def route_pformat():
         if routes.get('ipv4'):
             fields_v4 = ['Route', 'Destination', 'Gateway',
                          'Genmask', 'Interface', 'Flags']
-            tbl_v4 = PrettyTable(fields_v4)
+            tbl_v4 = SimpleTable(fields_v4)
             for (n, r) in enumerate(routes.get('ipv4')):
                 route_id = str(n)
                 tbl_v4.add_row([route_id, r['destination'],
@@ -207,7 +207,7 @@ def route_pformat():
         if routes.get('ipv6'):
             fields_v6 = ['Route', 'Proto', 'Recv-Q', 'Send-Q',
                          'Local Address', 'Foreign Address', 'State']
-            tbl_v6 = PrettyTable(fields_v6)
+            tbl_v6 = SimpleTable(fields_v6)
             for (n, r) in enumerate(routes.get('ipv6')):
                 route_id = str(n)
                 tbl_v6.add_row([route_id, r['proto'],
diff --git a/cloudinit/simpletable.py b/cloudinit/simpletable.py
new file mode 100644
index 0000000..9060322
--- /dev/null
+++ b/cloudinit/simpletable.py
@@ -0,0 +1,62 @@
+# Copyright (C) 2017 Amazon.com, Inc. or its affiliates
+#
+# Author: Ethan Faust <efaust@xxxxxxxxxx>
+# Author: Andrew Jorgensen <ajorgens@xxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+
+class SimpleTable(object):
+    """A minimal implementation of PrettyTable
+    for distribution with cloud-init.
+    """
+
+    def __init__(self, fields):
+        self.fields = fields
+        self.rows = []
+
+        # initialize list of 0s the same length
+        # as the number of fields
+        self.column_widths = [0] * len(self.fields)
+        self.update_column_widths(fields)
+
+    def update_column_widths(self, values):
+        for i, value in enumerate(values):
+            self.column_widths[i] = max(
+                len(value),
+                self.column_widths[i])
+
+    def add_row(self, values):
+        if len(values) > len(self.fields):
+            raise TypeError('too many values')
+        values = [str(value) for value in values]
+        self.rows.append(values)
+        self.update_column_widths(values)
+
+    def _hdiv(self):
+        """Returns a horizontal divider for the table."""
+        return '+' + '+'.join(
+            ['-' * (w + 2) for w in self.column_widths]) + '+'
+
+    def _row(self, row):
+        """Returns a formatted row."""
+        return '|' + '|'.join(
+            [col.center(self.column_widths[i] + 2)
+                for i, col in enumerate(row)]) + '|'
+
+    def __str__(self):
+        """Returns a string representation of the table with lines around.
+
+        +-----+-----+
+        | one | two |
+        +-----+-----+
+        |  1  |  2  |
+        |  01 |  10 |
+        +-----+-----+
+        """
+        lines = [self._hdiv(), self._row(self.fields), self._hdiv()]
+        lines += [self._row(r) for r in self.rows] + [self._hdiv()]
+        return '\n'.join(lines)
+
+    def get_string(self):
+        return str(self)
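
Usage sketch for SimpleTable, matching the __str__ docstring above:

    from cloudinit.simpletable import SimpleTable

    tbl = SimpleTable(['one', 'two'])
    tbl.add_row([1, 2])
    tbl.add_row(['01', '10'])
    print(tbl)
    # +-----+-----+
    # | one | two |
    # +-----+-----+
    # |  1  |  2  |
    # |  01 |  10 |
    # +-----+-----+
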
diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py
index 380e27c..43a7e42 100644
--- a/cloudinit/sources/DataSourceAliYun.py
+++ b/cloudinit/sources/DataSourceAliYun.py
@@ -6,17 +6,20 @@ from cloudinit import sources
 from cloudinit.sources import DataSourceEc2 as EC2
 from cloudinit import util
 
-DEF_MD_VERSION = "2016-01-01"
 ALIYUN_PRODUCT = "Alibaba Cloud ECS"
 
 
 class DataSourceAliYun(EC2.DataSourceEc2):
-    metadata_urls = ["http://100.100.100.200";]
+
+    metadata_urls = ['http://100.100.100.200']
+
+    # The minimum supported metadata_version from the ec2 metadata apis
+    min_metadata_version = '2016-01-01'
+    extended_metadata_versions = []
 
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceAliYun, self).__init__(sys_cfg, distro, paths)
         self.seed_dir = os.path.join(paths.seed_dir, "AliYun")
-        self.api_ver = DEF_MD_VERSION
 
     def get_hostname(self, fqdn=False, _resolve_ip=False):
         return self.metadata.get('hostname', 'localhost.localdomain')
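
With extended_metadata_versions left empty, the version negotiation added
to DataSourceEc2 below degenerates to the minimum version, which is how
AliYun stays pinned to '2016-01-01' without a separate api_ver attribute.
A standalone sketch of that effect:

    extended_metadata_versions = []      # AliYun: nothing newer to probe
    min_metadata_version = '2016-01-01'

    api_ver = None
    for candidate in extended_metadata_versions:
        api_ver = candidate              # never reached: list is empty
        break
    print(api_ver or min_metadata_version)  # -> 2016-01-01
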
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index ed1d691..c78ad9e 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -28,8 +28,8 @@ LOG = logging.getLogger(__name__)
 CLOUD_INFO_FILE = '/etc/sysconfig/cloud-info'
 
 # Shell command lists
-CMD_PROBE_FLOPPY = ['/sbin/modprobe', 'floppy']
-CMD_UDEVADM_SETTLE = ['/sbin/udevadm', 'settle', '--timeout=5']
+CMD_PROBE_FLOPPY = ['modprobe', 'floppy']
+CMD_UDEVADM_SETTLE = ['udevadm', 'settle', '--timeout=5']
 
 META_DATA_NOT_SUPPORTED = {
     'block-device-mapping': {},
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index b5a95a1..80c2bd1 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -317,9 +317,13 @@ class DataSourceAzure(sources.DataSource):
                 LOG.debug("ssh authentication: "
                           "using fingerprint from fabirc")
 
-        missing = util.log_time(logfunc=LOG.debug, msg="waiting for files",
+        # wait very long for public SSH keys to arrive
+        # https://bugs.launchpad.net/cloud-init/+bug/1717611
+        missing = util.log_time(logfunc=LOG.debug,
+                                msg="waiting for SSH public key files",
                                 func=wait_for_files,
-                                args=(fp_files,))
+                                args=(fp_files, 900))
+
         if len(missing):
             LOG.warning("Did not find files, but going on: %s", missing)
 
@@ -656,7 +660,7 @@ def pubkeys_from_crt_files(flist):
     return pubkeys
 
 
-def wait_for_files(flist, maxwait=60, naplen=.5, log_pre=""):
+def wait_for_files(flist, maxwait, naplen=.5, log_pre=""):
     need = set(flist)
     waited = 0
     while True:
diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py
index 0188d89..9dc473f 100644
--- a/cloudinit/sources/DataSourceCloudStack.py
+++ b/cloudinit/sources/DataSourceCloudStack.py
@@ -19,6 +19,7 @@ import time
 
 from cloudinit import ec2_utils as ec2
 from cloudinit import log as logging
+from cloudinit.net import dhcp
 from cloudinit import sources
 from cloudinit import url_helper as uhelp
 from cloudinit import util
@@ -187,22 +188,36 @@ def get_dhclient_d():
     return None
 
 
-def get_latest_lease():
+def get_latest_lease(lease_d=None):
     # find latest lease file
-    lease_d = get_dhclient_d()
+    if lease_d is None:
+        lease_d = get_dhclient_d()
     if not lease_d:
         return None
     lease_files = os.listdir(lease_d)
     latest_mtime = -1
     latest_file = None
-    for file_name in lease_files:
-        if file_name.startswith("dhclient.") and \
-           (file_name.endswith(".lease") or file_name.endswith(".leases")):
-            abs_path = os.path.join(lease_d, file_name)
-            mtime = os.path.getmtime(abs_path)
-            if mtime > latest_mtime:
-                latest_mtime = mtime
-                latest_file = abs_path
+
+    # lease files are named inconsistently across distros.
+    # We assume that 'dhclient6' indicates ipv6 and ignore it.
+    # ubuntu:
+    #   dhclient.<iface>.leases, dhclient.leases, dhclient6.leases
+    # centos6:
+    #   dhclient-<iface>.leases, dhclient6.leases
+    # centos7: ('--' is not a typo)
+    #   dhclient--<iface>.lease, dhclient6.leases
+    for fname in lease_files:
+        if fname.startswith("dhclient6"):
+            # avoid files that start with dhclient6 assuming dhcpv6.
+            continue
+        if not (fname.endswith(".lease") or fname.endswith(".leases")):
+            continue
+
+        abs_path = os.path.join(lease_d, fname)
+        mtime = os.path.getmtime(abs_path)
+        if mtime > latest_mtime:
+            latest_mtime = mtime
+            latest_file = abs_path
     return latest_file
 
 
@@ -210,20 +225,28 @@ def get_vr_address():
     # Get the address of the virtual router via dhcp leases
     # If no virtual router is detected, fallback on default gateway.
     # See http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/virtual_machines/user-data.html # noqa
+
+    # Try networkd first...
+    latest_address = dhcp.networkd_get_option_from_leases('SERVER_ADDRESS')
+    if latest_address:
+        LOG.debug("Found SERVER_ADDRESS '%s' via networkd_leases",
+                  latest_address)
+        return latest_address
+
+    # Try dhcp lease files next...
     lease_file = get_latest_lease()
     if not lease_file:
         LOG.debug("No lease file found, using default gateway")
         return get_default_gateway()
 
-    latest_address = None
     with open(lease_file, "r") as fd:
         for line in fd:
             if "dhcp-server-identifier" in line:
                 words = line.strip(" ;\r\n").split(" ")
                 if len(words) > 2:
-                    dhcp = words[2]
-                    LOG.debug("Found DHCP identifier %s", dhcp)
-                    latest_address = dhcp
+                    dhcptok = words[2]
+                    LOG.debug("Found DHCP identifier %s", dhcptok)
+                    latest_address = dhcptok
     if not latest_address:
         # No virtual router found, fallback on default gateway
         LOG.debug("No DHCP found, using default gateway")
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index 4ec9592..41367a8 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -13,6 +13,8 @@ import time
 
 from cloudinit import ec2_utils as ec2
 from cloudinit import log as logging
+from cloudinit import net
+from cloudinit.net import dhcp
 from cloudinit import sources
 from cloudinit import url_helper as uhelp
 from cloudinit import util
@@ -20,12 +22,13 @@ from cloudinit import warnings
 
 LOG = logging.getLogger(__name__)
 
-# Which version we are requesting of the ec2 metadata apis
-DEF_MD_VERSION = '2009-04-04'
+SKIP_METADATA_URL_CODES = frozenset([uhelp.NOT_FOUND])
 
 STRICT_ID_PATH = ("datasource", "Ec2", "strict_id")
 STRICT_ID_DEFAULT = "warn"
 
+_unset = "_unset"
+
 
 class Platforms(object):
     ALIYUN = "AliYun"
@@ -41,17 +44,30 @@ class Platforms(object):
 
 
 class DataSourceEc2(sources.DataSource):
+
     # Default metadata urls that will be used if none are provided
     # They will be checked for 'resolveability' and some of the
     # following may be discarded if they do not resolve
     metadata_urls = ["http://169.254.169.254";, "http://instance-data.:8773";]
+
+    # The minimum supported metadata_version from the ec2 metadata apis
+    min_metadata_version = '2009-04-04'
+
+    # Priority ordered list of additional metadata versions which will be tried
+    # for extended metadata content. IPv6 support comes in 2016-09-02
+    extended_metadata_versions = ['2016-09-02']
+
     _cloud_platform = None
 
+    _network_config = _unset  # Used for caching calculated network config v1
+
+    # Whether we want to get network configuration from the metadata service.
+    get_network_metadata = False
+
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
         self.metadata_address = None
         self.seed_dir = os.path.join(paths.seed_dir, "ec2")
-        self.api_ver = DEF_MD_VERSION
 
     def get_data(self):
         seed_ret = {}
@@ -73,21 +89,27 @@ class DataSourceEc2(sources.DataSource):
         elif self.cloud_platform == Platforms.NO_EC2_METADATA:
             return False
 
-        try:
-            if not self.wait_for_metadata_service():
+        if self.get_network_metadata:  # Setup networking in init-local stage.
+            if util.is_FreeBSD():
+                LOG.debug("FreeBSD doesn't support running dhclient with -sf")
                 return False
-            start_time = time.time()
-            self.userdata_raw = \
-                ec2.get_instance_userdata(self.api_ver, self.metadata_address)
-            self.metadata = ec2.get_instance_metadata(self.api_ver,
-                                                      self.metadata_address)
-            LOG.debug("Crawl of metadata service took %.3f seconds",
-                      time.time() - start_time)
-            return True
-        except Exception:
-            util.logexc(LOG, "Failed reading from metadata address %s",
-                        self.metadata_address)
-            return False
+            dhcp_leases = dhcp.maybe_perform_dhcp_discovery()
+            if not dhcp_leases:
+                # DataSourceEc2Local failed in init-local stage. DataSourceEc2
+                # will still run in init-network stage.
+                return False
+            dhcp_opts = dhcp_leases[-1]
+            net_params = {'interface': dhcp_opts.get('interface'),
+                          'ip': dhcp_opts.get('fixed-address'),
+                          'prefix_or_mask': dhcp_opts.get('subnet-mask'),
+                          'broadcast': dhcp_opts.get('broadcast-address'),
+                          'router': dhcp_opts.get('routers')}
+            with net.EphemeralIPv4Network(**net_params):
+                return util.log_time(
+                    logfunc=LOG.debug, msg='Crawl of metadata service',
+                    func=self._crawl_metadata)
+        else:
+            return self._crawl_metadata()
 
     @property
     def launch_index(self):
@@ -95,6 +117,32 @@ class DataSourceEc2(sources.DataSource):
             return None
         return self.metadata.get('ami-launch-index')
 
+    def get_metadata_api_version(self):
+        """Get the best supported api version from the metadata service.
+
+        Loop through all extended support metadata versions in order and
+        return the most-fully featured metadata api version discovered.
+
+        If extended_metadata_versions aren't present, return the datasource's
+        min_metadata_version.
+        """
+        # Assumes metadata service is already up
+        for api_ver in self.extended_metadata_versions:
+            url = '{0}/{1}/meta-data/instance-id'.format(
+                self.metadata_address, api_ver)
+            try:
+                resp = uhelp.readurl(url=url)
+            except uhelp.UrlError as e:
+                LOG.debug('url %s raised exception %s', url, e)
+            else:
+                if resp.code == 200:
+                    LOG.debug('Found preferred metadata version %s', api_ver)
+                    return api_ver
+                elif resp.code == 404:
+                    msg = 'Metadata api version %s not present. Headers: %s'
+                    LOG.debug(msg, api_ver, resp.headers)
+        return self.min_metadata_version
+
     def get_instance_id(self):
         return self.metadata['instance-id']
 
@@ -138,21 +186,22 @@ class DataSourceEc2(sources.DataSource):
         urls = []
         url2base = {}
         for url in mdurls:
-            cur = "%s/%s/meta-data/instance-id" % (url, self.api_ver)
+            cur = '{0}/{1}/meta-data/instance-id'.format(
+                url, self.min_metadata_version)
             urls.append(cur)
             url2base[cur] = url
 
         start_time = time.time()
-        url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
-                                 timeout=timeout, status_cb=LOG.warn)
+        url = uhelp.wait_for_url(
+            urls=urls, max_wait=max_wait, timeout=timeout, status_cb=LOG.warn)
 
         if url:
-            LOG.debug("Using metadata source: '%s'", url2base[url])
+            self.metadata_address = url2base[url]
+            LOG.debug("Using metadata source: '%s'", self.metadata_address)
         else:
             LOG.critical("Giving up on md from %s after %s seconds",
                          urls, int(time.time() - start_time))
 
-        self.metadata_address = url2base.get(url)
         return bool(url)
 
     def device_name_to_device(self, name):
@@ -234,6 +283,68 @@ class DataSourceEc2(sources.DataSource):
                 util.get_cfg_by_path(cfg, STRICT_ID_PATH, STRICT_ID_DEFAULT),
                 cfg)
 
+    @property
+    def network_config(self):
+        """Return a network config dict for rendering ENI or netplan files."""
+        if self._network_config != _unset:
+            return self._network_config
+
+        if self.metadata is None:
+            # this would happen if get_data hadn't been called. leave as _unset
+            LOG.warning(
+                "Unexpected call to network_config when metadata is None.")
+            return None
+
+        result = None
+        net_md = self.metadata.get('network')
+        if isinstance(net_md, dict):
+            result = convert_ec2_metadata_network_config(net_md)
+        else:
+            LOG.warning("unexpected metadata 'network' key not valid: %s",
+                        net_md)
+        self._network_config = result
+
+        return self._network_config
+
+    def _crawl_metadata(self):
+        """Crawl metadata service when available.
+
+        @returns: True on success, False otherwise.
+        """
+        if not self.wait_for_metadata_service():
+            return False
+        api_version = self.get_metadata_api_version()
+        try:
+            self.userdata_raw = ec2.get_instance_userdata(
+                api_version, self.metadata_address)
+            self.metadata = ec2.get_instance_metadata(
+                api_version, self.metadata_address)
+        except Exception:
+            util.logexc(
+                LOG, "Failed reading from metadata address %s",
+                self.metadata_address)
+            return False
+        return True
+
+
+class DataSourceEc2Local(DataSourceEc2):
+    """Datasource run at init-local which sets up network to query metadata.
+
+    In init-local, no network is available. This subclass sets up minimal
+    networking with dhclient on a viable nic so that it can talk to the
+    metadata service. If the metadata service provides network configuration
+    then render the network configuration for that instance based on metadata.
+    """
+    get_network_metadata = True  # Get metadata network config if present
+
+    def get_data(self):
+        supported_platforms = (Platforms.AWS,)
+        if self.cloud_platform not in supported_platforms:
+            LOG.debug("Local Ec2 mode only supported on %s, not %s",
+                      supported_platforms, self.cloud_platform)
+            return False
+        return super(DataSourceEc2Local, self).get_data()
+
 
 def read_strict_mode(cfgval, default):
     try:
@@ -347,8 +458,39 @@ def _collect_platform_data():
     return data
 
 
+def convert_ec2_metadata_network_config(network_md, macs_to_nics=None):
+    """Convert ec2 metadata to network config version 1 data dict.
+
+    @param: network_md: 'network' portion of EC2 metadata.
+       generally formed as {"interfaces": {"macs": {}} where
+       'macs' is a dictionary with mac address as key and contents like:
+       {"device-number": "0", "interface-id": "...", "local-ipv4s": ...}
+    @param: macs_to_nics: Optional dict of mac addresses to nic names. If
+       not provided, get_interfaces_by_mac is called to get it from the OS.
+
+    @return A dict of network config version 1 based on the metadata and macs.
+    """
+    netcfg = {'version': 1, 'config': []}
+    if not macs_to_nics:
+        macs_to_nics = net.get_interfaces_by_mac()
+    macs_metadata = network_md['interfaces']['macs']
+    for mac, nic_name in macs_to_nics.items():
+        nic_metadata = macs_metadata.get(mac)
+        if not nic_metadata:
+            continue  # Not a physical nic represented in metadata
+        nic_cfg = {'type': 'physical', 'name': nic_name, 'subnets': []}
+        nic_cfg['mac_address'] = mac
+        if nic_metadata.get('public-ipv4s'):
+            nic_cfg['subnets'].append({'type': 'dhcp4'})
+        if nic_metadata.get('ipv6s'):
+            nic_cfg['subnets'].append({'type': 'dhcp6'})
+        netcfg['config'].append(nic_cfg)
+    return netcfg
+
+
 # Used to match classes to dependencies
 datasources = [
+    (DataSourceEc2Local, (sources.DEP_FILESYSTEM,)),  # Run at init-local
     (DataSourceEc2, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
 ]
 
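
A round-trip sketch for convert_ec2_metadata_network_config, passing
macs_to_nics explicitly so no OS lookup is needed (the mac address and
metadata values are hypothetical):

    from cloudinit.sources.DataSourceEc2 import (
        convert_ec2_metadata_network_config)

    network_md = {'interfaces': {'macs': {
        '06:17:04:d7:26:09': {'device-number': '0',
                              'public-ipv4s': '51.0.0.1',
                              'ipv6s': '2600:1f16::1'}}}}
    cfg = convert_ec2_metadata_network_config(
        network_md, macs_to_nics={'06:17:04:d7:26:09': 'eth0'})
    assert cfg == {'version': 1, 'config': [
        {'type': 'physical', 'name': 'eth0',
         'mac_address': '06:17:04:d7:26:09',
         'subnets': [{'type': 'dhcp4'}, {'type': 'dhcp6'}]}]}
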
diff --git a/cloudinit/sources/DataSourceGCE.py b/cloudinit/sources/DataSourceGCE.py
index 684eac8..ccae420 100644
--- a/cloudinit/sources/DataSourceGCE.py
+++ b/cloudinit/sources/DataSourceGCE.py
@@ -11,9 +11,8 @@ from cloudinit import util
 
 LOG = logging.getLogger(__name__)
 
-BUILTIN_DS_CONFIG = {
-    'metadata_url': 'http://metadata.google.internal/computeMetadata/v1/'
-}
+MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/'
+BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL}
 REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname')
 
 
@@ -51,75 +50,20 @@ class DataSourceGCE(sources.DataSource):
             BUILTIN_DS_CONFIG])
         self.metadata_address = self.ds_cfg['metadata_url']
 
-    # GCE takes sshKeys attribute in the format of '<user>:<public_key>'
-    # so we have to trim each key to remove the username part
-    def _trim_key(self, public_key):
-        try:
-            index = public_key.index(':')
-            if index > 0:
-                return public_key[(index + 1):]
-        except Exception:
-            return public_key
-
     def get_data(self):
-        if not platform_reports_gce():
-            return False
-
-        # url_map: (our-key, path, required, is_text)
-        url_map = [
-            ('instance-id', ('instance/id',), True, True),
-            ('availability-zone', ('instance/zone',), True, True),
-            ('local-hostname', ('instance/hostname',), True, True),
-            ('public-keys', ('project/attributes/sshKeys',
-                             'instance/attributes/ssh-keys'), False, True),
-            ('user-data', ('instance/attributes/user-data',), False, False),
-            ('user-data-encoding', ('instance/attributes/user-data-encoding',),
-             False, True),
-        ]
-
-        # if we cannot resolve the metadata server, then no point in trying
-        if not util.is_resolvable_url(self.metadata_address):
-            LOG.debug("%s is not resolvable", self.metadata_address)
-            return False
+        ret = util.log_time(
+            LOG.debug, 'Crawl of GCE metadata service',
+            read_md, kwargs={'address': self.metadata_address})
 
-        metadata_fetcher = GoogleMetadataFetcher(self.metadata_address)
-        # iterate over url_map keys to get metadata items
-        running_on_gce = False
-        for (mkey, paths, required, is_text) in url_map:
-            value = None
-            for path in paths:
-                new_value = metadata_fetcher.get_value(path, is_text)
-                if new_value is not None:
-                    value = new_value
-            if value:
-                running_on_gce = True
-            if required and value is None:
-                msg = "required key %s returned nothing. not GCE"
-                if not running_on_gce:
-                    LOG.debug(msg, mkey)
-                else:
-                    LOG.warning(msg, mkey)
-                return False
-            self.metadata[mkey] = value
-
-        if self.metadata['public-keys']:
-            lines = self.metadata['public-keys'].splitlines()
-            self.metadata['public-keys'] = [self._trim_key(k) for k in lines]
-
-        if self.metadata['availability-zone']:
-            self.metadata['availability-zone'] = self.metadata[
-                'availability-zone'].split('/')[-1]
-
-        encoding = self.metadata.get('user-data-encoding')
-        if encoding:
-            if encoding == 'base64':
-                self.metadata['user-data'] = b64decode(
-                    self.metadata['user-data'])
+        if not ret['success']:
+            if ret['platform_reports_gce']:
+                LOG.warning(ret['reason'])
             else:
-                LOG.warning('unknown user-data-encoding: %s, ignoring',
-                            encoding)
-
-        return running_on_gce
+                LOG.debug(ret['reason'])
+            return False
+        self.metadata = ret['meta-data']
+        self.userdata_raw = ret['user-data']
+        return True
 
     @property
     def launch_index(self):
@@ -136,9 +80,6 @@ class DataSourceGCE(sources.DataSource):
         # GCE has long FQDNs and has asked for short hostnames
         return self.metadata['local-hostname'].split('.')[0]
 
-    def get_userdata_raw(self):
-        return self.metadata['user-data']
-
     @property
     def availability_zone(self):
         return self.metadata['availability-zone']
@@ -148,6 +89,87 @@ class DataSourceGCE(sources.DataSource):
         return self.availability_zone.rsplit('-', 1)[0]
 
 
+def _trim_key(public_key):
+    # GCE takes sshKeys attribute in the format of '<user>:<public_key>'
+    # so we have to trim each key to remove the username part
+    try:
+        index = public_key.index(':')
+        if index > 0:
+            return public_key[(index + 1):]
+    except Exception:
+        return public_key
+
+
+def read_md(address=None, platform_check=True):
+
+    if address is None:
+        address = MD_V1_URL
+
+    ret = {'meta-data': None, 'user-data': None,
+           'success': False, 'reason': None}
+    ret['platform_reports_gce'] = platform_reports_gce()
+
+    if platform_check and not ret['platform_reports_gce']:
+        ret['reason'] = "Not running on GCE."
+        return ret
+
+    # if we cannot resolve the metadata server, then no point in trying
+    if not util.is_resolvable_url(address):
+        LOG.debug("%s is not resolvable", address)
+        ret['reason'] = 'address "%s" is not resolvable' % address
+        return ret
+
+    # url_map: (our-key, path, required, is_text)
+    url_map = [
+        ('instance-id', ('instance/id',), True, True),
+        ('availability-zone', ('instance/zone',), True, True),
+        ('local-hostname', ('instance/hostname',), True, True),
+        ('public-keys', ('project/attributes/sshKeys',
+                         'instance/attributes/ssh-keys'), False, True),
+        ('user-data', ('instance/attributes/user-data',), False, False),
+        ('user-data-encoding', ('instance/attributes/user-data-encoding',),
+         False, True),
+    ]
+
+    metadata_fetcher = GoogleMetadataFetcher(address)
+    md = {}
+    # iterate over url_map keys to get metadata items
+    for (mkey, paths, required, is_text) in url_map:
+        value = None
+        for path in paths:
+            new_value = metadata_fetcher.get_value(path, is_text)
+            if new_value is not None:
+                value = new_value
+        if required and value is None:
+            msg = "required key %s returned nothing. not GCE"
+            ret['reason'] = msg % mkey
+            return ret
+        md[mkey] = value
+
+    if md['public-keys']:
+        lines = md['public-keys'].splitlines()
+        md['public-keys'] = [_trim_key(k) for k in lines]
+
+    if md['availability-zone']:
+        md['availability-zone'] = md['availability-zone'].split('/')[-1]
+
+    encoding = md.get('user-data-encoding')
+    if encoding:
+        if encoding == 'base64':
+            md['user-data'] = b64decode(md['user-data'])
+        else:
+            LOG.warning('unknown user-data-encoding: %s, ignoring', encoding)
+
+    if 'user-data' in md:
+        ret['user-data'] = md['user-data']
+        del md['user-data']
+
+    ret['meta-data'] = md
+    ret['success'] = True
+
+    return ret
+
+
 def platform_reports_gce():
     pname = util.read_dmi_data('system-product-name') or "N/A"
     if pname == "Google Compute Engine":
@@ -173,4 +195,36 @@ datasources = [
 def get_datasource_list(depends):
     return sources.list_from_depends(depends, datasources)
 
+
+if __name__ == "__main__":
+    import argparse
+    import json
+    import sys
+
+    from base64 import b64encode
+
+    parser = argparse.ArgumentParser(description='Query GCE Metadata Service')
+    parser.add_argument("--endpoint", metavar="URL",
+                        help="The url of the metadata service.",
+                        default=MD_V1_URL)
+    parser.add_argument("--no-platform-check", dest="platform_check",
+                        help="Ignore smbios platform check",
+                        action='store_false', default=True)
+    args = parser.parse_args()
+    data = read_md(address=args.endpoint, platform_check=args.platform_check)
+    if data.get('user-data') is not None:
+        # user-data is bytes not string like other things. Handle it specially.
+        # if it can be represented as utf-8 then do so.  Otherwise print base64
+        # encoded value in the key user-data-b64.
+        try:
+            data['user-data'] = data['user-data'].decode()
+        except UnicodeDecodeError:
+            sys.stderr.write("User-data cannot be decoded. "
+                             "Writing as base64\n")
+            user_data = data.pop('user-data')
+            # b64encode returns a bytes value. decode to get the string.
+            data['user-data-b64'] = b64encode(user_data).decode()
+
+    print(json.dumps(data, indent=1, sort_keys=True, separators=(',', ': ')))
+
 # vi: ts=4 expandtab
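
A sketch of consuming read_md() directly, assuming this branch is
installed. With platform_check=False the smbios test is skipped, so off
GCE the call fails on address resolution instead and 'reason' says why:

    from cloudinit.sources.DataSourceGCE import read_md

    ret = read_md(platform_check=False)
    if ret['success']:
        md = ret['meta-data']
        print(md['instance-id'], md['local-hostname'])
    else:
        print('GCE metadata unavailable: %s' % ret['reason'])
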
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index f20c9a6..ccebf11 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -25,6 +25,8 @@ from cloudinit.sources.helpers.vmware.imc.config_file \
     import ConfigFile
 from cloudinit.sources.helpers.vmware.imc.config_nic \
     import NicConfigurator
+from cloudinit.sources.helpers.vmware.imc.config_passwd \
+    import PasswordConfigurator
 from cloudinit.sources.helpers.vmware.imc.guestcust_error \
     import GuestCustErrorEnum
 from cloudinit.sources.helpers.vmware.imc.guestcust_event \
@@ -49,6 +51,10 @@ class DataSourceOVF(sources.DataSource):
         self.cfg = {}
         self.supported_seed_starts = ("/", "file://")
         self.vmware_customization_supported = True
+        self._network_config = None
+        self._vmware_nics_to_enable = None
+        self._vmware_cust_conf = None
+        self._vmware_cust_found = False
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
@@ -58,8 +64,8 @@ class DataSourceOVF(sources.DataSource):
         found = []
         md = {}
         ud = ""
-        vmwarePlatformFound = False
-        vmwareImcConfigFilePath = ''
+        vmwareImcConfigFilePath = None
+        nicspath = None
 
         defaults = {
             "instance-id": "iid-dsovf",
@@ -99,53 +105,88 @@ class DataSourceOVF(sources.DataSource):
                         logfunc=LOG.debug,
                         msg="waiting for configuration file",
                         func=wait_for_imc_cfg_file,
-                        args=("/var/run/vmware-imc", "cust.cfg", max_wait))
+                        args=("cust.cfg", max_wait))
 
                 if vmwareImcConfigFilePath:
                     LOG.debug("Found VMware Customization Config File at %s",
                               vmwareImcConfigFilePath)
+                    nicspath = wait_for_imc_cfg_file(
+                        filename="nics.txt", maxwait=10, naplen=5)
                 else:
                     LOG.debug("Did not find VMware Customization Config File")
             else:
                 LOG.debug("Customization for VMware platform is disabled.")
 
         if vmwareImcConfigFilePath:
-            nics = ""
+            self._vmware_nics_to_enable = ""
             try:
                 cf = ConfigFile(vmwareImcConfigFilePath)
-                conf = Config(cf)
-                (md, ud, cfg) = read_vmware_imc(conf)
-                dirpath = os.path.dirname(vmwareImcConfigFilePath)
-                nics = get_nics_to_enable(dirpath)
+                self._vmware_cust_conf = Config(cf)
+                (md, ud, cfg) = read_vmware_imc(self._vmware_cust_conf)
+                self._vmware_nics_to_enable = get_nics_to_enable(nicspath)
+                markerid = self._vmware_cust_conf.marker_id
+                markerexists = check_marker_exists(markerid)
             except Exception as e:
                 LOG.debug("Error parsing the customization Config File")
                 LOG.exception(e)
                 set_customization_status(
                     GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
                     GuestCustEventEnum.GUESTCUST_EVENT_CUSTOMIZE_FAILED)
-                enable_nics(nics)
-                return False
+                raise e
             finally:
                 util.del_dir(os.path.dirname(vmwareImcConfigFilePath))
-
             try:
-                LOG.debug("Applying the Network customization")
-                nicConfigurator = NicConfigurator(conf.nics)
-                nicConfigurator.configure()
+                LOG.debug("Preparing the Network configuration")
+                self._network_config = get_network_config_from_conf(
+                    self._vmware_cust_conf,
+                    True,
+                    True,
+                    self.distro.osfamily)
             except Exception as e:
-                LOG.debug("Error applying the Network Configuration")
                 LOG.exception(e)
                 set_customization_status(
                     GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
                     GuestCustEventEnum.GUESTCUST_EVENT_NETWORK_SETUP_FAILED)
-                enable_nics(nics)
-                return False
-
-            vmwarePlatformFound = True
+                raise e
+
+            if markerid and not markerexists:
+                LOG.debug("Applying password customization")
+                pwdConfigurator = PasswordConfigurator()
+                adminpwd = self._vmware_cust_conf.admin_password
+                try:
+                    resetpwd = self._vmware_cust_conf.reset_password
+                    if adminpwd or resetpwd:
+                        pwdConfigurator.configure(adminpwd, resetpwd,
+                                                  self.distro)
+                    else:
+                        LOG.debug("Changing password is not needed")
+                except Exception as e:
+                    LOG.debug("Error applying Password Configuration: %s", e)
+                    set_customization_status(
+                        GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
+                        GuestCustEventEnum.GUESTCUST_EVENT_CUSTOMIZE_FAILED)
+                    return False
+            if markerid:
+                LOG.debug("Handle marker creation")
+                try:
+                    setup_marker_files(markerid)
+                except Exception as e:
+                    LOG.debug("Error creating marker files: %s", e)
+                    set_customization_status(
+                        GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
+                        GuestCustEventEnum.GUESTCUST_EVENT_CUSTOMIZE_FAILED)
+                    return False
+
+            self._vmware_cust_found = True
+            found.append('vmware-tools')
+
+            # TODO: Need to set the status to DONE only when the
+            # customization is done successfully.
+            enable_nics(self._vmware_nics_to_enable)
             set_customization_status(
                 GuestCustStateEnum.GUESTCUST_STATE_DONE,
                 GuestCustErrorEnum.GUESTCUST_ERROR_SUCCESS)
-            enable_nics(nics)
+
         else:
             np = {'iso': transport_iso9660,
                   'vmware-guestd': transport_vmware_guestd, }
@@ -160,7 +201,7 @@ class DataSourceOVF(sources.DataSource):
                 found.append(name)
 
         # No OVF transports were found
-        if len(found) == 0 and not vmwarePlatformFound:
+        if len(found) == 0:
             return False
 
         if 'seedfrom' in md and md['seedfrom']:
@@ -205,6 +246,10 @@ class DataSourceOVF(sources.DataSource):
     def get_config_obj(self):
         return self.cfg
 
+    @property
+    def network_config(self):
+        return self._network_config
+
 
 class DataSourceOVFNet(DataSourceOVF):
     def __init__(self, sys_cfg, distro, paths):
@@ -236,12 +281,13 @@ def get_max_wait_from_cfg(cfg):
     return max_wait
 
 
-def wait_for_imc_cfg_file(dirpath, filename, maxwait=180, naplen=5):
+def wait_for_imc_cfg_file(filename, maxwait=180, naplen=5,
+                          dirpath="/var/run/vmware-imc"):
     waited = 0
 
     while waited < maxwait:
-        fileFullPath = search_file(dirpath, filename)
-        if fileFullPath:
+        fileFullPath = os.path.join(dirpath, filename)
+        if os.path.isfile(fileFullPath):
             return fileFullPath
         LOG.debug("Waiting for VMware Customization Config File")
         time.sleep(naplen)
@@ -249,6 +295,26 @@ def wait_for_imc_cfg_file(dirpath, filename, maxwait=180, naplen=5):
     return None
 
 
+def get_network_config_from_conf(config, use_system_devices=True,
+                                 configure=False, osfamily=None):
+    nicConfigurator = NicConfigurator(config.nics, use_system_devices)
+    nics_cfg_list = nicConfigurator.generate(configure, osfamily)
+
+    return get_network_config(nics_cfg_list,
+                              config.name_servers,
+                              config.dns_suffixes)
+
+
+def get_network_config(nics=None, nameservers=None, search=None):
+    config_list = nics
+
+    if nameservers or search:
+        config_list.append({'type': 'nameserver', 'address': nameservers,
+                            'search': search})
+
+    return {'version': 1, 'config': config_list}
+
+
 # This will return a dict with some content
 #  meta-data, user-data, some config
 def read_vmware_imc(config):
@@ -264,6 +330,9 @@ def read_vmware_imc(config):
     if config.timezone:
         cfg['timezone'] = config.timezone
 
+    # Generate a unique instance-id so that re-customization will
+    # happen in cloud-init
+    md['instance-id'] = "iid-vmware-" + util.rand_str(strlen=8)
     return (md, ud, cfg)
 
 
@@ -306,26 +375,56 @@ def get_ovf_env(dirname):
     return (None, False)
 
 
-# Transport functions take no input and return
-# a 3 tuple of content, path, filename
-def transport_iso9660(require_iso=True):
+def maybe_cdrom_device(devname):
+    """Test if devname matches known list of devices which may contain iso9660
+       filesystems.
 
-    # default_regex matches values in
-    # /lib/udev/rules.d/60-cdrom_id.rules
-    # KERNEL!="sr[0-9]*|hd[a-z]|xvd*", GOTO="cdrom_end"
-    envname = "CLOUD_INIT_CDROM_DEV_REGEX"
-    default_regex = "^(sr[0-9]+|hd[a-z]|xvd.*)"
+    Be helpful in accepting either knames (with no leading /dev/) or full path
+    names, but do not allow paths outside of /dev/, like /dev/foo/bar/xxx.
+    """
+    if not devname:
+        return False
+    elif not isinstance(devname, util.string_types):
+        raise ValueError("Unexpected input for devname: %s" % devname)
+
+    # resolve '..' and multi '/' elements
+    devname = os.path.normpath(devname)
+
+    # drop leading '/dev/'
+    if devname.startswith("/dev/"):
+        # partition returns tuple (before, partition, after)
+        devname = devname.partition("/dev/")[-1]
 
-    devname_regex = os.environ.get(envname, default_regex)
+    # ignore leading slash (/sr0), else fail on / in name (foo/bar/xvdc)
+    if devname.startswith("/"):
+        devname = devname.split("/")[-1]
+    elif devname.count("/") > 0:
+        return False
+
+    # if empty string
+    if not devname:
+        return False
+
+    # default_regex matches values in /lib/udev/rules.d/60-cdrom_id.rules
+    # KERNEL!="sr[0-9]*|hd[a-z]|xvd*", GOTO="cdrom_end"
+    default_regex = r"^(sr[0-9]+|hd[a-z]|xvd.*)"
+    devname_regex = os.environ.get("CLOUD_INIT_CDROM_DEV_REGEX", default_regex)
     cdmatch = re.compile(devname_regex)
 
+    return cdmatch.match(devname) is not None
+
+
+# Transport functions take no input and return
+# a 3 tuple of content, path, filename
+def transport_iso9660(require_iso=True):
+
     # Go through mounts to see if it was already mounted
     mounts = util.mounts()
     for (dev, info) in mounts.items():
         fstype = info['fstype']
         if fstype != "iso9660" and require_iso:
             continue
-        if cdmatch.match(dev[5:]) is None:  # take off '/dev/'
+        if not maybe_cdrom_device(dev):
             continue
         mp = info['mountpoint']
         (fname, contents) = get_ovf_env(mp)
@@ -337,29 +436,19 @@ def transport_iso9660(require_iso=True):
     else:
         mtype = None
 
-    devs = os.listdir("/dev/")
-    devs.sort()
+    # generate a list of devices with mtype filesystem, filter by regex
+    devs = [dev for dev in
+            util.find_devs_with("TYPE=%s" % mtype if mtype else None)
+            if maybe_cdrom_device(dev)]
     for dev in devs:
-        fullp = os.path.join("/dev/", dev)
-
-        if (fullp in mounts or
-                not cdmatch.match(dev) or os.path.isdir(fullp)):
-            continue
-
-        try:
-            # See if we can read anything at all...??
-            util.peek_file(fullp, 512)
-        except IOError:
-            continue
-
         try:
-            (fname, contents) = util.mount_cb(fullp, get_ovf_env, mtype=mtype)
+            (fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
         except util.MountFailedError:
-            LOG.debug("%s not mountable as iso9660", fullp)
+            LOG.debug("%s not mountable as iso9660", dev)
             continue
 
         if contents is not False:
-            return (contents, fullp, fname)
+            return (contents, dev, fname)
 
     return (False, None, None)
 
@@ -445,4 +534,33 @@ datasources = (
 def get_datasource_list(depends):
     return sources.list_from_depends(depends, datasources)
 
+
+# To check if marker file exists
+def check_marker_exists(markerid):
+    """
+    Check the existence of a marker file.
+    Presence of marker file determines whether a certain code path is to be
+    executed. It is needed for partial guest customization in VMware.
+    """
+    if not markerid:
+        return False
+    markerfile = "/.markerfile-" + markerid
+    if os.path.exists(markerfile):
+        return True
+    return False
+
+
+# Create a marker file
+def setup_marker_files(markerid):
+    """
+    Create a new marker file.
+    Marker files are unique to a full customization workflow in VMware
+    environment.
+    """
+    if not markerid:
+        return
+    markerfile = "/.markerfile-" + markerid
+    util.del_file("/.markerfile-*.txt")
+    open(markerfile, 'w').close()
+
 # vi: ts=4 expandtab
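
As a reviewer aid: a hedged sketch of how the new maybe_cdrom_device()
helper above is expected to classify device names, following its
docstring and the default regex taken from 60-cdrom_id.rules. This is
illustrative only, not part of the patch:

    # Expected classifications for maybe_cdrom_device(), derived from
    # the hunk above; the default regex is ^(sr[0-9]+|hd[a-z]|xvd.*).
    from cloudinit.sources.DataSourceOVF import maybe_cdrom_device

    assert maybe_cdrom_device("sr0")          # kname matches sr[0-9]+
    assert maybe_cdrom_device("/dev/sr0")     # leading /dev/ is stripped
    assert maybe_cdrom_device("xvdba")        # xvd.* matches
    assert not maybe_cdrom_device("vda")      # not a cdrom-capable name
    assert not maybe_cdrom_device("foo/bar")  # '/' outside /dev/ rejected
    assert not maybe_cdrom_device("")         # empty name rejected
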
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 952caf3..9a43fbe 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -44,6 +44,7 @@ class DataSourceNotFoundException(Exception):
 class DataSource(object):
 
     dsmode = DSMODE_NETWORK
+    default_locale = 'en_US.UTF-8'
 
     def __init__(self, sys_cfg, distro, paths, ud_proc=None):
         self.sys_cfg = sys_cfg
@@ -150,7 +151,13 @@ class DataSource(object):
         return None
 
     def get_locale(self):
-        return 'en_US.UTF-8'
+        """Default locale is en_US.UTF-8, but allow distros to override"""
+        locale = self.default_locale
+        try:
+            locale = self.distro.get_locale()
+        except NotImplementedError:
+            pass
+        return locale
 
     @property
     def availability_zone(self):
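
The get_locale() change above lets a distro override the datasource
default. A minimal sketch of the contract, assuming a hypothetical
distro-side get_locale() implementation (the base class raising
NotImplementedError preserves the old en_US.UTF-8 behavior):

    # Sketch: the datasource falls back to default_locale unless the
    # distro implements get_locale() itself.
    class BaseDistro(object):
        def get_locale(self):
            raise NotImplementedError  # datasource uses default_locale

    class SuseLikeDistro(BaseDistro):
        def get_locale(self):
            return 'C.UTF-8'  # hypothetical distro-specific choice
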
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index e22409d..959b1bd 100644
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -6,16 +6,16 @@ import os
 import re
 import socket
 import struct
-import tempfile
 import time
 
+from cloudinit.net import dhcp
 from cloudinit import stages
+from cloudinit import temp_utils
 from contextlib import contextmanager
 from xml.etree import ElementTree
 
 from cloudinit import util
 
-
 LOG = logging.getLogger(__name__)
 
 
@@ -111,7 +111,7 @@ class OpenSSLManager(object):
     }
 
     def __init__(self):
-        self.tmpdir = tempfile.mkdtemp()
+        self.tmpdir = temp_utils.mkdtemp()
         self.certificate = None
         self.generate_certificate()
 
@@ -239,6 +239,11 @@ class WALinuxAgentShim(object):
         return socket.inet_ntoa(packed_bytes)
 
     @staticmethod
+    def _networkd_get_value_from_leases(leases_d=None):
+        return dhcp.networkd_get_option_from_leases(
+            'OPTION_245', leases_d=leases_d)
+
+    @staticmethod
     def _get_value_from_leases_file(fallback_lease_file):
         leases = []
         content = util.load_file(fallback_lease_file)
@@ -287,12 +292,15 @@ class WALinuxAgentShim(object):
 
     @staticmethod
     def find_endpoint(fallback_lease_file=None):
-        LOG.debug('Finding Azure endpoint...')
         value = None
-        # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json
-        # a dhclient exit hook that calls cloud-init-dhclient-hook
-        dhcp_options = WALinuxAgentShim._load_dhclient_json()
-        value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options)
+        LOG.debug('Finding Azure endpoint from networkd...')
+        value = WALinuxAgentShim._networkd_get_value_from_leases()
+        if value is None:
+            # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json
+            # a dhclient exit hook that calls cloud-init-dhclient-hook
+            LOG.debug('Finding Azure endpoint from hook json...')
+            dhcp_options = WALinuxAgentShim._load_dhclient_json()
+            value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options)
         if value is None:
             # Fallback and check the leases file if unsuccessful
             LOG.debug("Unable to find endpoint in dhclient logs. "
diff --git a/cloudinit/sources/helpers/vmware/imc/config.py b/cloudinit/sources/helpers/vmware/imc/config.py
index 9a5e3a8..49d441d 100644
--- a/cloudinit/sources/helpers/vmware/imc/config.py
+++ b/cloudinit/sources/helpers/vmware/imc/config.py
@@ -5,6 +5,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
+
 from .nic import Nic
 
 
@@ -14,13 +15,16 @@ class Config(object):
     Specification file.
     """
 
+    CUSTOM_SCRIPT = 'CUSTOM-SCRIPT|SCRIPT-NAME'
     DNS = 'DNS|NAMESERVER|'
-    SUFFIX = 'DNS|SUFFIX|'
+    DOMAINNAME = 'NETWORK|DOMAINNAME'
+    HOSTNAME = 'NETWORK|HOSTNAME'
+    MARKERID = 'MISC|MARKER-ID'
     PASS = 'PASSWORD|-PASS'
+    RESETPASS = 'PASSWORD|RESET'
+    SUFFIX = 'DNS|SUFFIX|'
     TIMEZONE = 'DATETIME|TIMEZONE'
     UTC = 'DATETIME|UTC'
-    HOSTNAME = 'NETWORK|HOSTNAME'
-    DOMAINNAME = 'NETWORK|DOMAINNAME'
 
     def __init__(self, configFile):
         self._configFile = configFile
@@ -82,4 +86,18 @@ class Config(object):
 
         return res
 
+    @property
+    def reset_password(self):
+        """Retreives if the root password needs to be reset."""
+        resetPass = self._configFile.get(Config.RESETPASS, 'no')
+        resetPass = resetPass.lower()
+        if resetPass not in ('yes', 'no'):
+            raise ValueError('ResetPassword value should be yes/no')
+        return resetPass == 'yes'
+
+    @property
+    def marker_id(self):
+        """Returns marker id."""
+        return self._configFile.get(Config.MARKERID, None)
+
 # vi: ts=4 expandtab
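
A hedged usage sketch for the two new Config properties; the 'cust.cfg'
path and the ConfigFile loader import are assumptions for illustration:

    from cloudinit.sources.helpers.vmware.imc.config_file import ConfigFile
    from cloudinit.sources.helpers.vmware.imc.config import Config

    conf = Config(ConfigFile('cust.cfg'))  # hypothetical spec file
    if conf.reset_password:  # ValueError unless PASSWORD|RESET is yes/no
        print('root password will be expired')
    print('marker id:', conf.marker_id)  # None when MISC|MARKER-ID absent
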
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index 67ac21d..2fb07c5 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -9,22 +9,48 @@ import logging
 import os
 import re
 
+from cloudinit.net.network_state import mask_to_net_prefix
 from cloudinit import util
 
 logger = logging.getLogger(__name__)
 
 
+def gen_subnet(ip, netmask):
+    """
+    Return the subnet for a given ip address and a netmask
+    @return (str): the subnet
+    @param ip: ip address
+    @param netmask: netmask
+    """
+    ip_array = ip.split(".")
+    mask_array = netmask.split(".")
+    result = []
+    for index in list(range(4)):
+        result.append(int(ip_array[index]) & int(mask_array[index]))
+
+    return ".".join([str(x) for x in result])
+
+
 class NicConfigurator(object):
-    def __init__(self, nics):
+    def __init__(self, nics, use_system_devices=True):
         """
         Initialize the Nic Configurator
         @param nics (list) an array of nics to configure
+        @param use_system_devices (Bool) Get the MAC names from the system
+        if this is True. If False, then mac names will be retrieved from
+         the specified nics.
         """
         self.nics = nics
         self.mac2Name = {}
         self.ipv4PrimaryGateway = None
         self.ipv6PrimaryGateway = None
-        self.find_devices()
+
+        if use_system_devices:
+            self.find_devices()
+        else:
+            for nic in self.nics:
+                self.mac2Name[nic.mac.lower()] = nic.name
+
         self._primaryNic = self.get_primary_nic()
 
     def get_primary_nic(self):
@@ -61,138 +87,163 @@ class NicConfigurator(object):
 
     def gen_one_nic(self, nic):
         """
-        Return the lines needed to configure a nic
-        @return (str list): the string list to configure the nic
+        Return the config list needed to configure a nic
+        @return (list): the subnets and routes list to configure the nic
         @param nic (NicBase): the nic to configure
         """
-        lines = []
-        name = self.mac2Name.get(nic.mac.lower())
+        mac = nic.mac.lower()
+        name = self.mac2Name.get(mac)
         if not name:
             raise ValueError('No known device has MACADDR: %s' % nic.mac)
 
-        if nic.onboot:
-            lines.append('auto %s' % name)
+        nics_cfg_list = []
+
+        cfg = {'type': 'physical', 'name': name, 'mac_address': mac}
+
+        subnet_list = []
+        route_list = []
 
         # Customize IPv4
-        lines.extend(self.gen_ipv4(name, nic))
+        (subnets, routes) = self.gen_ipv4(name, nic)
+        subnet_list.extend(subnets)
+        route_list.extend(routes)
 
         # Customize IPv6
-        lines.extend(self.gen_ipv6(name, nic))
+        (subnets, routes) = self.gen_ipv6(name, nic)
+        subnet_list.extend(subnets)
+        route_list.extend(routes)
+
+        cfg.update({'subnets': subnet_list})
 
-        lines.append('')
+        nics_cfg_list.append(cfg)
+        if route_list:
+            nics_cfg_list.extend(route_list)
 
-        return lines
+        return nics_cfg_list
 
     def gen_ipv4(self, name, nic):
         """
-        Return the lines needed to configure the IPv4 setting of a nic
-        @return (str list): the string list to configure the gateways
-        @param name (str): name of the nic
+        Return the set of subnets and routes needed to configure the
+        IPv4 settings of a nic
+        @return (set): the set of subnets and routes to configure the gateways
+        @param name (str): name of the nic
         @param nic (NicBase): the nic to configure
         """
-        lines = []
+
+        subnet = {}
+        route_list = []
+
+        if nic.onboot:
+            subnet.update({'control': 'auto'})
 
         bootproto = nic.bootProto.lower()
         if nic.ipv4_mode.lower() == 'disabled':
             bootproto = 'manual'
-        lines.append('iface %s inet %s' % (name, bootproto))
 
         if bootproto != 'static':
-            return lines
+            subnet.update({'type': 'dhcp'})
+            return ([subnet], route_list)
+        else:
+            subnet.update({'type': 'static'})
 
         # Static Ipv4
         addrs = nic.staticIpv4
         if not addrs:
-            return lines
+            return ([subnet], route_list)
 
         v4 = addrs[0]
         if v4.ip:
-            lines.append('    address %s' % v4.ip)
+            subnet.update({'address': v4.ip})
         if v4.netmask:
-            lines.append('    netmask %s' % v4.netmask)
+            subnet.update({'netmask': v4.netmask})
 
         # Add the primary gateway
         if nic.primary and v4.gateways:
             self.ipv4PrimaryGateway = v4.gateways[0]
-            lines.append('    gateway %s metric 0' % self.ipv4PrimaryGateway)
-            return lines
+            subnet.update({'gateway': self.ipv4PrimaryGateway})
+            return [subnet]
 
         # Add routes if there is no primary nic
         if not self._primaryNic:
-            lines.extend(self.gen_ipv4_route(nic, v4.gateways))
+            route_list.extend(self.gen_ipv4_route(nic,
+                                                  v4.gateways,
+                                                  v4.netmask))
 
-        return lines
+        return ([subnet], route_list)
 
-    def gen_ipv4_route(self, nic, gateways):
+    def gen_ipv4_route(self, nic, gateways, netmask):
         """
-        Return the lines needed to configure additional Ipv4 route
-        @return (str list): the string list to configure the gateways
+        Return the routes list needed to configure additional Ipv4 route
+        @return (list): the route list to configure the gateways
         @param nic (NicBase): the nic to configure
         @param gateways (str list): the list of gateways
         """
-        lines = []
+        route_list = []
+
+        cidr = mask_to_net_prefix(netmask)
 
         for gateway in gateways:
-            lines.append('    up route add default gw %s metric 10000' %
-                         gateway)
+            destination = "%s/%d" % (gen_subnet(gateway, netmask), cidr)
+            route_list.append({'destination': destination,
+                               'type': 'route',
+                               'gateway': gateway,
+                               'metric': 10000})
 
-        return lines
+        return route_list
 
     def gen_ipv6(self, name, nic):
         """
-        Return the lines needed to configure the gateways for a nic
-        @return (str list): the string list to configure the gateways
+        Return the set of subnets and routes needed to configure the
+        gateways for a nic
+        @return (set): the set of subnets and routes to configure the gateways
         @param name (str): name of the nic
         @param nic (NicBase): the nic to configure
         """
-        lines = []
 
         if not nic.staticIpv6:
-            return lines
+            return ([], [])
 
+        subnet_list = []
         # Static Ipv6
         addrs = nic.staticIpv6
-        lines.append('iface %s inet6 static' % name)
-        lines.append('    address %s' % addrs[0].ip)
-        lines.append('    netmask %s' % addrs[0].netmask)
 
-        for addr in addrs[1:]:
-            lines.append('    up ifconfig %s inet6 add %s/%s' % (name, addr.ip,
-                                                                 addr.netmask))
-        # Add the primary gateway
-        if nic.primary:
-            for addr in addrs:
-                if addr.gateway:
-                    self.ipv6PrimaryGateway = addr.gateway
-                    lines.append('    gateway %s' % self.ipv6PrimaryGateway)
-                    return lines
+        for addr in addrs:
+            subnet = {'type': 'static6',
+                      'address': addr.ip,
+                      'netmask': addr.netmask}
+            subnet_list.append(subnet)
 
-        # Add routes if there is no primary nic
-        if not self._primaryNic:
-            lines.extend(self._genIpv6Route(name, nic, addrs))
+        # TODO: Add the primary gateway
+
+        route_list = []
+        # TODO: Add routes if there is no primary nic
+        # if not self._primaryNic:
+        #    route_list.extend(self._genIpv6Route(name, nic, addrs))
 
-        return lines
+        return (subnet_list, route_list)
 
     def _genIpv6Route(self, name, nic, addrs):
-        lines = []
+        route_list = []
 
         for addr in addrs:
-            lines.append('    up route -A inet6 add default gw '
-                         '%s metric 10000' % addr.gateway)
+            route_list.append({'type': 'route',
+                               'gateway': addr.gateway,
+                               'metric': 10000})
+
+        return route_list
 
-        return lines
+    def generate(self, configure=False, osfamily=None):
+        """Return the config elements that are needed to configure the nics"""
+        if configure:
+            logger.info("Configuring the interfaces file")
+            self.configure(osfamily)
 
-    def generate(self):
-        """Return the lines that is needed to configure the nics"""
-        lines = []
-        lines.append('iface lo inet loopback')
-        lines.append('auto lo')
-        lines.append('')
+        nics_cfg_list = []
 
         for nic in self.nics:
-            lines.extend(self.gen_one_nic(nic))
+            nics_cfg_list.extend(self.gen_one_nic(nic))
 
-        return lines
+        return nics_cfg_list
 
     def clear_dhcp(self):
         logger.info('Clearing DHCP leases')
@@ -201,11 +252,16 @@ class NicConfigurator(object):
         util.subp(["pkill", "dhclient"], rcs=[0, 1])
         util.subp(["rm", "-f", "/var/lib/dhcp/*"])
 
-    def configure(self):
+    def configure(self, osfamily=None):
         """
-        Configure the /etc/network/intefaces
+        Configure the /etc/network/interfaces
         Make a back up of the original
         """
+
+        if not osfamily or osfamily != "debian":
+            logger.info("Debian OS not detected. Skipping the configure step")
+            return
+
         containingDir = '/etc/network'
 
         interfaceFile = os.path.join(containingDir, 'interfaces')
@@ -215,10 +271,13 @@ class NicConfigurator(object):
         if not os.path.exists(originalFile) and os.path.exists(interfaceFile):
             os.rename(interfaceFile, originalFile)
 
-        lines = self.generate()
-        with open(interfaceFile, 'w') as fp:
-            for line in lines:
-                fp.write('%s\n' % line)
+        lines = [
+            "# DO NOT EDIT THIS FILE BY HAND --"
+            " AUTOMATICALLY GENERATED BY cloud-init",
+            "source /etc/network/interfaces.d/*.cfg",
+        ]
+
+        util.write_file(interfaceFile, content='\n'.join(lines))
 
         self.clear_dhcp()
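
The gen_subnet() helper above computes the network address by ANDing the
dotted-quad octets of the address and netmask; a standalone worked
example of the same logic:

    # Same logic as gen_subnet() in the hunk above, with sample values.
    def gen_subnet(ip, netmask):
        ip_array = ip.split(".")
        mask_array = netmask.split(".")
        return ".".join(str(int(ip_array[i]) & int(mask_array[i]))
                        for i in range(4))

    assert gen_subnet("192.168.1.10", "255.255.255.0") == "192.168.1.0"
    assert gen_subnet("10.20.30.40", "255.255.0.0") == "10.20.0.0"
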
 
diff --git a/cloudinit/sources/helpers/vmware/imc/config_passwd.py b/cloudinit/sources/helpers/vmware/imc/config_passwd.py
new file mode 100644
index 0000000..75cfbaa
--- /dev/null
+++ b/cloudinit/sources/helpers/vmware/imc/config_passwd.py
@@ -0,0 +1,67 @@
+#    Copyright (C) 2016 Canonical Ltd.
+#    Copyright (C) 2016 VMware INC.
+#
+#    Author: Maitreyee Saikia <msaikia@xxxxxxxxxx>
+#
+#    This file is part of cloud-init. See LICENSE file for license information.
+
+
+import logging
+import os
+
+from cloudinit import util
+
+LOG = logging.getLogger(__name__)
+
+
+class PasswordConfigurator(object):
+    """
+    Class for changing configurations related to passwords in a VM. Includes
+    setting and expiring passwords.
+    """
+    def configure(self, passwd, resetPasswd, distro):
+        """
+        Main method to perform all functionalities based on configuration file
+        inputs.
+        @param passwd: encoded admin password.
+        @param resetPasswd: boolean to determine if password needs to be reset.
+        @return cfg: dict to be used by cloud-init set_passwd code.
+        """
+        LOG.info('Starting password configuration')
+        if passwd:
+            passwd = util.b64d(passwd)
+        allRootUsers = []
+        for line in open('/etc/passwd', 'r'):
+            if line.split(':')[2] == '0':
+                allRootUsers.append(line.split(':')[0])
+        # read shadow file and check for each user, if its uid0 or root.
+        uidUsersList = []
+        for line in open('/etc/shadow', 'r'):
+            user = line.split(':')[0]
+            if user in allRootUsers:
+                uidUsersList.append(user)
+        if passwd:
+            LOG.info('Setting admin password')
+            distro.set_passwd('root', passwd)
+        if resetPasswd:
+            self.reset_password(uidUsersList)
+        LOG.info('Configure Password completed!')
+
+    def reset_password(self, uidUserList):
+        """
+        Reset passwords using 'passwd --expire'; fall back to 'chage -d 0'
+        if passwd fails, and log a failure message if chage is unavailable.
+        @param: list of users for which to expire password.
+        """
+        LOG.info('Expiring password.')
+        for user in uidUserList:
+            try:
+                out, err = util.subp(['passwd', '--expire', user])
+            except util.ProcessExecutionError as e:
+                if os.path.exists('/usr/bin/chage'):
+                    out, e = util.subp(['chage', '-d', '0', user])
+                else:
+                    LOG.warning('Failed to expire password for %s with error: '
+                                '%s', user, e)
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/sources/helpers/vmware/imc/guestcust_util.py b/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
index 1ab6bd4..4407525 100644
--- a/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
+++ b/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
@@ -59,14 +59,16 @@ def set_customization_status(custstate, custerror, errormessage=None):
     return (out, err)
 
 
-# This will read the file nics.txt in the specified directory
-# and return the content
-def get_nics_to_enable(dirpath):
-    if not dirpath:
+def get_nics_to_enable(nicsfilepath):
+    """Reads the NICS from the specified file path and returns the content
+
+    @param nicsfilepath: Absolute file path to the NICS.txt file.
+    """
+
+    if not nicsfilepath:
         return None
 
     NICS_SIZE = 1024
-    nicsfilepath = os.path.join(dirpath, "nics.txt")
     if not os.path.exists(nicsfilepath):
         return None
 
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index a1c4a51..d045268 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -821,28 +821,35 @@ class Modules(object):
         skipped = []
         forced = []
         overridden = self.cfg.get('unverified_modules', [])
+        active_mods = []
+        all_distros = set([distros.ALL_DISTROS])
         for (mod, name, _freq, _args) in mostly_mods:
-            worked_distros = set(mod.distros)
+            worked_distros = set(mod.distros)  # Minimally [] per fixup_modules
             worked_distros.update(
                 distros.Distro.expand_osfamily(mod.osfamilies))
 
-            # module does not declare 'distros' or lists this distro
-            if not worked_distros or d_name in worked_distros:
-                continue
-
-            if name in overridden:
-                forced.append(name)
-            else:
-                skipped.append(name)
+            # Skip only when the following conditions are all met:
+            #  - distros are defined in the module != ALL_DISTROS
+            #  - the current d_name isn't in distros
+            #  - and the module is unverified and not in the unverified_modules
+            #    override list
+            if worked_distros and worked_distros != all_distros:
+                if d_name not in worked_distros:
+                    if name not in overridden:
+                        skipped.append(name)
+                        continue
+                    forced.append(name)
+            active_mods.append([mod, name, _freq, _args])
 
         if skipped:
-            LOG.info("Skipping modules %s because they are not verified "
+            LOG.info("Skipping modules '%s' because they are not verified "
                      "on distro '%s'.  To run anyway, add them to "
-                     "'unverified_modules' in config.", skipped, d_name)
+                     "'unverified_modules' in config.",
+                     ','.join(skipped), d_name)
         if forced:
-            LOG.info("running unverified_modules: %s", forced)
+            LOG.info("running unverified_modules: '%s'", ', '.join(forced))
 
-        return self._run_modules(mostly_mods)
+        return self._run_modules(active_mods)
 
 
 def read_runtime_config():
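
The rewritten filtering above boils down to one rule; a hedged
restatement as a standalone predicate (the function name and argument
order are illustrative, not part of the patch):

    # A module is skipped only if it declares specific distros (not
    # ALL_DISTROS), none match the running distro, and the user has not
    # forced it via 'unverified_modules'.
    def is_active(worked_distros, all_distros, d_name, name, overridden):
        if worked_distros and worked_distros != all_distros:
            if d_name not in worked_distros and name not in overridden:
                return False  # skipped, logged as unverified
        return True  # active, possibly forced via unverified_modules
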
diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py
new file mode 100644
index 0000000..5d7adf7
--- /dev/null
+++ b/cloudinit/temp_utils.py
@@ -0,0 +1,101 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import contextlib
+import errno
+import os
+import shutil
+import tempfile
+
+_TMPDIR = None
+_ROOT_TMPDIR = "/run/cloud-init/tmp"
+_EXE_ROOT_TMPDIR = "/var/tmp/cloud-init"
+
+
+def _tempfile_dir_arg(odir=None, needs_exe=False):
+    """Return the proper 'dir' argument for tempfile functions.
+
+    When root, cloud-init will use /run/cloud-init/tmp to avoid
+    any cleaning that a distro boot might do on /tmp (such as
+    systemd-tmpfiles-clean).
+
+    If the caller of this function (mkdtemp or mkstemp) was provided
+    with a 'dir' argument, then that is respected.
+
+    @param odir: original 'dir' arg to 'mkdtemp' or other.
+    @param needs_exe: Boolean specifying whether or not exe permissions are
+        needed for tempdir. This is needed because /run is mounted noexec.
+    """
+    if odir is not None:
+        return odir
+
+    global _TMPDIR
+    if _TMPDIR:
+        return _TMPDIR
+
+    if needs_exe:
+        tdir = _EXE_ROOT_TMPDIR
+    elif os.getuid() == 0:
+        tdir = _ROOT_TMPDIR
+    else:
+        tdir = os.environ.get('TMPDIR', '/tmp')
+    if not os.path.isdir(tdir):
+        os.makedirs(tdir)
+        os.chmod(tdir, 0o1777)
+
+    _TMPDIR = tdir
+    return tdir
+
+
+def ExtendedTemporaryFile(**kwargs):
+    kwargs['dir'] = _tempfile_dir_arg(
+        kwargs.pop('dir', None), kwargs.pop('needs_exe', False))
+    fh = tempfile.NamedTemporaryFile(**kwargs)
+    # Replace its unlink with a quiet version
+    # that does not raise errors when the
+    # file to unlink has been unlinked elsewhere..
+
+    def _unlink_if_exists(path):
+        try:
+            os.unlink(path)
+        except OSError as e:
+            if e.errno != errno.ENOENT:
+                raise e
+
+    fh.unlink = _unlink_if_exists
+
+    # Add a new method that will unlink
+    # right 'now' but still lets the exit
+    # method attempt to remove it (which will
+    # not throw due to our del file being quiet
+    # about files that are not there)
+    def unlink_now():
+        fh.unlink(fh.name)
+
+    setattr(fh, 'unlink_now', unlink_now)
+    return fh
+
+
+@contextlib.contextmanager
+def tempdir(**kwargs):
+    # This seems like it was only added in python 3.2
+    # Make it since it's useful...
+    # See: http://bugs.python.org/file12970/tempdir.patch
+    tdir = mkdtemp(**kwargs)
+    try:
+        yield tdir
+    finally:
+        shutil.rmtree(tdir)
+
+
+def mkdtemp(**kwargs):
+    kwargs['dir'] = _tempfile_dir_arg(
+        kwargs.pop('dir', None), kwargs.pop('needs_exe', False))
+    return tempfile.mkdtemp(**kwargs)
+
+
+def mkstemp(**kwargs):
+    kwargs['dir'] = _tempfile_dir_arg(
+        kwargs.pop('dir', None), kwargs.pop('needs_exe', False))
+    return tempfile.mkstemp(**kwargs)
+
+# vi: ts=4 expandtab
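
A short usage sketch for the new cloudinit.temp_utils module; directory
selection follows the _tempfile_dir_arg() rules above (/run/cloud-init/tmp
as root, $TMPDIR or /tmp otherwise, /var/tmp/cloud-init when exe
permissions are required):

    from cloudinit import temp_utils

    # tdir is created under /var/tmp/cloud-init, so scripts inside it
    # remain executable even though /run is mounted noexec.
    with temp_utils.tempdir(needs_exe=True) as tdir:
        print(tdir)

    fd, path = temp_utils.mkstemp()  # secure temp file in the chosen dir
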
diff --git a/cloudinit/tests/__init__.py b/cloudinit/tests/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cloudinit/tests/__init__.py
diff --git a/tests/unittests/helpers.py b/cloudinit/tests/helpers.py
index 08c5c46..6f88a5b 100644
--- a/tests/unittests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -82,6 +82,7 @@ def retarget_many_wrapper(new_base, am, old_func):
 
 
 class TestCase(unittest2.TestCase):
+
     def reset_global_state(self):
         """Reset any global state to its original settings.
 
@@ -100,9 +101,19 @@ class TestCase(unittest2.TestCase):
         util._LSB_RELEASE = {}
 
     def setUp(self):
-        super(unittest2.TestCase, self).setUp()
+        super(TestCase, self).setUp()
         self.reset_global_state()
 
+    def add_patch(self, target, attr, **kwargs):
+        """Patches specified target object and sets it as attr on test
+        instance also schedules cleanup"""
+        if 'autospec' not in kwargs:
+            kwargs['autospec'] = True
+        m = mock.patch(target, **kwargs)
+        p = m.start()
+        self.addCleanup(m.stop)
+        setattr(self, attr, p)
+
 
 class CiTestCase(TestCase):
     """This is the preferred test case base class unless user
@@ -150,6 +161,7 @@ class CiTestCase(TestCase):
 
 
 class ResourceUsingTestCase(CiTestCase):
+
     def setUp(self):
         super(ResourceUsingTestCase, self).setUp()
         self.resource_path = None
@@ -188,6 +200,7 @@ class ResourceUsingTestCase(CiTestCase):
 
 
 class FilesystemMockingTestCase(ResourceUsingTestCase):
+
     def setUp(self):
         super(FilesystemMockingTestCase, self).setUp()
         self.patched_funcs = ExitStack()
@@ -278,9 +291,10 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
         return root
 
 
-class HttprettyTestCase(TestCase):
+class HttprettyTestCase(CiTestCase):
     # necessary as http_proxy gets in the way of httpretty
     # https://github.com/gabrielfalcao/HTTPretty/issues/122
+
     def setUp(self):
         self.restore_proxy = os.environ.get('http_proxy')
         if self.restore_proxy is not None:
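
The new add_patch() helper above reduces mock boilerplate in tests; a
hedged usage sketch (the test class and patched target are illustrative):

    from cloudinit.tests.helpers import CiTestCase

    class TestSomething(CiTestCase):
        def test_does_not_call_subp(self):
            # Patches cloudinit.util.subp (autospec by default), stores
            # the mock as self.m_subp, and unpatches on test cleanup.
            self.add_patch('cloudinit.util.subp', 'm_subp')
            self.m_subp.assert_not_called()
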
diff --git a/cloudinit/tests/test_simpletable.py b/cloudinit/tests/test_simpletable.py
new file mode 100644
index 0000000..96bc24c
--- /dev/null
+++ b/cloudinit/tests/test_simpletable.py
@@ -0,0 +1,100 @@
+# Copyright (C) 2017 Amazon.com, Inc. or its affiliates
+#
+# Author: Andrew Jorgensen <ajorgens@xxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+"""Tests that SimpleTable works just like PrettyTable for cloud-init.
+
+Not all possible PrettyTable cases are tested because we're not trying to
+reimplement the entire library, only the minimal parts we actually use.
+"""
+
+from cloudinit.simpletable import SimpleTable
+from cloudinit.tests.helpers import CiTestCase
+
+# Examples rendered by cloud-init using PrettyTable
+NET_DEVICE_FIELDS = (
+    'Device', 'Up', 'Address', 'Mask', 'Scope', 'Hw-Address')
+NET_DEVICE_ROWS = (
+    ('ens3', True, '172.31.4.203', '255.255.240.0', '.', '0a:1f:07:15:98:70'),
+    ('ens3', True, 'fe80::81f:7ff:fe15:9870/64', '.', 'link',
+        '0a:1f:07:15:98:70'),
+    ('lo', True, '127.0.0.1', '255.0.0.0', '.', '.'),
+    ('lo', True, '::1/128', '.', 'host', '.'),
+)
+NET_DEVICE_TABLE = """\
++--------+------+----------------------------+---------------+-------+-------------------+
+| Device |  Up  |          Address           |      Mask     | Scope |     Hw-Address    |
++--------+------+----------------------------+---------------+-------+-------------------+
+|  ens3  | True |        172.31.4.203        | 255.255.240.0 |   .   | 0a:1f:07:15:98:70 |
+|  ens3  | True | fe80::81f:7ff:fe15:9870/64 |       .       |  link | 0a:1f:07:15:98:70 |
+|   lo   | True |         127.0.0.1          |   255.0.0.0   |   .   |         .         |
+|   lo   | True |          ::1/128           |       .       |  host |         .         |
++--------+------+----------------------------+---------------+-------+-------------------+"""  # noqa: E501
+ROUTE_IPV4_FIELDS = (
+    'Route', 'Destination', 'Gateway', 'Genmask', 'Interface', 'Flags')
+ROUTE_IPV4_ROWS = (
+    ('0', '0.0.0.0', '172.31.0.1', '0.0.0.0', 'ens3', 'UG'),
+    ('1', '169.254.0.0', '0.0.0.0', '255.255.0.0', 'ens3', 'U'),
+    ('2', '172.31.0.0', '0.0.0.0', '255.255.240.0', 'ens3', 'U'),
+)
+ROUTE_IPV4_TABLE = """\
++-------+-------------+------------+---------------+-----------+-------+
+| Route | Destination |  Gateway   |    Genmask    | Interface | Flags |
++-------+-------------+------------+---------------+-----------+-------+
+|   0   |   0.0.0.0   | 172.31.0.1 |    0.0.0.0    |    ens3   |   UG  |
+|   1   | 169.254.0.0 |  0.0.0.0   |  255.255.0.0  |    ens3   |   U   |
+|   2   |  172.31.0.0 |  0.0.0.0   | 255.255.240.0 |    ens3   |   U   |
++-------+-------------+------------+---------------+-----------+-------+"""
+
+AUTHORIZED_KEYS_FIELDS = (
+    'Keytype', 'Fingerprint (md5)', 'Options', 'Comment')
+AUTHORIZED_KEYS_ROWS = (
+    ('ssh-rsa', '24:c7:41:49:47:12:31:a0:de:6f:62:79:9b:13:06:36', '-',
+        'ajorgens'),
+)
+AUTHORIZED_KEYS_TABLE = """\
++---------+-------------------------------------------------+---------+----------+
+| Keytype |                Fingerprint (md5)                | Options | Comment  |
++---------+-------------------------------------------------+---------+----------+
+| ssh-rsa | 24:c7:41:49:47:12:31:a0:de:6f:62:79:9b:13:06:36 |    -    | ajorgens |
++---------+-------------------------------------------------+---------+----------+"""  # noqa: E501
+
+# from prettytable import PrettyTable
+# pt = PrettyTable(('HEADER',))
+# print(pt)
+NO_ROWS_FIELDS = ('HEADER',)
+NO_ROWS_TABLE = """\
++--------+
+| HEADER |
++--------+
++--------+"""
+
+
+class TestSimpleTable(CiTestCase):
+
+    def test_no_rows(self):
+        """An empty table is rendered as PrettyTable would have done it."""
+        table = SimpleTable(NO_ROWS_FIELDS)
+        self.assertEqual(str(table), NO_ROWS_TABLE)
+
+    def test_net_dev(self):
+        """Net device info is rendered as it was with PrettyTable."""
+        table = SimpleTable(NET_DEVICE_FIELDS)
+        for row in NET_DEVICE_ROWS:
+            table.add_row(row)
+        self.assertEqual(str(table), NET_DEVICE_TABLE)
+
+    def test_route_ipv4(self):
+        """Route IPv4 info is rendered as it was with PrettyTable."""
+        table = SimpleTable(ROUTE_IPV4_FIELDS)
+        for row in ROUTE_IPV4_ROWS:
+            table.add_row(row)
+        self.assertEqual(str(table), ROUTE_IPV4_TABLE)
+
+    def test_authorized_keys(self):
+        """SSH authorized keys are rendered as they were with PrettyTable."""
+        table = SimpleTable(AUTHORIZED_KEYS_FIELDS)
+        for row in AUTHORIZED_KEYS_ROWS:
+            table.add_row(row)
+        self.assertEqual(str(table), AUTHORIZED_KEYS_TABLE)
diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py
new file mode 100644
index 0000000..ffbb92c
--- /dev/null
+++ b/cloudinit/tests/test_temp_utils.py
@@ -0,0 +1,101 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Tests for cloudinit.temp_utils"""
+
+from cloudinit.temp_utils import mkdtemp, mkstemp
+from cloudinit.tests.helpers import CiTestCase, wrap_and_call
+
+
+class TestTempUtils(CiTestCase):
+
+    def test_mkdtemp_default_non_root(self):
+        """mkdtemp creates a dir under /tmp for the unprivileged."""
+        calls = []
+
+        def fake_mkdtemp(*args, **kwargs):
+            calls.append(kwargs)
+            return '/fake/return/path'
+
+        retval = wrap_and_call(
+            'cloudinit.temp_utils',
+            {'os.getuid': 1000,
+             'tempfile.mkdtemp': {'side_effect': fake_mkdtemp},
+             '_TMPDIR': {'new': None},
+             'os.path.isdir': True},
+            mkdtemp)
+        self.assertEqual('/fake/return/path', retval)
+        self.assertEqual([{'dir': '/tmp'}], calls)
+
+    def test_mkdtemp_default_non_root_needs_exe(self):
+        """mkdtemp creates a dir under /var/tmp/cloud-init when needs_exe."""
+        calls = []
+
+        def fake_mkdtemp(*args, **kwargs):
+            calls.append(kwargs)
+            return '/fake/return/path'
+
+        retval = wrap_and_call(
+            'cloudinit.temp_utils',
+            {'os.getuid': 1000,
+             'tempfile.mkdtemp': {'side_effect': fake_mkdtemp},
+             '_TMPDIR': {'new': None},
+             'os.path.isdir': True},
+            mkdtemp, needs_exe=True)
+        self.assertEqual('/fake/return/path', retval)
+        self.assertEqual([{'dir': '/var/tmp/cloud-init'}], calls)
+
+    def test_mkdtemp_default_root(self):
+        """mkdtemp creates a dir under /run/cloud-init for the privileged."""
+        calls = []
+
+        def fake_mkdtemp(*args, **kwargs):
+            calls.append(kwargs)
+            return '/fake/return/path'
+
+        retval = wrap_and_call(
+            'cloudinit.temp_utils',
+            {'os.getuid': 0,
+             'tempfile.mkdtemp': {'side_effect': fake_mkdtemp},
+             '_TMPDIR': {'new': None},
+             'os.path.isdir': True},
+            mkdtemp)
+        self.assertEqual('/fake/return/path', retval)
+        self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)
+
+    def test_mkstemp_default_non_root(self):
+        """mkstemp creates secure tempfile under /tmp for the unprivileged."""
+        calls = []
+
+        def fake_mkstemp(*args, **kwargs):
+            calls.append(kwargs)
+            return '/fake/return/path'
+
+        retval = wrap_and_call(
+            'cloudinit.temp_utils',
+            {'os.getuid': 1000,
+             'tempfile.mkstemp': {'side_effect': fake_mkstemp},
+             '_TMPDIR': {'new': None},
+             'os.path.isdir': True},
+            mkstemp)
+        self.assertEqual('/fake/return/path', retval)
+        self.assertEqual([{'dir': '/tmp'}], calls)
+
+    def test_mkstemp_default_root(self):
+        """mkstemp creates a secure tempfile in /run/cloud-init for root."""
+        calls = []
+
+        def fake_mkstemp(*args, **kwargs):
+            calls.append(kwargs)
+            return '/fake/return/path'
+
+        retval = wrap_and_call(
+            'cloudinit.temp_utils',
+            {'os.getuid': 0,
+             'tempfile.mkstemp': {'side_effect': fake_mkstemp},
+             '_TMPDIR': {'new': None},
+             'os.path.isdir': True},
+            mkstemp)
+        self.assertEqual('/fake/return/path', retval)
+        self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
new file mode 100644
index 0000000..b778a3a
--- /dev/null
+++ b/cloudinit/tests/test_url_helper.py
@@ -0,0 +1,40 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.url_helper import oauth_headers
+from cloudinit.tests.helpers import CiTestCase, mock, skipIf
+
+
+try:
+    import oauthlib
+    assert oauthlib  # avoid pyflakes error F401: import unused
+    _missing_oauthlib_dep = False
+except ImportError:
+    _missing_oauthlib_dep = True
+
+
+class TestOAuthHeaders(CiTestCase):
+
+    def test_oauth_headers_raises_not_implemented_when_oathlib_missing(self):
+        """oauth_headers raises a NotImplemented error when oauth absent."""
+        with mock.patch.dict('sys.modules', {'oauthlib': None}):
+            with self.assertRaises(NotImplementedError) as context_manager:
+                oauth_headers(1, 2, 3, 4, 5)
+        self.assertEqual(
+            'oauth support is not available',
+            str(context_manager.exception))
+
+    @skipIf(_missing_oauthlib_dep, "No python-oauthlib dependency")
+    @mock.patch('oauthlib.oauth1.Client')
+    def test_oauth_headers_calls_oathlibclient_when_available(self, m_client):
+        """oauth_headers calls oaut1.hClient.sign with the provided url."""
+        class fakeclient(object):
+            def sign(self, url):
+                # The first and third items of the client.sign tuple are ignored
+                return ('junk', url, 'junk2')
+
+        m_client.return_value = fakeclient()
+
+        return_value = oauth_headers(
+            'url', 'consumer_key', 'token_key', 'token_secret',
+            'consumer_secret')
+        self.assertEqual('url', return_value)
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 7cf76aa..0e0f5b4 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -17,7 +17,6 @@ import time
 from email.utils import parsedate
 from functools import partial
 
-import oauthlib.oauth1 as oauth1
 from requests import exceptions
 
 from six.moves.urllib.parse import (
@@ -488,6 +487,11 @@ class OauthUrlHelper(object):
 
 def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret,
                   timestamp=None):
+    try:
+        import oauthlib.oauth1 as oauth1
+    except ImportError:
+        raise NotImplementedError('oauth support is not available')
+
     if timestamp:
         timestamp = str(timestamp)
     else:
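
The hunk above applies a deferred-import pattern so that cloud-init can
load without python-oauthlib and only fails when OAuth is actually used.
The same pattern in general form (optional_dep and do_work are
placeholders, not real names):

    def use_feature():
        try:
            import optional_dep  # hypothetical optional dependency
        except ImportError:
            raise NotImplementedError(
                'optional_dep support is not available')
        return optional_dep.do_work()
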
diff --git a/cloudinit/util.py b/cloudinit/util.py
index ce2c603..e1290aa 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -12,7 +12,6 @@ import contextlib
 import copy as obj_copy
 import ctypes
 import email
-import errno
 import glob
 import grp
 import gzip
@@ -31,9 +30,10 @@ import stat
 import string
 import subprocess
 import sys
-import tempfile
 import time
 
+from errno import ENOENT, ENOEXEC
+
 from base64 import b64decode, b64encode
 from six.moves.urllib import parse as urlparse
 
@@ -44,6 +44,7 @@ from cloudinit import importer
 from cloudinit import log as logging
 from cloudinit import mergers
 from cloudinit import safeyaml
+from cloudinit import temp_utils
 from cloudinit import type_utils
 from cloudinit import url_helper
 from cloudinit import version
@@ -239,7 +240,10 @@ class ProcessExecutionError(IOError):
             self.cmd = cmd
 
         if not description:
-            self.description = 'Unexpected error while running command.'
+            if not exit_code and errno == ENOEXEC:
+                self.description = 'Exec format error. Missing #! in script?'
+            else:
+                self.description = 'Unexpected error while running command.'
         else:
             self.description = description
 
@@ -345,26 +349,6 @@ class DecompressionError(Exception):
     pass
 
 
-def ExtendedTemporaryFile(**kwargs):
-    fh = tempfile.NamedTemporaryFile(**kwargs)
-    # Replace its unlink with a quiet version
-    # that does not raise errors when the
-    # file to unlink has been unlinked elsewhere..
-    LOG.debug("Created temporary file %s", fh.name)
-    fh.unlink = del_file
-
-    # Add a new method that will unlink
-    # right 'now' but still lets the exit
-    # method attempt to remove it (which will
-    # not throw due to our del file being quiet
-    # about files that are not there)
-    def unlink_now():
-        fh.unlink(fh.name)
-
-    setattr(fh, 'unlink_now', unlink_now)
-    return fh
-
-
 def fork_cb(child_cb, *args, **kwargs):
     fid = os.fork()
     if fid == 0:
@@ -433,7 +417,7 @@ def read_conf(fname):
     try:
         return load_yaml(load_file(fname), default={})
     except IOError as e:
-        if e.errno == errno.ENOENT:
+        if e.errno == ENOENT:
             return {}
         else:
             raise
@@ -614,6 +598,8 @@ def system_info():
             var = 'ubuntu'
         elif linux_dist == 'redhat':
             var = 'rhel'
+        elif linux_dist == 'suse':
+            var = 'suse'
         else:
             var = 'linux'
     elif system in ('windows', 'darwin', "freebsd"):
@@ -786,18 +772,6 @@ def umask(n_msk):
         os.umask(old)
 
 
-@contextlib.contextmanager
-def tempdir(**kwargs):
-    # This seems like it was only added in python 3.2
-    # Make it since its useful...
-    # See: http://bugs.python.org/file12970/tempdir.patch
-    tdir = tempfile.mkdtemp(**kwargs)
-    try:
-        yield tdir
-    finally:
-        del_dir(tdir)
-
-
 def center(text, fill, max_len):
     return '{0:{fill}{align}{size}}'.format(text, fill=fill,
                                             align="^", size=max_len)
@@ -901,7 +875,7 @@ def read_file_or_url(url, timeout=5, retries=10,
             contents = load_file(file_path, decode=False)
         except IOError as e:
             code = e.errno
-            if e.errno == errno.ENOENT:
+            if e.errno == ENOENT:
                 code = url_helper.NOT_FOUND
             raise url_helper.UrlError(cause=e, code=code, headers=None,
                                       url=url)
@@ -1247,7 +1221,7 @@ def find_devs_with(criteria=None, oformat='device',
     try:
         (out, _err) = subp(cmd, rcs=[0, 2])
     except ProcessExecutionError as e:
-        if e.errno == errno.ENOENT:
+        if e.errno == ENOENT:
             # blkid not found...
             out = ""
         else:
@@ -1285,7 +1259,7 @@ def load_file(fname, read_cb=None, quiet=False, decode=True):
     except IOError as e:
         if not quiet:
             raise
-        if e.errno != errno.ENOENT:
+        if e.errno != ENOENT:
             raise
     contents = ofh.getvalue()
     LOG.debug("Read %s bytes from %s", len(contents), fname)
@@ -1583,7 +1557,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True):
         mtypes = ['']
 
     mounted = mounts()
-    with tempdir() as tmpd:
+    with temp_utils.tempdir() as tmpd:
         umount = False
         if os.path.realpath(device) in mounted:
             mountpoint = mounted[os.path.realpath(device)]['mountpoint']
@@ -1653,7 +1627,7 @@ def del_file(path):
     try:
         os.unlink(path)
     except OSError as e:
-        if e.errno != errno.ENOENT:
+        if e.errno != ENOENT:
             raise e
 
 
@@ -1770,6 +1744,31 @@ def delete_dir_contents(dirname):
             del_file(node_fullpath)
 
 
+def subp_blob_in_tempfile(blob, *args, **kwargs):
+    """Write blob to a tempfile, and call subp with args, kwargs. Then cleanup.
+
+    'basename' as a kwarg allows providing the basename for the file.
+    The 'args' argument to subp will be updated with the full path to the
+    filename as the first argument.
+    """
+    basename = kwargs.pop('basename', "subp_blob")
+
+    if len(args) == 0 and 'args' not in kwargs:
+        args = [tuple()]
+
+    # Use tmpdir over tmpfile to avoid 'text file busy' on execute
+    with temp_utils.tempdir(needs_exe=True) as tmpd:
+        tmpf = os.path.join(tmpd, basename)
+        if 'args' in kwargs:
+            kwargs['args'] = [tmpf] + list(kwargs['args'])
+        else:
+            args = list(args)
+            args[0] = [tmpf] + args[0]
+
+        write_file(tmpf, blob, mode=0o700)
+        return subp(*args, **kwargs)
+
+
 def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
          logstring=False, decode="replace", target=None, update_env=None):
 
@@ -2281,7 +2280,7 @@ def pathprefix2dict(base, required=None, optional=None, delim=os.path.sep):
         try:
             ret[f] = load_file(base + delim + f, quiet=False, decode=False)
         except IOError as e:
-            if e.errno != errno.ENOENT:
+            if e.errno != ENOENT:
                 raise
             if f in required:
                 missing.append(f)
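
A hedged usage sketch for the new util.subp_blob_in_tempfile() above:
the blob is written mode 0o700 into a needs_exe tempdir and its path is
prepended to the command arguments (the script content and basename here
are illustrative):

    from cloudinit import util

    script = "#!/bin/sh\necho hello $1\n"
    out, err = util.subp_blob_in_tempfile(
        script, args=['world'], basename='demo.sh')
    # runs the blob as: <tempdir>/demo.sh world
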
diff --git a/cloudinit/version.py b/cloudinit/version.py
index dff4af0..3255f39 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "0.7.9"
+__VERSION__ = "17.1"
 
 FEATURES = [
     # supports network config version 1
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index f4b9069..32de9c9 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -45,9 +45,6 @@ datasource_list: ['ConfigDrive', 'Azure', 'OpenStack', 'Ec2']
 # The modules that run in the 'init' stage
 cloud_init_modules:
  - migrator
-{% if variant in ["ubuntu", "unknown", "debian"] %}
- - ubuntu-init-switch
-{% endif %}
  - seed_random
  - bootcmd
  - write-files
@@ -87,6 +84,9 @@ cloud_config_modules:
  - apt-pipelining
  - apt-configure
 {% endif %}
+{% if variant in ["suse"] %}
+ - zypper-add-repo
+{% endif %}
 {% if variant not in ["freebsd"] %}
  - ntp
 {% endif %}
@@ -130,7 +130,7 @@ cloud_final_modules:
 # (not accessible to handlers/transforms)
 system_info:
    # This will affect which distro class gets used
-{% if variant in ["centos", "debian", "fedora", "rhel", "ubuntu", "freebsd"] %}
+{% if variant in ["centos", "debian", "fedora", "rhel", "suse", "ubuntu", "freebsd"] %}
    distro: {{ variant }}
 {% else %}
    # Unknown/fallback distro.
@@ -166,13 +166,17 @@ system_info:
          primary: http://ports.ubuntu.com/ubuntu-ports
          security: http://ports.ubuntu.com/ubuntu-ports
    ssh_svcname: ssh
-{% elif variant in ["centos", "rhel", "fedora"] %}
+{% elif variant in ["centos", "rhel", "fedora", "suse"] %}
    # Default user name + that default users groups (if added/used)
    default_user:
      name: {{ variant }}
      lock_passwd: True
      gecos: {{ variant }} Cloud User
+{% if variant == "suse" %}
+     groups: [cdrom, users]
+{% else %}
      groups: [wheel, adm, systemd-journal]
+{% endif %}
      sudo: ["ALL=(ALL) NOPASSWD:ALL"]
      shell: /bin/bash
    # Other config here will be given to the distro class and/or path classes
diff --git a/debian/changelog b/debian/changelog
index e47b98c..164e1be 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,12 +1,125 @@
-cloud-init (0.7.9-233-ge586fe35-0ubuntu1~17.04.3) UNRELEASED; urgency=medium
+cloud-init (17.1-17-g45d361cb-0ubuntu1~17.04.1) zesty-proposed; urgency=medium
 
   * drop the following cherry picks, now incorporated in snapshot.
     + debian/patches/cpick-a2f8ce9c-Do-not-provide-systemd-fsck-drop...
   * debian/copyright: dep5 updates, reorganize, add Apache 2.0 license.
     (LP: #1718681)
   * debian/control: drop dependency on python3-prettytable
+  * New upstream snapshot (LP: #1721847)
+    - net: Handle bridge stp values of 0 and convert to boolean type
+      [Chad Smith]
+    - tools: Give specific --abbrev=8 to "git describe"
+    - network: bridge_stp value not always correct [Ryan Harper]
+    - tests: re-enable tox with nocloud-kvm support [Joshua Powers]
+    - systemd: remove limit on tasks created by cloud-init-final.service.
+      [Robert Schweikert]
+    - suse: Support addition of zypper repos via cloud-config.
+      [Robert Schweikert]
+    - tests: Combine integration configs and testcases [Joshua Powers]
+    - Azure, CloudStack: Support reading dhcp options from systemd-networkd.
+      [Dimitri John Ledkov]
+    - packages/debian/copyright: remove mention of boto and MIT license
+    - systemd: only mention Before=apt-daily.service on debian based distros.
+      [Robert Schweikert]
+    - Add missing simpletable and simpletable tests for failed merge
+      [Chad Smith]
+    - Remove prettytable dependency, introduce simpletable [Andrew Jorgensen]
+    - debian/copyright: dep5 updates, reorganize, add Apache 2.0 license.
+      [Joshua Powers]
+    - tests: remove dependency on shlex [Joshua Powers]
+    - AltCloud: Trust PATH for udevadm and modprobe.
+    - DataSourceOVF: use util.find_devs_with(TYPE=iso9660)
+      [Ryan Harper]
+    - tests: remove a temp file used in bootcmd tests.
+    - release 17.1
+    - doc: document GCE datasource. [Arnd Hannemann]
+    - suse: updates to templates to support openSUSE and SLES.
+      [Robert Schweikert]
+    - suse: Copy sysvinit files from redhat with slight changes.
+      [Robert Schweikert]
+    - docs: fix sphinx module schema documentation [Chad Smith]
+    - tests: Add cloudinit package to all test targets [Chad Smith]
+    - Makefile: No longer look for yaml files in obsolete ./bin/.
+    - tests: fix ds-identify unit tests to set EC2_STRICT_ID_DEFAULT.
+    - ec2: Fix maybe_perform_dhcp_discovery to use /var/tmp as a tmpdir
+      [Chad Smith]
+    - Azure: wait longer for SSH pub keys to arrive.
+      [Paul Meyer]
+    - GCE: Fix usage of user-data.
+    - cmdline: add collect-logs subcommand. [Chad Smith]
+    - CloudStack: consider dhclient lease files named with a hyphen.
+    - resizefs: Drop check for read-only device file, do not warn on
+      overlayroot. [Chad Smith]
+    - tests: Enable the NoCloud KVM platform [Joshua Powers]
+    - resizefs: pass mount point to xfs_growfs [Dusty Mabe]
+    - vmware: Enable nics before sending the SUCCESS event. [Sankar Tanguturi]
+    - cloud-config modules: honor distros definitions in each module
+      [Chad Smith]
+    - chef: Add option to pin chef omnibus install version
+      [Ethan Apodaca]
+    - tests: execute: support command as string [Joshua Powers]
+    - schema and docs: Add jsonschema to resizefs and bootcmd modules
+      [Chad Smith]
+    - tools: Add xkvm script, wrapper around qemu-system [Joshua Powers]
+    - vmware customization: return network config format
+      [Sankar Tanguturi]
+    - Ec2: only attempt to operate at local mode on known platforms.
+    - Use /run/cloud-init for tempfile operations.
+    - ds-identify: Make OpenStack return maybe on arch other than intel.
+    - tests: mock missed openstack metadata uri network_data.json
+      [Chad Smith]
+    - relocate tests/unittests/helpers.py to cloudinit/tests
+      [Lars Kellogg-Stedman]
+    - tox: add nose timer output [Joshua Powers]
+    - upstart: do not package upstart jobs, drop ubuntu-init-switch module.
+    - tests: Stop leaking calls through unmocked metadata addresses
+      [Chad Smith]
+    - distro: allow distro to specify a default locale [Ryan Harper]
+    - tests: fix two recently added tests for sles distro.
+    - url_helper: dynamically import oauthlib import from inside oauth_headers
+      [Chad Smith]
+    - tox: make xenial environment run with python3.6
+    - suse: Add support for openSUSE and return SLES to a working state.
+      [Robert Schweikert]
+    - GCE: Add a main to the GCE Datasource.
+    - ec2: Add IPv6 dhcp support to Ec2DataSource. [Chad Smith]
+    - url_helper: fail gracefully if oauthlib is not available
+      [Lars Kellogg-Stedman]
+    - cloud-init analyze: fix issues running under python 2. [Andrew Jorgensen]
+    - Configure logging module to always use UTC time.
+      [Ryan Harper]
+    - Log a helpful message if a user script does not include shebang.
+      [Andrew Jorgensen]
+    - cli: Fix command line parsing of conditionally loaded subcommands.
+      [Chad Smith]
+    - doc: Explain error behavior in user data include file format.
+      [Jason Butz]
+    - cc_landscape & cc_puppet: Fix six.StringIO use in writing configs
+      [Chad Smith]
+    - schema cli: Add schema subcommand to cloud-init cli and cc_runcmd schema
+      [Chad Smith]
+    - Debian: Remove non-free repositories from apt sources template.
+      [Joonas Kylmälä]
+    - tools: Add tooling for basic cloud-init performance analysis.
+      [Chad Smith]
+    - network: add v2 passthrough and fix parsing v2 config with bonds/bridge
+      params [Ryan Harper]
+    - doc: update capabilities with features available, link doc reference,
+      cli example [Ryan Harper]
+    - vcloud directory: Guest Customization support for passwords
+      [Maitreyee Saikia]
+    - ec2: Allow Ec2 to run in init-local using dhclient in a sandbox.
+      [Chad Smith]
+    - cc_ntp: fallback on timesyncd configuration if ntp is not installable
+      [Ryan Harper]
+    - net: Reduce duplicate code. Have get_interfaces_by_mac use
+      get_interfaces.
+    - tests: Fix build tree integration tests [Joshua Powers]
+    - sysconfig: Dont repeat header when rendering resolv.conf
+      [Ryan Harper]
+    - archlinux: Fix bug with empty dns, do not render 'lo' devices.
 
- -- Scott Moser <smoser@xxxxxxxxxx>  Mon, 18 Sep 2017 17:00:21 -0400
+ -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Fri, 06 Oct 2017 14:42:58 -0600
 
 cloud-init (0.7.9-233-ge586fe35-0ubuntu1~17.04.2) zesty; urgency=medium
 
diff --git a/doc/examples/cloud-config-chef.txt b/doc/examples/cloud-config-chef.txt
index 9d23581..58d5fdc 100644
--- a/doc/examples/cloud-config-chef.txt
+++ b/doc/examples/cloud-config-chef.txt
@@ -94,6 +94,10 @@ chef:
  # if install_type is 'omnibus', change the url to download
 omnibus_url: "https://www.chef.io/chef/install.sh"
 
+ # if install_type is 'omnibus', pass pinned version string
+ # to the install script
+ omnibus_version: "12.3.0"
+
 
 # Capture all subprocess output into a logfile
 # Useful for troubleshooting cloud-init issues
diff --git a/doc/rtd/index.rst b/doc/rtd/index.rst
index a691103..de67f36 100644
--- a/doc/rtd/index.rst
+++ b/doc/rtd/index.rst
@@ -40,6 +40,7 @@ initialization of a cloud instance.
    topics/merging.rst
    topics/network-config.rst
    topics/vendordata.rst
+   topics/debugging.rst
    topics/moreinfo.rst
    topics/hacking.rst
    topics/tests.rst
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
index 2c8770b..31eaba5 100644
--- a/doc/rtd/topics/capabilities.rst
+++ b/doc/rtd/topics/capabilities.rst
@@ -31,19 +31,49 @@ support. This allows other applications to detect what features the installed
 cloud-init supports without having to parse its version number. If present,
 this list of features will be located at ``cloudinit.version.FEATURES``.
 
-When checking if cloud-init supports a feature, in order to not break the
-detection script on older versions of cloud-init without the features list, a
-script similar to the following should be used. Note that this will exit 0 if
-the feature is supported and 1 otherwise::
+Currently defined feature names include:
 
-    import sys
-    from cloudinit import version
-    sys.exit('<FEATURE_NAME>' not in getattr(version, 'FEATURES', []))
+ - ``NETWORK_CONFIG_V1`` support for v1 networking configuration,
+   see :ref:`network_config_v1` documentation for examples.
+ - ``NETWORK_CONFIG_V2`` support for v2 networking configuration,
+   see :ref:`network_config_v2` documentation for examples.
 
-Currently defined feature names include:
 
- - ``NETWORK_CONFIG_V1`` support for v1 networking configuration, see curtin
-   documentation for examples.
+CLI Interface:
+
+``cloud-init features`` will print out each feature supported.  If cloud-init
+does not have the features subcommand, it also does not support any features
+described in this document.
+
+.. code-block:: bash
+
+  % cloud-init --help
+  usage: cloud-init [-h] [--version] [--file FILES] [--debug] [--force]
+                    {init,modules,query,single,dhclient-hook,features} ...
+
+  optional arguments:
+    -h, --help            show this help message and exit
+    --version, -v         show program's version number and exit
+    --file FILES, -f FILES
+                          additional yaml configuration files to use
+    --debug, -d           show additional pre-action logging (default: False)
+    --force               force running even if no datasource is found (use at
+                          your own risk)
+
+  Subcommands:
+    {init,modules,single,dhclient-hook,features,analyze,devel}
+      init                initializes cloud-init and performs initial modules
+      modules             activates modules using a given configuration key
+      single              run a single module
+      dhclient-hook       run the dhclient hook to record network info
+      features            list defined features
+      analyze             Devel tool: Analyze cloud-init logs and data
+      devel               Run development tools
+
+  % cloud-init features
+  NETWORK_CONFIG_V1
+  NETWORK_CONFIG_V2
+
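+For programmatic checks, the ``cloudinit.version.FEATURES`` list described
+above can still be inspected directly.  A minimal sketch that exits 0 when a
+feature is supported and 1 otherwise (``getattr`` guards against older
+versions that lack the list):
+
+.. code-block:: python
+
+  import sys
+  from cloudinit import version
+
+  # Exit 0 if the feature is present, 1 otherwise.
+  sys.exit('NETWORK_CONFIG_V1' not in getattr(version, 'FEATURES', []))
+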
 
 .. _Cloud-init: https://launchpad.net/cloud-init
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index a60f5eb..7e2854d 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -94,5 +94,6 @@ Follow for more information.
    datasources/ovf.rst
    datasources/smartos.rst
    datasources/fallback.rst
+   datasources/gce.rst
 
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/gce.rst b/doc/rtd/topics/datasources/gce.rst
new file mode 100644
index 0000000..8406695
--- /dev/null
+++ b/doc/rtd/topics/datasources/gce.rst
@@ -0,0 +1,20 @@
+.. _datasource_gce:
+
+Google Compute Engine
+=====================
+
+The GCE datasource gets its data from the internal compute metadata server.
+Metadata can be queried at the URL
+``http://metadata.google.internal/computeMetadata/v1/``
+from within an instance.  For more information see the `GCE metadata docs`_.
+
+Currently the default project and instance level metadata keys
+``project/attributes/sshKeys`` and ``instance/attributes/ssh-keys`` are merged
+to provide ``public-keys``.
+
+``user-data`` and ``user-data-encoding`` can be provided to cloud-init by
+setting those custom metadata keys for an *instance*.
+
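+As a sketch of what this looks like from inside an instance, the metadata
+server can be queried with any HTTP client as long as the required
+``Metadata-Flavor: Google`` header is sent; the key path below is one of the
+defaults mentioned above:
+
+.. code-block:: python
+
+    from urllib.request import Request, urlopen
+
+    MD_URL = 'http://metadata.google.internal/computeMetadata/v1/'
+    # The metadata server rejects requests that lack this header.
+    req = Request(MD_URL + 'instance/attributes/ssh-keys',
+                  headers={'Metadata-Flavor': 'Google'})
+    print(urlopen(req, timeout=5).read().decode())
+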
+.. _GCE metadata docs: https://cloud.google.com/compute/docs/storing-retrieving-metadata#querying
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst
new file mode 100644
index 0000000..4e43dd5
--- /dev/null
+++ b/doc/rtd/topics/debugging.rst
@@ -0,0 +1,146 @@
+********************************
+Testing and debugging cloud-init
+********************************
+
+Overview
+========
+This topic describes general approaches for testing and debugging cloud-init
+on deployed instances.
+
+
+Boot Time Analysis - cloud-init analyze
+=======================================
+Occasionally instances do not perform as well as expected, and cloud-init
+includes a simple facility to inspect which operations took cloud-init the
+longest during boot and setup.
+
+The **/usr/bin/cloud-init** script has an **analyze** sub-command which
+parses any cloud-init.log file into formatted and sorted events. It allows
+for detailed analysis of the most costly cloud-init operations, to determine
+the long pole in cloud-init configuration and setup. These subcommands
+default to reading /var/log/cloud-init.log.
+
+* ``analyze show`` Parse and organize cloud-init.log events by stage and
+  report each sub-stage with time deltas.
+
+.. code-block:: bash
+
+    $ cloud-init analyze show -i my-cloud-init.log
+    -- Boot Record 01 --
+    The total time elapsed since completing an event is printed after the "@"
+    character.
+    The time the event takes is printed after the "+" character.
+
+    Starting stage: modules-config
+    |`->config-emit_upstart ran successfully @05.47600s +00.00100s
+    |`->config-snap_config ran successfully @05.47700s +00.00100s
+    |`->config-ssh-import-id ran successfully @05.47800s +00.00200s
+    |`->config-locale ran successfully @05.48000s +00.00100s
+    ...
+
+
+* ``analyze dump`` Parse cloud-init.log into event records and return a list
+  of dictionaries that can be consumed for other reporting needs.
+
+.. code-block:: bash
+
+    $ cloud-init analyze dump -i my-cloud-init.log
+    [
+     {
+      "description": "running config modules",
+      "event_type": "start",
+      "name": "modules-config",
+      "origin": "cloudinit",
+      "timestamp": 1510807493.0
+     },...
+
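+Because the dump output is structured, it is easy to post-process.  A minimal
+sketch (assuming the dump output parses as JSON, as in the sample above) that
+collects the names of all start events:
+
+.. code-block:: python
+
+    import json
+    import subprocess
+
+    raw = subprocess.check_output(
+        ['cloud-init', 'analyze', 'dump', '-i', '/var/log/cloud-init.log'])
+    events = json.loads(raw.decode())
+    # Each record carries 'name', 'event_type', 'origin' and 'timestamp'.
+    print([e['name'] for e in events if e['event_type'] == 'start'])
+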
+* ``analyze blame`` Parse cloud-init.log into event records and sort them
+  by highest time cost for a quick assessment of areas of cloud-init that
+  may need improvement.
+
+.. code-block:: bash
+
+    $ cloud-init analyze blame -i my-cloud-init.log
+    -- Boot Record 11 --
+         00.01300s (modules-final/config-scripts-per-boot)
+         00.00400s (modules-final/config-final-message)
+         00.00100s (modules-final/config-rightscale_userdata)
+         ...
+
+
+Analyze quickstart - LXC
+---------------------------
+To quickly obtain a cloud-init log, try using LXC on any Ubuntu system:
+
+.. code-block:: bash
+
+  $ lxc init ubuntu-daily:xenial x1
+  $ lxc start x1
+  # Take lxc's cloud-init.log and pipe it to the analyzer
+  $ lxc file pull x1/var/log/cloud-init.log - | cloud-init analyze dump -i -
+  $ lxc file pull x1/var/log/cloud-init.log - | \
+  python3 -m cloudinit.analyze dump -i -
+
+Analyze quickstart - KVM
+---------------------------
+To quickly analyze a cloud-init log from a KVM instance:
+
+1. Download the current cloud image
+
+.. code-block:: bash
+
+    $ wget https://cloud-images.ubuntu.com/daily/server/xenial/current/xenial-server-cloudimg-amd64.img
+
+2. Create a snapshot image to preserve the original cloud-image
+
+.. code-block:: bash
+
+    $ qemu-img create -b xenial-server-cloudimg-amd64.img -f qcow2 \
+    test-cloudinit.qcow2
+
+3. Create a seed image with metadata using `cloud-localds`
+
+.. code-block:: bash
+
+    $ cat > user-data <<EOF
+    #cloud-config
+    password: passw0rd
+    chpasswd: { expire: False }
+    EOF
+    $ cloud-localds my-seed.img user-data
+
+4. Launch your modified VM
+
+.. code-block:: bash
+
+    $ kvm -m 512 -net nic -net user -redir tcp:2222::22 \
+        -drive file=test-cloudinit.qcow2,if=virtio,format=qcow2 \
+        -drive file=my-seed.img,if=virtio,format=raw
+
+5. Analyze the boot (blame, dump, show)
+
+.. code-block:: bash
+
+    $ ssh -p 2222 ubuntu@localhost 'cat /var/log/cloud-init.log' | \
+        cloud-init analyze blame -i -
+
+
+Running single cloud config modules
+===================================
+The ``cloud-init single`` subcommand is not called by the init system. It can
+be called manually to load the configured datasource and run a single
+cloud-config module once, using
+the cached userdata and metadata after the instance has booted. Each
+cloud-config module has a module FREQUENCY configured: PER_INSTANCE, PER_BOOT,
+PER_ONCE or PER_ALWAYS. When a module is run by cloud-init, it stores a
+semaphore file in
+``/var/lib/cloud/instance/sem/config_<module_name>.<frequency>`` which marks
+when the module last successfully ran. Presence of this semaphore file
+prevents a module from running again if it has already been run. To ensure that
+a module is run again, the desired frequency can be overridden on the
+commandline:
+
+.. code-block:: bash
+
+  $ sudo cloud-init single --name cc_ssh --frequency always
+  ...
+  Generating public/private ed25519 key pair
+  ...
+
+Inspect cloud-init.log for output of what operations were performed as a
+result.
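+
+A quick way to see which modules have already run is to list the semaphore
+directory itself; a minimal sketch using only the path described above:
+
+.. code-block:: python
+
+    import glob
+    import os
+
+    # One semaphore file per module that has completed successfully.
+    sem_dir = '/var/lib/cloud/instance/sem'
+    for sem in sorted(glob.glob(os.path.join(sem_dir, 'config_*'))):
+        print(os.path.basename(sem))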
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
index 436eb00..e25289a 100644
--- a/doc/rtd/topics/format.rst
+++ b/doc/rtd/topics/format.rst
@@ -85,6 +85,7 @@ This content is a ``include`` file.
 The file contains a list of urls, one per line.
 Each of the URLs will be read, and their content will be passed through this same set of rules.
 Ie, the content read from the URL can be gzipped, mime-multi-part, or plain text.
+If an error occurs reading a file, the remaining files will not be read.
 
 Begins with: ``#include`` or ``Content-Type: text/x-include-url``  when using a MIME archive.
 
diff --git a/doc/rtd/topics/modules.rst b/doc/rtd/topics/modules.rst
index c963c09..cdb0f41 100644
--- a/doc/rtd/topics/modules.rst
+++ b/doc/rtd/topics/modules.rst
@@ -50,7 +50,6 @@ Modules
 .. automodule:: cloudinit.config.cc_ssh_authkey_fingerprints
 .. automodule:: cloudinit.config.cc_ssh_import_id
 .. automodule:: cloudinit.config.cc_timezone
-.. automodule:: cloudinit.config.cc_ubuntu_init_switch
 .. automodule:: cloudinit.config.cc_update_etc_hosts
 .. automodule:: cloudinit.config.cc_update_hostname
 .. automodule:: cloudinit.config.cc_users_groups
diff --git a/packages/bddeb b/packages/bddeb
index 609a94f..4f2e2dd 100755
--- a/packages/bddeb
+++ b/packages/bddeb
@@ -21,8 +21,9 @@ def find_root():
 if "avoid-pep8-E402-import-not-top-of-file":
     # Use the util functions from cloudinit
     sys.path.insert(0, find_root())
-    from cloudinit import templater
     from cloudinit import util
+    from cloudinit import temp_utils
+    from cloudinit import templater
 
 DEBUILD_ARGS = ["-S", "-d"]
 
@@ -112,8 +113,7 @@ def get_parser():
     parser.add_argument("--init-system", dest="init_system",
                         help=("build deb with INIT_SYSTEM=xxx"
                               " (default: %(default)s"),
-                        default=os.environ.get("INIT_SYSTEM",
-                                               "upstart,systemd"))
+                        default=os.environ.get("INIT_SYSTEM", "systemd"))
 
     parser.add_argument("--release", dest="release",
                         help=("build with changelog referencing RELEASE"),
@@ -149,7 +149,7 @@ def main():
         capture = False
 
     templ_data = {'debian_release': args.release}
-    with util.tempdir() as tdir:
+    with temp_utils.tempdir() as tdir:
 
         # output like 0.7.6-1022-g36e92d3
         ver_data = read_version()
diff --git a/packages/debian/copyright b/packages/debian/copyright
index c9c7d23..598cda1 100644
--- a/packages/debian/copyright
+++ b/packages/debian/copyright
@@ -1,33 +1,28 @@
-Format-Specification: http://svn.debian.org/wsvn/dep/web/deps/dep5.mdwn?op=file&rev=135
-Name: cloud-init
-Maintainer: Scott Moser <scott.moser@xxxxxxxxxxxxx>
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: cloud-init
+Upstream-Contact: cloud-init-dev@xxxxxxxxxxxxxxxxxxx
 Source: https://launchpad.net/cloud-init
 
-This package was debianized by Soren Hansen <soren@xxxxxxxxxx> on
-Thu, 04 Sep 2008 12:49:15 +0200 as ec2-init.  It was later renamed to
-cloud-init by Scott Moser <scott.moser@xxxxxxxxxxxxx>
-
-Upstream Author: Scott Moser <smoser@xxxxxxxxxxxxx>
-    Soren Hansen <soren@xxxxxxxxxxxxx>
-    Chuck Short <chuck.short@xxxxxxxxxxxxx>
-
-Copyright: 2010, Canonical Ltd. 
+Files: *
+Copyright: 2010, Canonical Ltd.
 License: GPL-3 or Apache-2.0
+
 License: GPL-3
  This program is free software: you can redistribute it and/or modify
  it under the terms of the GNU General Public License version 3, as
  published by the Free Software Foundation.
-
+ .
  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.
-
+ .
  You should have received a copy of the GNU General Public License
  along with this program.  If not, see <http://www.gnu.org/licenses/>.
-
+ .
  The complete text of the GPL version 3 can be seen in
  /usr/share/common-licenses/GPL-3.
+
 License: Apache-2.0
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
diff --git a/packages/debian/dirs b/packages/debian/dirs
index 9a633c6..1315cf8 100644
--- a/packages/debian/dirs
+++ b/packages/debian/dirs
@@ -1,6 +1,5 @@
 var/lib/cloud
 usr/bin
-etc/init
 usr/share/doc/cloud
 etc/cloud
 lib/udev/rules.d
diff --git a/packages/debian/rules.in b/packages/debian/rules.in
index 053b764..4aa907e 100755
--- a/packages/debian/rules.in
+++ b/packages/debian/rules.in
@@ -1,6 +1,6 @@
 ## template:basic
 #!/usr/bin/make -f
-INIT_SYSTEM ?= upstart,systemd
+INIT_SYSTEM ?= systemd
 export PYBUILD_INSTALL_ARGS=--init-system=$(INIT_SYSTEM)
 PYVER ?= python${pyver}
 
@@ -10,6 +10,7 @@ PYVER ?= python${pyver}
 override_dh_install:
 	dh_install
 	install -d debian/cloud-init/etc/rsyslog.d
+	install -d debian/cloud-init/usr/share/apport/package-hooks
 	cp tools/21-cloudinit.conf debian/cloud-init/etc/rsyslog.d/21-cloudinit.conf
 	install -D ./tools/Z99-cloud-locale-test.sh debian/cloud-init/etc/profile.d/Z99-cloud-locale-test.sh
 	install -D ./tools/Z99-cloudinit-warnings.sh debian/cloud-init/etc/profile.d/Z99-cloudinit-warnings.sh
diff --git a/packages/pkg-deps.json b/packages/pkg-deps.json
index 822d29d..72409dd 100644
--- a/packages/pkg-deps.json
+++ b/packages/pkg-deps.json
@@ -34,9 +34,6 @@
          "jsonschema" : {
             "3" : "python34-jsonschema"
          },
-         "prettytable" : {
-            "3" : "python34-prettytable"
-         },
          "pyflakes" : {
             "2" : "pyflakes",
             "3" : "python34-pyflakes"
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index d995b85..6ab0d20 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -115,12 +115,6 @@ rm -rf $RPM_BUILD_ROOT%{python_sitelib}/tests
 mkdir -p $RPM_BUILD_ROOT/%{_sharedstatedir}/cloud
 mkdir -p $RPM_BUILD_ROOT/%{_libexecdir}/%{name}
 
-# LP: #1691489: Remove systemd-fsck dropin (currently not expected to work)
-%if "%{init_system}" == "systemd"
-rm $RPM_BUILD_ROOT/usr/lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf
-%endif
-
-
 %clean
 rm -rf $RPM_BUILD_ROOT
 
diff --git a/requirements.txt b/requirements.txt
index 61d1e90..dd10d85 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -3,9 +3,6 @@
 # Used for untemplating any files or strings with parameters.
 jinja2
 
-# This is used for any pretty printing of tabular data.
-PrettyTable
-
 # This one is currently only used by the MAAS datasource. If that
 # datasource is removed, this is no longer needed
 oauthlib
diff --git a/setup.py b/setup.py
index 5c65c7f..bf697d7 100755
--- a/setup.py
+++ b/setup.py
@@ -121,11 +121,11 @@ INITSYS_FILES = {
     'sysvinit_freebsd': [f for f in glob('sysvinit/freebsd/*') if is_f(f)],
     'sysvinit_deb': [f for f in glob('sysvinit/debian/*') if is_f(f)],
     'sysvinit_openrc': [f for f in glob('sysvinit/gentoo/*') if is_f(f)],
+    'sysvinit_suse': [f for f in glob('sysvinit/suse/*') if is_f(f)],
     'systemd': [render_tmpl(f)
                 for f in (glob('systemd/*.tmpl') +
                           glob('systemd/*.service') +
                           glob('systemd/*.target')) if is_f(f)],
-    'systemd.fsck-dropin': ['systemd/systemd-fsck@.service.d/cloud-init.conf'],
     'systemd.generators': [f for f in glob('systemd/*-generator') if is_f(f)],
     'upstart': [f for f in glob('upstart/*') if is_f(f)],
 }
@@ -134,10 +134,8 @@ INITSYS_ROOTS = {
     'sysvinit_freebsd': 'usr/local/etc/rc.d',
     'sysvinit_deb': 'etc/init.d',
     'sysvinit_openrc': 'etc/init.d',
+    'sysvinit_suse': 'etc/init.d',
     'systemd': pkg_config_read('systemd', 'systemdsystemunitdir'),
-    'systemd.fsck-dropin': (
-        os.path.sep.join([pkg_config_read('systemd', 'systemdsystemunitdir'),
-                          'systemd-fsck@.service.d'])),
     'systemd.generators': pkg_config_read('systemd',
                                           'systemdsystemgeneratordir'),
     'upstart': 'etc/init/',
@@ -191,6 +189,8 @@ class InitsysInstallData(install):
             datakeys = [k for k in INITSYS_ROOTS
                         if k.partition(".")[0] == system]
             for k in datakeys:
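+                # Skip init-system variants whose file glob matched nothing.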
+                if not INITSYS_FILES[k]:
+                    continue
                 self.distribution.data_files.append(
                     (INITSYS_ROOTS[k], INITSYS_FILES[k]))
         # Force that command to reinitalize (with new file list)
diff --git a/systemd/cloud-final.service.tmpl b/systemd/cloud-final.service.tmpl
index fc01b89..8207b18 100644
--- a/systemd/cloud-final.service.tmpl
+++ b/systemd/cloud-final.service.tmpl
@@ -4,9 +4,10 @@ Description=Execute cloud user/final scripts
 After=network-online.target cloud-config.service rc-local.service
 {% if variant in ["ubuntu", "unknown", "debian"] %}
 After=multi-user.target
+Before=apt-daily.service
 {% endif %}
 Wants=network-online.target cloud-config.service
-Before=apt-daily.service
+
 
 [Service]
 Type=oneshot
@@ -14,6 +15,7 @@ ExecStart=/usr/bin/cloud-init modules --mode=final
 RemainAfterExit=yes
 TimeoutSec=0
 KillMode=process
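+# Allow user/final scripts to spawn an unbounded number of tasks.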
+TasksMax=infinity
 
 # Output needs to appear in instance console output
 StandardOutput=journal+console
diff --git a/systemd/cloud-init-local.service.tmpl b/systemd/cloud-init-local.service.tmpl
index ff9c644..bf6b296 100644
--- a/systemd/cloud-init-local.service.tmpl
+++ b/systemd/cloud-init-local.service.tmpl
@@ -13,6 +13,12 @@ Before=shutdown.target
 Before=sysinit.target
 Conflicts=shutdown.target
 {% endif %}
+{% if variant in ["suse"] %}
+# Other distros use Before=sysinit.target. There is not a clearly identified
+# reason for usage of basic.target instead.
+Before=basic.target
+Conflicts=shutdown.target
+{% endif %}
 RequiresMountsFor=/var/lib/cloud
 
 [Service]
diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl
index 2c71889..b92e8ab 100644
--- a/systemd/cloud-init.service.tmpl
+++ b/systemd/cloud-init.service.tmpl
@@ -13,6 +13,13 @@ After=networking.service
 {% if variant in ["centos", "fedora", "redhat"] %}
 After=network.service
 {% endif %}
+{% if variant in ["suse"] %}
+Requires=wicked.service
+After=wicked.service
+# setting hostname via hostnamectl depends on dbus, which otherwise
+# would not be guaranteed at this point.
+After=dbus.service
+{% endif %}
 Before=network-online.target
 Before=sshd-keygen.service
 Before=sshd.service
@@ -20,6 +27,9 @@ Before=sshd.service
 Before=sysinit.target
 Conflicts=shutdown.target
 {% endif %}
+{% if variant in ["suse"] %}
+Conflicts=shutdown.target
+{% endif %}
 Before=systemd-user-sessions.service
 
 [Service]
diff --git a/systemd/systemd-fsck@.service.d/cloud-init.conf b/systemd/systemd-fsck@.service.d/cloud-init.conf
deleted file mode 100644
index 0bfa465..0000000
--- a/systemd/systemd-fsck@.service.d/cloud-init.conf
+++ /dev/null
@@ -1,2 +0,0 @@
-[Unit]
-After=cloud-init.service
diff --git a/sysvinit/suse/cloud-config b/sysvinit/suse/cloud-config
new file mode 100644
index 0000000..75b8151
--- /dev/null
+++ b/sysvinit/suse/cloud-config
@@ -0,0 +1,113 @@
+#!/bin/sh
+# Copyright (C) 2012 Yahoo! Inc.
+#
+# Author: Joshua Harlow <harlowja@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+# See: http://wiki.debian.org/LSBInitScripts
+# See: http://tiny.cc/czvbgw
+# See: http://www.novell.com/coolsolutions/feature/15380.html
+# Also based on dhcpd in RHEL (for comparison)
+
+### BEGIN INIT INFO
+# Provides:          cloud-config
+# Required-Start:    cloud-init cloud-init-local
+# Should-Start:      $time
+# Required-Stop:     $null
+# Should-Stop:       $null
+# Default-Start:     2 3 5
+# Default-Stop:      0 1 6
+# Short-Description: The config cloud-init job
+# Description:       Starts cloud-init and runs the config phase
+#	and any associated config modules as desired.
+### END INIT INFO
+
+# Return values acc. to LSB for all commands but status:
+# 0	  - success
+# 1       - generic or unspecified error
+# 2       - invalid or excess argument(s)
+# 3       - unimplemented feature (e.g. "reload")
+# 4       - user had insufficient privileges
+# 5       - program is not installed
+# 6       - program is not configured
+# 7       - program is not running
+# 8--199  - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
+# 
+# Note that starting an already running service, stopping
+# or restarting a not-running service as well as the restart
+# with force-reload (in case signaling is not supported) are
+# considered a success.
+
+RETVAL=0
+
+prog="cloud-init"
+cloud_init="/usr/bin/cloud-init"
+conf="/etc/cloud/cloud.cfg"
+
+# If sysconfig/default variable override files exist, use them...
+[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
+[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
+
+. /etc/rc.status
+rc_reset
+
+start() {
+    [ -x $cloud_init ] || return 5
+    [ -f $conf ] || return 6
+
+    echo -n $"Starting $prog: "
+    $cloud_init $CLOUDINITARGS modules --mode config
+    RETVAL=$?
+    return $RETVAL
+}
+
+stop() {
+    echo -n $"Shutting down $prog: "
+    # No-op
+    RETVAL=7
+    return $RETVAL
+}
+
+case "$1" in
+    start)
+        start
+        RETVAL=$?
+	;;
+    stop)
+        stop
+        RETVAL=$?
+	;;
+    restart|try-restart|condrestart)
+        ## Stop the service and regardless of whether it was
+        ## running or not, start it again.
+        # 
+        ## Note: try-restart is now part of LSB (as of 1.9).
+        ## RH has a similar command named condrestart.
+        start
+        RETVAL=$?
+	;;
+    reload|force-reload)
+        # It does not support reload
+        RETVAL=3
+	;;
+    status)
+        echo -n $"Checking for service $prog:"
+        # Return value is slightly different for the status command:
+        # 0 - service up and running
+        # 1 - service dead, but /var/run/  pid  file exists
+        # 2 - service dead, but /var/lock/ lock file exists
+        # 3 - service not running (unused)
+        # 4 - service status unknown :-(
+        # 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
+        RETVAL=3
+	;;
+    *)
+        echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
+        RETVAL=3
+	;;
+esac
+
+_rc_status=$RETVAL
+rc_status -v
+rc_exit
diff --git a/sysvinit/suse/cloud-final b/sysvinit/suse/cloud-final
new file mode 100644
index 0000000..25586e1
--- /dev/null
+++ b/sysvinit/suse/cloud-final
@@ -0,0 +1,113 @@
+#!/bin/sh
+# Copyright (C) 2012 Yahoo! Inc.
+#
+# Author: Joshua Harlow <harlowja@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+# See: http://wiki.debian.org/LSBInitScripts
+# See: http://tiny.cc/czvbgw
+# See: http://www.novell.com/coolsolutions/feature/15380.html
+# Also based on dhcpd in RHEL (for comparison)
+
+### BEGIN INIT INFO
+# Provides:          cloud-final
+# Required-Start:    cloud-config
+# Should-Start:      $time
+# Required-Stop:     $null
+# Should-Stop:       $null
+# Default-Start:     2 3 5
+# Default-Stop:      0 1 6
+# Short-Description: The final cloud-init job
+# Description:       Starts cloud-init and runs the final phase
+#	and any associated final modules as desired.
+### END INIT INFO
+
+# Return values acc. to LSB for all commands but status:
+# 0	  - success
+# 1       - generic or unspecified error
+# 2       - invalid or excess argument(s)
+# 3       - unimplemented feature (e.g. "reload")
+# 4       - user had insufficient privileges
+# 5       - program is not installed
+# 6       - program is not configured
+# 7       - program is not running
+# 8--199  - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
+# 
+# Note that starting an already running service, stopping
+# or restarting a not-running service as well as the restart
+# with force-reload (in case signaling is not supported) are
+# considered a success.
+
+RETVAL=0
+
+prog="cloud-init"
+cloud_init="/usr/bin/cloud-init"
+conf="/etc/cloud/cloud.cfg"
+
+# If sysconfig/default variable override files exist, use them...
+[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
+[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
+
+. /etc/rc.status
+rc_reset
+
+start() {
+    [ -x $cloud_init ] || return 5
+    [ -f $conf ] || return 6
+
+    echo -n $"Starting $prog: "
+    $cloud_init $CLOUDINITARGS modules --mode final
+    RETVAL=$?
+    return $RETVAL
+}
+
+stop() {
+    echo -n $"Shutting down $prog: "
+    # No-op
+    RETVAL=7
+    return $RETVAL
+}
+
+case "$1" in
+    start)
+        start
+        RETVAL=$?
+	;;
+    stop)
+        stop
+        RETVAL=$?
+	;;
+    restart|try-restart|condrestart)
+        ## Stop the service and regardless of whether it was
+        ## running or not, start it again.
+        # 
+        ## Note: try-restart is now part of LSB (as of 1.9).
+        ## RH has a similar command named condrestart.
+        start
+        RETVAL=$?
+	;;
+    reload|force-reload)
+        # It does not support reload
+        RETVAL=3
+	;;
+    status)
+        echo -n $"Checking for service $prog:"
+        # Return value is slightly different for the status command:
+        # 0 - service up and running
+        # 1 - service dead, but /var/run/  pid  file exists
+        # 2 - service dead, but /var/lock/ lock file exists
+        # 3 - service not running (unused)
+        # 4 - service status unknown :-(
+        # 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
+        RETVAL=3
+	;;
+    *)
+        echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
+        RETVAL=3
+	;;
+esac
+
+_rc_status=$RETVAL
+rc_status -v
+rc_exit
diff --git a/sysvinit/suse/cloud-init b/sysvinit/suse/cloud-init
new file mode 100644
index 0000000..67e8e6a
--- /dev/null
+++ b/sysvinit/suse/cloud-init
@@ -0,0 +1,114 @@
+#!/bin/sh
+# Copyright (C) 2012 Yahoo! Inc.
+#
+# Author: Joshua Harlow <harlowja@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+# See: http://wiki.debian.org/LSBInitScripts
+# See: http://tiny.cc/czvbgw
+# See: http://www.novell.com/coolsolutions/feature/15380.html
+# Also based on dhcpd in RHEL (for comparison)
+
+### BEGIN INIT INFO
+# Provides:          cloud-init
+# Required-Start:    $local_fs $network $named $remote_fs cloud-init-local
+# Should-Start:      $time
+# Required-Stop:     $null
+# Should-Stop:       $null
+# Default-Start:     2 3 5
+# Default-Stop:      0 1 6
+# Short-Description: The initial cloud-init job (net and fs contingent)
+# Description:       Starts cloud-init and runs the initialization phase
+#	and any associated initial modules as desired.
+### END INIT INFO
+
+# Return values acc. to LSB for all commands but status:
+# 0	  - success
+# 1       - generic or unspecified error
+# 2       - invalid or excess argument(s)
+# 3       - unimplemented feature (e.g. "reload")
+# 4       - user had insufficient privileges
+# 5       - program is not installed
+# 6       - program is not configured
+# 7       - program is not running
+# 8--199  - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
+# 
+# Note that starting an already running service, stopping
+# or restarting a not-running service as well as the restart
+# with force-reload (in case signaling is not supported) are
+# considered a success.
+
+RETVAL=0
+
+prog="cloud-init"
+cloud_init="/usr/bin/cloud-init"
+conf="/etc/cloud/cloud.cfg"
+
+# If sysconfig/default variable override files exist, use them...
+[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
+[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
+
+. /etc/rc.status
+rc_reset
+
+start() {
+    [ -x $cloud_init ] || return 5
+    [ -f $conf ] || return 6
+
+    echo -n $"Starting $prog: "
+    $cloud_init $CLOUDINITARGS init
+    RETVAL=$?
+    return $RETVAL
+}
+
+stop() {
+    echo -n $"Shutting down $prog: "
+    # No-op
+    RETVAL=7
+    return $RETVAL
+}
+
+case "$1" in
+    start)
+        start
+        RETVAL=$?
+	;;
+    stop)
+        stop
+        RETVAL=$?
+	;;
+    restart|try-restart|condrestart)
+        ## Stop the service and regardless of whether it was
+        ## running or not, start it again.
+        # 
+        ## Note: try-restart is now part of LSB (as of 1.9).
+        ## RH has a similar command named condrestart.
+        start
+        RETVAL=$?
+	;;
+    reload|force-reload)
+        # It does not support reload
+        RETVAL=3
+	;;
+    status)
+        echo -n $"Checking for service $prog:"
+        RETVAL=3
+        [ -e /root/.ssh/authorized_keys ] && RETVAL=0
+        # Return value is slightly different for the status command:
+        # 0 - service up and running
+        # 1 - service dead, but /var/run/  pid  file exists
+        # 2 - service dead, but /var/lock/ lock file exists
+        # 3 - service not running (unused)
+        # 4 - service status unknown :-(
+        # 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
+	;;
+    *)
+        echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
+        RETVAL=3
+	;;
+esac
+
+_rc_status=$RETVAL
+rc_status -v
+rc_exit
diff --git a/sysvinit/suse/cloud-init-local b/sysvinit/suse/cloud-init-local
new file mode 100644
index 0000000..1370d98
--- /dev/null
+++ b/sysvinit/suse/cloud-init-local
@@ -0,0 +1,113 @@
+#!/bin/sh
+# Copyright (C) 2012 Yahoo! Inc.
+#
+# Author: Joshua Harlow <harlowja@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+# See: http://wiki.debian.org/LSBInitScripts
+# See: http://tiny.cc/czvbgw
+# See: http://www.novell.com/coolsolutions/feature/15380.html
+# Also based on dhcpd in RHEL (for comparison)
+
+### BEGIN INIT INFO
+# Provides:          cloud-init-local
+# Required-Start:    $local_fs $remote_fs
+# Should-Start:      $time
+# Required-Stop:     $null
+# Should-Stop:       $null
+# Default-Start:     2 3 5
+# Default-Stop:      0 1 6
+# Short-Description: The initial cloud-init job (local fs contingent)
+# Description:       Starts cloud-init and runs the initialization phases
+#	and any associated initial modules as desired.
+### END INIT INFO
+
+# Return values acc. to LSB for all commands but status:
+# 0	  - success
+# 1       - generic or unspecified error
+# 2       - invalid or excess argument(s)
+# 3       - unimplemented feature (e.g. "reload")
+# 4       - user had insufficient privileges
+# 5       - program is not installed
+# 6       - program is not configured
+# 7       - program is not running
+# 8--199  - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
+# 
+# Note that starting an already running service, stopping
+# or restarting a not-running service as well as the restart
+# with force-reload (in case signaling is not supported) are
+# considered a success.
+
+RETVAL=0
+
+prog="cloud-init"
+cloud_init="/usr/bin/cloud-init"
+conf="/etc/cloud/cloud.cfg"
+
+# If sysconfig/default variable override files exist, use them...
+[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
+[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
+
+. /etc/rc.status
+rc_reset
+
+start() {
+    [ -x $cloud_init ] || return 5
+    [ -f $conf ] || return 6
+
+    echo -n $"Starting $prog: "
+    $cloud_init $CLOUDINITARGS init --local
+    RETVAL=$?
+    return $RETVAL
+}
+
+stop() {
+    echo -n $"Shutting down $prog: "
+    # No-op
+    RETVAL=7
+    return $RETVAL
+}
+
+case "$1" in
+    start)
+        start
+        RETVAL=$?
+	;;
+    stop)
+        stop
+        RETVAL=$?
+	;;
+    restart|try-restart|condrestart)
+        ## Stop the service and regardless of whether it was
+        ## running or not, start it again.
+        # 
+        ## Note: try-restart is now part of LSB (as of 1.9).
+        ## RH has a similar command named condrestart.
+        start
+        RETVAL=$?
+	;;
+    reload|force-reload)
+        # It does not support reload
+        RETVAL=3
+	;;
+    status)
+        echo -n $"Checking for service $prog:"
+        # Return value is slightly different for the status command:
+        # 0 - service up and running
+        # 1 - service dead, but /var/run/  pid  file exists
+        # 2 - service dead, but /var/lock/ lock file exists
+        # 3 - service not running (unused)
+        # 4 - service status unknown :-(
+        # 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
+        RETVAL=3
+	;;
+    *)
+        echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
+        RETVAL=3
+	;;
+esac
+
+_rc_status=$RETVAL
+rc_status -v
+rc_exit
diff --git a/templates/hosts.opensuse.tmpl b/templates/hosts.opensuse.tmpl
new file mode 100644
index 0000000..655da3f
--- /dev/null
+++ b/templates/hosts.opensuse.tmpl
@@ -0,0 +1,26 @@
+#*
+    This file /etc/cloud/templates/hosts.opensuse.tmpl is only utilized
+    if enabled in cloud-config.  Specifically, in order to enable it
+    you need to add the following to config:
+      manage_etc_hosts: True
+*#
+# Your system has configured 'manage_etc_hosts' as True.
+# As a result, if you wish for changes to this file to persist
+# then you will need to either
+# a.) make changes to the master file in
+#     /etc/cloud/templates/hosts.opensuse.tmpl
+# b.) change or remove the value of 'manage_etc_hosts' in
+#     /etc/cloud/cloud.cfg or cloud-config from user-data
+#
+# The following lines are desirable for IPv4 capable hosts
+127.0.0.1 localhost
+
+# The following lines are desirable for IPv6 capable hosts
+::1 localhost ipv6-localhost ipv6-loopback
+fe00::0 ipv6-localnet
+
+ff00::0 ipv6-mcastprefix
+ff02::1 ipv6-allnodes
+ff02::2 ipv6-allrouters
+ff02::3 ipv6-allhosts
+
diff --git a/templates/hosts.suse.tmpl b/templates/hosts.suse.tmpl
index 399ec9b..b608269 100644
--- a/templates/hosts.suse.tmpl
+++ b/templates/hosts.suse.tmpl
@@ -14,12 +14,9 @@ you need to add the following to config:
 #
 # The following lines are desirable for IPv4 capable hosts
 127.0.0.1 localhost
-127.0.0.1 {{fqdn}} {{hostname}}
-
 
 # The following lines are desirable for IPv6 capable hosts
 ::1 localhost ipv6-localhost ipv6-loopback
-::1 {{fqdn}} {{hostname}}
 fe00::0 ipv6-localnet
 
 ff00::0 ipv6-mcastprefix
diff --git a/templates/sources.list.debian.tmpl b/templates/sources.list.debian.tmpl
index d64ace4..e7ef9ed 100644
--- a/templates/sources.list.debian.tmpl
+++ b/templates/sources.list.debian.tmpl
@@ -10,15 +10,15 @@
 
 # See http://www.debian.org/releases/stable/i386/release-notes/ch-upgrading.html
 # for how to upgrade to newer versions of the distribution.
-deb {{mirror}} {{codename}} main contrib non-free
-deb-src {{mirror}} {{codename}} main contrib non-free
+deb {{mirror}} {{codename}} main
+deb-src {{mirror}} {{codename}} main
 
 ## Major bug fix updates produced after the final release of the
 ## distribution.
-deb {{security}} {{codename}}/updates main contrib non-free
-deb-src {{security}} {{codename}}/updates main contrib non-free
-deb {{mirror}} {{codename}}-updates main contrib non-free
-deb-src {{mirror}} {{codename}}-updates main contrib non-free
+deb {{security}} {{codename}}/updates main
+deb-src {{security}} {{codename}}/updates main
+deb {{mirror}} {{codename}}-updates main
+deb-src {{mirror}} {{codename}}-updates main
 
 ## Uncomment the following two lines to add software from the 'backports'
 ## repository.
@@ -26,5 +26,5 @@ deb-src {{mirror}} {{codename}}-updates main contrib non-free
 ## N.B. software from this repository may not have been tested as
 ## extensively as that contained in the main release, although it includes
 ## newer versions of some applications which may provide useful features.
-deb {{mirror}} {{codename}}-backports main contrib non-free
-deb-src {{mirror}} {{codename}}-backports main contrib non-free
+deb {{mirror}} {{codename}}-backports main
+deb-src {{mirror}} {{codename}}-backports main
diff --git a/templates/timesyncd.conf.tmpl b/templates/timesyncd.conf.tmpl
new file mode 100644
index 0000000..6b98301
--- /dev/null
+++ b/templates/timesyncd.conf.tmpl
@@ -0,0 +1,8 @@
+## template:jinja
+# cloud-init generated file
+# See timesyncd.conf(5) for details.
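+#
+# Example: with servers=['ntp.example.com'] and pools=['0.pool.ntp.org'],
+# the [Time] section below renders NTP=ntp.example.com 0.pool.ntp.org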
+
+[Time]
+{% if servers or pools -%}
+NTP={% for host in servers|list + pools|list %}{{ host }} {% endfor -%}
+{% endif -%}
diff --git a/tests/cloud_tests/__init__.py b/tests/cloud_tests/__init__.py
index 07148c1..98c1d6c 100644
--- a/tests/cloud_tests/__init__.py
+++ b/tests/cloud_tests/__init__.py
@@ -7,7 +7,7 @@ import os
 
 BASE_DIR = os.path.dirname(os.path.abspath(__file__))
 TESTCASES_DIR = os.path.join(BASE_DIR, 'testcases')
-TEST_CONF_DIR = os.path.join(BASE_DIR, 'configs')
+TEST_CONF_DIR = os.path.join(BASE_DIR, 'testcases')
 TREE_BASE = os.sep.join(BASE_DIR.split(os.sep)[:-2])
 
 
diff --git a/tests/cloud_tests/__main__.py b/tests/cloud_tests/__main__.py
index 260ddb3..7ee29ca 100644
--- a/tests/cloud_tests/__main__.py
+++ b/tests/cloud_tests/__main__.py
@@ -4,6 +4,7 @@
 
 import argparse
 import logging
+import os
 import sys
 
 from tests.cloud_tests import args, bddeb, collect, manage, run_funcs, verify
@@ -50,7 +51,7 @@ def main():
             return -1
 
     # run handler
-    LOG.debug('running with args: %s\n', parsed)
+    LOG.debug('running with args: %s', parsed)
     return {
         'bddeb': bddeb.bddeb,
         'collect': collect.collect,
@@ -63,6 +64,8 @@ def main():
 
 
 if __name__ == "__main__":
+    if os.geteuid() == 0:
+        sys.exit('Do not run as root')
     sys.exit(main())
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/args.py b/tests/cloud_tests/args.py
index 369d60d..c6c1877 100644
--- a/tests/cloud_tests/args.py
+++ b/tests/cloud_tests/args.py
@@ -170,9 +170,9 @@ def normalize_collect_args(args):
     @param args: parsed args
     @return_value: updated args, or None if errors occurred
     """
-    # platform should default to all supported
+    # platform should default to lxd
     if len(args.platform) == 0:
-        args.platform = config.ENABLED_PLATFORMS
+        args.platform = ['lxd']
     args.platform = util.sorted_unique(args.platform)
 
     # os name should default to all enabled
diff --git a/tests/cloud_tests/bddeb.py b/tests/cloud_tests/bddeb.py
index 53dbf74..fba8a0c 100644
--- a/tests/cloud_tests/bddeb.py
+++ b/tests/cloud_tests/bddeb.py
@@ -11,7 +11,7 @@ from tests.cloud_tests import (config, LOG)
 from tests.cloud_tests import (platforms, images, snapshots, instances)
 from tests.cloud_tests.stage import (PlatformComponent, run_stage, run_single)
 
-build_deps = ['devscripts', 'equivs', 'git', 'tar']
+pre_reqs = ['devscripts', 'equivs', 'git', 'tar']
 
 
 def _out(cmd_res):
@@ -26,13 +26,9 @@ def build_deb(args, instance):
     @return_value: tuple of results and fail count
     """
     # update remote system package list and install build deps
-    LOG.debug('installing build deps')
-    pkgs = ' '.join(build_deps)
-    cmd = 'apt-get update && apt-get install --yes {}'.format(pkgs)
-    instance.execute(['/bin/sh', '-c', cmd])
-    # TODO Remove this call once we have a ci-deps Makefile target
-    instance.execute(['mk-build-deps', '--install', '-t',
-                      'apt-get --no-install-recommends --yes', 'cloud-init'])
+    LOG.debug('installing pre-reqs')
+    pkgs = ' '.join(pre_reqs)
+    instance.execute('apt-get update && apt-get install --yes {}'.format(pkgs))
 
     # local tmpfile that must be deleted
     local_tarball = tempfile.NamedTemporaryFile().name
@@ -40,7 +36,7 @@ def build_deb(args, instance):
     # paths to use in remote system
     output_link = '/root/cloud-init_all.deb'
     remote_tarball = _out(instance.execute(['mktemp']))
-    extract_dir = _out(instance.execute(['mktemp', '--directory']))
+    extract_dir = '/root'
     bddeb_path = os.path.join(extract_dir, 'packages', 'bddeb')
     git_env = {'GIT_DIR': os.path.join(extract_dir, '.git'),
                'GIT_WORK_TREE': extract_dir}
@@ -56,6 +52,11 @@ def build_deb(args, instance):
     instance.execute(['git', 'commit', '-a', '-m', 'tmp', '--allow-empty'],
                      env=git_env)
 
+    LOG.debug('installing deps')
+    deps_path = os.path.join(extract_dir, 'tools', 'read-dependencies')
+    instance.execute([deps_path, '--install', '--test-distro',
+                      '--distro', 'ubuntu', '--python-version', '3'])
+
     LOG.debug('building deb in remote system at: %s', output_link)
     bddeb_args = args.bddeb_args.split() if args.bddeb_args else []
     instance.execute([bddeb_path, '-d'] + bddeb_args, env=git_env)
diff --git a/tests/cloud_tests/collect.py b/tests/cloud_tests/collect.py
index b44e8bd..4a2422e 100644
--- a/tests/cloud_tests/collect.py
+++ b/tests/cloud_tests/collect.py
@@ -120,6 +120,7 @@ def collect_image(args, platform, os_name):
     os_config = config.load_os_config(
         platform.platform_name, os_name, require_enabled=True,
         feature_overrides=args.feature_override)
+    LOG.debug('os config: %s', os_config)
     component = PlatformComponent(
         partial(images.get_image, platform, os_config))
 
@@ -144,6 +145,8 @@ def collect_platform(args, platform_name):
 
     platform_config = config.load_platform_config(
         platform_name, require_enabled=True)
+    platform_config['data_dir'] = args.data_dir
+    LOG.debug('platform config: %s', platform_config)
     component = PlatformComponent(
         partial(platforms.get_platform, platform_name, platform_config))
 
diff --git a/tests/cloud_tests/config.py b/tests/cloud_tests/config.py
index 4d5dc80..52fc2bd 100644
--- a/tests/cloud_tests/config.py
+++ b/tests/cloud_tests/config.py
@@ -112,6 +112,7 @@ def load_os_config(platform_name, os_name, require_enabled=False,
     feature_conf = main_conf['features']
     feature_groups = conf.get('feature_groups', [])
     overrides = merge_config(get(conf, 'features'), feature_overrides)
+    conf['arch'] = c_util.get_architecture()
     conf['features'] = merge_feature_groups(
         feature_conf, feature_groups, overrides)
 
diff --git a/tests/cloud_tests/images/nocloudkvm.py b/tests/cloud_tests/images/nocloudkvm.py
new file mode 100644
index 0000000..a7af0e5
--- /dev/null
+++ b/tests/cloud_tests/images/nocloudkvm.py
@@ -0,0 +1,88 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""NoCloud KVM Image Base Class."""
+
+from tests.cloud_tests.images import base
+from tests.cloud_tests.snapshots import nocloudkvm as nocloud_kvm_snapshot
+
+
+class NoCloudKVMImage(base.Image):
+    """NoCloud KVM backed image."""
+
+    platform_name = "nocloud-kvm"
+
+    def __init__(self, platform, config, img_path):
+        """Set up image.
+
+        @param platform: platform object
+        @param config: image configuration
+        @param img_path: path to the image
+        """
+        self.modified = False
+        self._instance = None
+        self._img_path = img_path
+
+        super(NoCloudKVMImage, self).__init__(platform, config)
+
+    @property
+    def instance(self):
+        """Returns an instance of an image."""
+        if not self._instance:
+            if not self._img_path:
+                raise RuntimeError('image path is not set')
+
+            self._instance = self.platform.create_image(
+                self.properties, self.config, self.features, self._img_path,
+                image_desc=str(self), use_desc='image-modification')
+        return self._instance
+
+    @property
+    def properties(self):
+        """Dictionary containing: 'arch', 'os', 'version', 'release'."""
+        return {
+            'arch': self.config['arch'],
+            'os': self.config['family'],
+            'release': self.config['release'],
+            'version': self.config['version'],
+        }
+
+    def execute(self, *args, **kwargs):
+        """Execute command in image, modifying image."""
+        return self.instance.execute(*args, **kwargs)
+
+    def push_file(self, local_path, remote_path):
+        """Copy file at 'local_path' to instance at 'remote_path'."""
+        return self.instance.push_file(local_path, remote_path)
+
+    def run_script(self, *args, **kwargs):
+        """Run script in image, modifying image.
+
+        @return_value: script output
+        """
+        return self.instance.run_script(*args, **kwargs)
+
+    def snapshot(self):
+        """Create snapshot of image, block until done."""
+        if not self._img_path:
+            raise RuntimeError('image path is not set; cannot snapshot')
+
+        instance = self.platform.create_image(
+            self.properties, self.config, self.features,
+            self._img_path, image_desc=str(self), use_desc='snapshot')
+
+        return nocloud_kvm_snapshot.NoCloudKVMSnapshot(
+            self.platform, self.properties, self.config,
+            self.features, instance)
+
+    def destroy(self):
+        """Unset path to signal image is no longer used.
+
+        The removal of the images and all other items is handled by the
+        framework. In some cases we want to keep the images, so let the
+        framework decide whether to keep or destroy everything.
+        """
+        self._img_path = None
+        self._instance.destroy()
+        super(NoCloudKVMImage, self).destroy()
+
+# vi: ts=4 expandtab
diff --git a/tests/cloud_tests/instances/base.py b/tests/cloud_tests/instances/base.py
index 959e9cc..9bdda60 100644
--- a/tests/cloud_tests/instances/base.py
+++ b/tests/cloud_tests/instances/base.py
@@ -23,7 +23,7 @@ class Instance(object):
         self.config = config
         self.features = features
 
-    def execute(self, command, stdout=None, stderr=None, env={},
+    def execute(self, command, stdout=None, stderr=None, env=None,
                 rcs=None, description=None):
         """Execute command in instance, recording output, error and exit code.
 
@@ -31,6 +31,8 @@ class Instance(object):
         target filesystem being available at /.
 
         @param command: the command to execute as root inside the image
+            if command is a string, then it will be executed as:
+            ['sh', '-c', command]
         @param stdout, stderr: file handles to write output and error to
         @param env: environment variables
         @param rcs: allowed return codes from command
@@ -88,7 +90,7 @@ class Instance(object):
             return self.execute(
                 ['/bin/bash', script_path], rcs=rcs, description=description)
         finally:
-            self.execute(['rm', script_path], rcs=rcs)
+            self.execute(['rm', '-f', script_path], rcs=rcs)
 
     def tmpfile(self):
         """Get a tmp file in the target.
@@ -137,9 +139,9 @@ class Instance(object):
             tests.append(self.config['cloud_init_ready_script'])
 
         formatted_tests = ' && '.join(clean_test(t) for t in tests)
-        test_cmd = ('for ((i=0;i<{time};i++)); do {test} && exit 0; sleep 1; '
-                    'done; exit 1;').format(time=time, test=formatted_tests)
-        cmd = ['/bin/bash', '-c', test_cmd]
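+        # POSIX-sh polling loop: retry the tests once per second for up to
+        # 'time' seconds; exit 0 on first success, 1 on timeout.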
+        cmd = ('i=0; while [ $i -lt {time} ] && i=$(($i+1)); do {test} && '
+               'exit 0; sleep 1; done; exit 1').format(time=time,
+                                                       test=formatted_tests)
 
         if self.execute(cmd, rcs=(0, 1))[-1] != 0:
             raise OSError('timeout: after {}s system not started'.format(time))
diff --git a/tests/cloud_tests/instances/lxd.py b/tests/cloud_tests/instances/lxd.py
index b9c2cc6..a43918c 100644
--- a/tests/cloud_tests/instances/lxd.py
+++ b/tests/cloud_tests/instances/lxd.py
@@ -31,7 +31,7 @@ class LXDInstance(base.Instance):
         self._pylxd_container.sync()
         return self._pylxd_container
 
-    def execute(self, command, stdout=None, stderr=None, env={},
+    def execute(self, command, stdout=None, stderr=None, env=None,
                 rcs=None, description=None):
         """Execute command in instance, recording output, error and exit code.
 
@@ -39,6 +39,8 @@ class LXDInstance(base.Instance):
         target filesystem being available at /.
 
         @param command: the command to execute as root inside the image
+            if command is a string, then it will be executed as:
+            ['sh', '-c', command]
         @param stdout: file handler to write output
         @param stderr: file handler to write error
         @param env: environment variables
@@ -46,6 +48,12 @@ class LXDInstance(base.Instance):
         @param description: purpose of command
         @return_value: tuple containing stdout data, stderr data, exit code
         """
+        if env is None:
+            env = {}
+
+        if isinstance(command, str):
+            command = ['sh', '-c', command]
+
         # ensure instance is running and execute the command
         self.start()
         res = self.pylxd_container.execute(command, environment=env)
diff --git a/tests/cloud_tests/instances/nocloudkvm.py b/tests/cloud_tests/instances/nocloudkvm.py
new file mode 100644
index 0000000..8a0e531
--- /dev/null
+++ b/tests/cloud_tests/instances/nocloudkvm.py
@@ -0,0 +1,217 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Base NoCloud KVM instance."""
+
+import os
+import paramiko
+import socket
+import subprocess
+import time
+
+from cloudinit import util as c_util
+from tests.cloud_tests.instances import base
+from tests.cloud_tests import util
+
+
+class NoCloudKVMInstance(base.Instance):
+    """NoCloud KVM backed instance."""
+
+    platform_name = "nocloud-kvm"
+
+    def __init__(self, platform, name, properties, config, features,
+                 user_data, meta_data):
+        """Set up instance.
+
+        @param platform: platform object
+        @param name: image path
+        @param properties: dictionary of properties
+        @param config: dictionary of configuration values
+        @param features: dictionary of supported feature flags
+        """
+        self.user_data = user_data
+        self.meta_data = meta_data
+        self.ssh_key_file = os.path.join(platform.config['data_dir'],
+                                         platform.config['private_key'])
+        self.ssh_port = None
+        self.pid = None
+        self.pid_file = None
+
+        super(NoCloudKVMInstance, self).__init__(
+            platform, name, properties, config, features)
+
+    def destroy(self):
+        """Clean up instance."""
+        if self.pid:
+            try:
+                c_util.subp(['kill', '-9', self.pid])
+            except c_util.ProcessExecutionError:
+                pass
+
+        if self.pid_file:
+            os.remove(self.pid_file)
+
+        self.pid = None
+        super(NoCloudKVMInstance, self).destroy()
+
+    def execute(self, command, stdout=None, stderr=None, env=None,
+                rcs=None, description=None):
+        """Execute command in instance.
+
+        Assumes functional networking and execution as root with the
+        target filesystem being available at /.
+
+        @param command: the command to execute as root inside the image
+            if command is a string, then it will be executed as:
+            ['sh', '-c', command]
+        @param stdout, stderr: file handles to write output and error to
+        @param env: environment variables
+        @param rcs: allowed return codes from command
+        @param description: purpose of command
+        @return_value: tuple containing stdout data, stderr data, exit code
+        """
+        if env is None:
+            env = {}
+
+        if isinstance(command, str):
+            command = ['sh', '-c', command]
+
+        if self.pid:
+            return self.ssh(command)
+        else:
+            return self.mount_image_callback(command) + (0,)
+
+    def mount_image_callback(self, cmd):
+        """Run mount-image-callback."""
+        out, err = c_util.subp(['sudo', 'mount-image-callback',
+                                '--system-mounts', '--system-resolvconf',
+                                self.name, '--', 'chroot',
+                                '_MOUNTPOINT_'] + cmd)
+
+        return out, err
+
+    def generate_seed(self, tmpdir):
+        """Generate nocloud seed from user-data"""
+        seed_file = os.path.join(tmpdir, '%s_seed.img' % self.name)
+        user_data_file = os.path.join(tmpdir, '%s_user_data' % self.name)
+
+        with open(user_data_file, "w") as ud_file:
+            ud_file.write(self.user_data)
+
+        c_util.subp(['cloud-localds', seed_file, user_data_file])
+
+        return seed_file
+
+    def get_free_port(self):
+        """Get a free port assigned by the kernel."""
+        s = socket.socket()
+        s.bind(('', 0))
+        num = s.getsockname()[1]
+        s.close()
+        return num
+
+    def push_file(self, local_path, remote_path):
+        """Copy file at 'local_path' to instance at 'remote_path'.
+
+        If we have a pid then SSH is up, otherwise, use
+        mount-image-callback.
+
+        @param local_path: path on local instance
+        @param remote_path: path on remote instance
+        """
+        if self.pid:
+            super(NoCloudKVMInstance, self).push_file(local_path, remote_path)
+        else:
+            with open(local_path) as local_file:
+                p = subprocess.Popen(['sudo', 'mount-image-callback',
+                                      '--system-mounts', '--system-resolvconf',
+                                      self.name, '--', 'chroot', '_MOUNTPOINT_',
+                                      '/bin/sh', '-c',
+                                      'cat - > %s' % remote_path],
+                                     stdin=local_file,
+                                     stdout=subprocess.PIPE,
+                                     stderr=subprocess.PIPE)
+                p.wait()
+
+    def sftp_put(self, path, data):
+        """SFTP put a file."""
+        client = self._ssh_connect()
+        sftp = client.open_sftp()
+
+        with sftp.open(path, 'w') as f:
+            f.write(data)
+
+        client.close()
+
+    def ssh(self, command):
+        """Run a command via SSH."""
+        client = self._ssh_connect()
+
+        try:
+            _, out, err = client.exec_command(util.shell_pack(command))
+        except paramiko.SSHException:
+            raise util.InTargetExecuteError('', '', -1, command, self.name)
+
+        exit = out.channel.recv_exit_status()
+        out = ''.join(out.readlines())
+        err = ''.join(err.readlines())
+        client.close()
+
+        return out, err, exit
+
+    def _ssh_connect(self, hostname='localhost', username='ubuntu',
+                     banner_timeout=120, retry_attempts=30):
+        """Connect via SSH."""
+        private_key = paramiko.RSAKey.from_private_key_file(self.ssh_key_file)
+        client = paramiko.SSHClient()
+        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+        while retry_attempts:
+            try:
+                client.connect(hostname=hostname, username=username,
+                               port=self.ssh_port, pkey=private_key,
+                               banner_timeout=banner_timeout)
+                return client
+            except (paramiko.SSHException, TypeError):
+                time.sleep(1)
+                retry_attempts = retry_attempts - 1
+
+        error_desc = 'Failed to connect to %s@%s:%s' % (username, hostname,
+                                                        self.ssh_port)
+        raise util.InTargetExecuteError('', '', -1, 'ssh connect',
+                                        self.name, error_desc)
+
+    def start(self, wait=True, wait_for_cloud_init=False):
+        """Start instance."""
+        tmpdir = self.platform.config['data_dir']
+        seed = self.generate_seed(tmpdir)
+        self.pid_file = os.path.join(tmpdir, '%s.pid' % self.name)
+        self.ssh_port = self.get_free_port()
+
+        subprocess.Popen(['./tools/xkvm',
+                          '--disk', '%s,cache=unsafe' % self.name,
+                          '--disk', '%s,cache=unsafe' % seed,
+                          '--netdev',
+                          'user,hostfwd=tcp::%s-:22' % self.ssh_port,
+                          '--', '-pidfile', self.pid_file, '-vnc', 'none',
+                          '-m', '2G', '-smp', '2'],
+                         close_fds=True,
+                         stdin=subprocess.PIPE,
+                         stdout=subprocess.PIPE,
+                         stderr=subprocess.PIPE)
+
+        while not os.path.exists(self.pid_file):
+            time.sleep(1)
+
+        with open(self.pid_file, 'r') as pid_f:
+            self.pid = pid_f.readlines()[0].strip()
+
+        if wait:
+            self._wait_for_system(wait_for_cloud_init)
+
+    def write_data(self, remote_path, data):
+        """Write data to instance filesystem.
+
+        @param remote_path: path in instance
+        @param data: data to write, either str or bytes
+        """
+        self.sftp_put(remote_path, data)
+
+# vi: ts=4 expandtab
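A compressed, self-contained sketch of the execute() dispatch at the top of
this file (names hypothetical, transports stubbed out):

    def run_in_target(command, booted):
        # Strings are wrapped for a shell, exactly as execute() does.
        if isinstance(command, str):
            command = ['sh', '-c', command]
        # Once a pid is recorded the VM is up, so commands travel over
        # SSH; before boot they run via mount-image-callback instead.
        transport = 'ssh' if booted else 'mount-image-callback'
        return transport, command

    assert run_in_target('true', booted=False)[0] == 'mount-image-callback'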
diff --git a/tests/cloud_tests/platforms.yaml b/tests/cloud_tests/platforms.yaml
index b91834a..fa4f845 100644
--- a/tests/cloud_tests/platforms.yaml
+++ b/tests/cloud_tests/platforms.yaml
@@ -59,6 +59,10 @@ platforms:
                 {{ config_get("user.user-data", properties.default) }}
             cloud-init-vendor.tpl: |
                 {{ config_get("user.vendor-data", properties.default) }}
+    nocloud-kvm:
+        enabled: true
+        private_key: id_rsa
+        public_key: id_rsa.pub
     ec2: {}
     azure: {}
 
diff --git a/tests/cloud_tests/platforms/__init__.py b/tests/cloud_tests/platforms/__init__.py
index 443f6d4..3490fe8 100644
--- a/tests/cloud_tests/platforms/__init__.py
+++ b/tests/cloud_tests/platforms/__init__.py
@@ -3,8 +3,10 @@
 """Main init."""
 
 from tests.cloud_tests.platforms import lxd
+from tests.cloud_tests.platforms import nocloudkvm
 
 PLATFORMS = {
+    'nocloud-kvm': nocloudkvm.NoCloudKVMPlatform,
     'lxd': lxd.LXDPlatform,
 }
 
diff --git a/tests/cloud_tests/platforms/nocloudkvm.py b/tests/cloud_tests/platforms/nocloudkvm.py
new file mode 100644
index 0000000..f1f8187
--- /dev/null
+++ b/tests/cloud_tests/platforms/nocloudkvm.py
@@ -0,0 +1,90 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Base NoCloud KVM platform."""
+import glob
+import os
+
+from simplestreams import filters
+from simplestreams import mirrors
+from simplestreams import objectstores
+from simplestreams import util as s_util
+
+from cloudinit import util as c_util
+from tests.cloud_tests.images import nocloudkvm as nocloud_kvm_image
+from tests.cloud_tests.instances import nocloudkvm as nocloud_kvm_instance
+from tests.cloud_tests.platforms import base
+from tests.cloud_tests import util
+
+
+class NoCloudKVMPlatform(base.Platform):
+    """NoCloud KVM test platform."""
+
+    platform_name = 'nocloud-kvm'
+
+    def get_image(self, img_conf):
+        """Get image using specified image configuration.
+
+        @param img_conf: configuration for image
+        @return_value: cloud_tests.images instance
+        """
+        (url, path) = s_util.path_from_mirror_url(img_conf['mirror_url'], None)
+
+        img_filter = filters.get_filters(
+            ['arch=%s' % c_util.get_architecture(),
+             'release=%s' % img_conf['release'],
+             'ftype=disk1.img'])
+        mirror_config = {'filters': img_filter,
+                         'keep_items': False,
+                         'max_items': 1,
+                         'checksumming_reader': True,
+                         'item_download': True
+                         }
+
+        def policy(content, path):
+            return s_util.read_signed(content, keyring=img_conf['keyring'])
+
+        smirror = mirrors.UrlMirrorReader(url, policy=policy)
+        tstore = objectstores.FileStore(img_conf['mirror_dir'])
+        tmirror = mirrors.ObjectFilterMirror(config=mirror_config,
+                                             objectstore=tstore)
+        tmirror.sync(smirror, path)
+
+        search_d = os.path.join(img_conf['mirror_dir'], '**',
+                                img_conf['release'], '**', '*.img')
+
+        images = list(glob.iglob(search_d, recursive=True))
+
+        if len(images) != 1:
+            raise Exception('Expected exactly one image, found %d'
+                            % len(images))
+
+        image = nocloud_kvm_image.NoCloudKVMImage(self, img_conf, images[0])
+        if img_conf.get('override_templates', False):
+            image.update_templates(self.config.get('template_overrides', {}),
+                                   self.config.get('template_files', {}))
+        return image
+
+    def create_image(self, properties, config, features,
+                     src_img_path, image_desc=None, use_desc=None,
+                     user_data=None, meta_data=None):
+        """Create an image
+
+        @param src_img_path: image path to launch from
+        @param properties: image properties
+        @param config: image configuration
+        @param features: image features
+        @param image_desc: description of image being launched
+        @param use_desc: description of container's use
+        @return_value: cloud_tests.instances instance
+        """
+        name = util.gen_instance_name(image_desc=image_desc, use_desc=use_desc)
+        img_path = os.path.join(self.config['data_dir'], name + '.qcow2')
+        c_util.subp(['qemu-img', 'create', '-f', 'qcow2',
+                    '-b', src_img_path, img_path])
+
+        return nocloud_kvm_instance.NoCloudKVMInstance(self, img_path,
+                                                       properties, config,
+                                                       features, user_data,
+                                                       meta_data)
+
+# vi: ts=4 expandtab
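create_image() above leans on qcow2 backing files: every test instance gets
a cheap copy-on-write disk while the downloaded cloud image stays pristine.
A minimal sketch with hypothetical paths (the qemu-img of this era infers
the backing format from the file):

    import subprocess

    def make_overlay(src_img_path, img_path):
        # Only blocks the instance writes land in img_path; src_img_path
        # is never modified, so one image can back many instances.
        subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                               '-b', src_img_path, img_path])

    # make_overlay('/srv/citest/xenial.img', '/srv/citest/t1.qcow2')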
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index c8dd142..ec7e2d5 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -27,7 +27,12 @@ default_release_config:
         # features groups and additional feature settings
         feature_groups: []
         features: {}
-
+    nocloud-kvm:
+        mirror_url: https://cloud-images.ubuntu.com/daily
+        mirror_dir: '/srv/citest/nocloud-kvm'
+        keyring: /usr/share/keyrings/ubuntu-cloudimage-keyring.gpg
+        setup_overrides: null
+        override_templates: false
     # lxd specific default configuration options
     lxd:
         # default sstreams server to use for lxd image retrieval
@@ -121,6 +126,9 @@ releases:
         # EOL: Jul 2018
         default:
             enabled: true
+            release: artful
+            version: 17.10
+            family: ubuntu
             feature_groups:
                 - base
                 - debian_base
@@ -134,6 +142,9 @@ releases:
         # EOL: Jan 2018
         default:
             enabled: true
+            release: zesty
+            version: 17.04
+            family: ubuntu
             feature_groups:
                 - base
                 - debian_base
@@ -147,6 +158,9 @@ releases:
         # EOL: Apr 2021
         default:
             enabled: true
+            release: xenial
+            version: 16.04
+            family: ubuntu
             feature_groups:
                 - base
                 - debian_base
@@ -160,6 +174,9 @@ releases:
         # EOL: Apr 2019
         default:
             enabled: true
+            release: trusty
+            version: 14.04
+            family: ubuntu
             feature_groups:
                 - base
                 - debian_base
diff --git a/tests/cloud_tests/setup_image.py b/tests/cloud_tests/setup_image.py
index 8053a09..6672ffb 100644
--- a/tests/cloud_tests/setup_image.py
+++ b/tests/cloud_tests/setup_image.py
@@ -5,6 +5,7 @@
 from functools import partial
 import os
 
+from cloudinit import util as c_util
 from tests.cloud_tests import LOG
 from tests.cloud_tests import stage, util
 
@@ -19,7 +20,7 @@ def installed_package_version(image, package, ensure_installed=True):
     """
     os_family = util.get_os_family(image.properties['os'])
     if os_family == 'debian':
-        cmd = ['dpkg-query', '-W', "--showformat='${Version}'", package]
+        cmd = ['dpkg-query', '-W', "--showformat=${Version}", package]
     elif os_family == 'redhat':
         cmd = ['rpm', '-q', '--queryformat', "'%{VERSION}'", package]
     else:
@@ -49,11 +50,11 @@ def install_deb(args, image):
     LOG.debug(msg)
     remote_path = os.path.join('/tmp', os.path.basename(args.deb))
     image.push_file(args.deb, remote_path)
-    cmd = 'dpkg -i {} || apt-get install --yes -f'.format(remote_path)
-    image.execute(['/bin/sh', '-c', cmd], description=msg)
+    cmd = 'dpkg -i {}; apt-get install --yes -f'.format(remote_path)
+    image.execute(cmd, description=msg)
 
     # check installed deb version matches package
-    fmt = ['-W', "--showformat='${Version}'"]
+    fmt = ['-W', "--showformat=${Version}"]
     (out, err, exit) = image.execute(['dpkg-deb'] + fmt + [remote_path])
     expected_version = out.strip()
     found_version = installed_package_version(image, 'cloud-init')
@@ -113,7 +114,7 @@ def upgrade(args, image):
 
     msg = 'upgrading cloud-init'
     LOG.debug(msg)
-    image.execute(['/bin/sh', '-c', cmd], description=msg)
+    image.execute(cmd, description=msg)
 
 
 def upgrade_full(args, image):
@@ -134,7 +135,7 @@ def upgrade_full(args, image):
 
     msg = 'full system upgrade'
     LOG.debug(msg)
-    image.execute(['/bin/sh', '-c', cmd], description=msg)
+    image.execute(cmd, description=msg)
 
 
 def run_script(args, image):
@@ -165,7 +166,7 @@ def enable_ppa(args, image):
     msg = 'enable ppa: "{}" in target'.format(ppa)
     LOG.debug(msg)
     cmd = 'add-apt-repository --yes {} && apt-get update'.format(ppa)
-    image.execute(['/bin/sh', '-c', cmd], description=msg)
+    image.execute(cmd, description=msg)
 
 
 def enable_repo(args, image):
@@ -188,7 +189,21 @@ def enable_repo(args, image):
 
     msg = 'enable repo: "{}" in target'.format(args.repo)
     LOG.debug(msg)
-    image.execute(['/bin/sh', '-c', cmd], description=msg)
+    image.execute(cmd, description=msg)
+
+
+def generate_ssh_keys(data_dir):
+    """Generate SSH keys to be used with image."""
+    LOG.info('generating SSH keys')
+    filename = os.path.join(data_dir, 'id_rsa')
+
+    if os.path.exists(filename):
+        c_util.del_file(filename)
+
+    c_util.subp(['ssh-keygen', '-t', 'rsa', '-b', '4096',
+                 '-f', filename, '-P', '',
+                 '-C', 'ubuntu@cloud_test'],
+                capture=True)
 
 
 def setup_image(args, image):
@@ -226,6 +241,7 @@ def setup_image(args, image):
         'set up for {}'.format(image), calls, continue_after_error=False)
     LOG.debug('after setup complete, installed cloud-init version is: %s',
               installed_package_version(image, 'cloud-init'))
+    generate_ssh_keys(args.data_dir)
     return res
 
 # vi: ts=4 expandtab
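On the --showformat changes above: the inner single quotes were only ever
stripped by a shell. With execute() now preserving the exact argv (via
shell_pack), they would reach dpkg-query literally and wrap the reported
version in quotes. A quick demonstration for a Debian/Ubuntu host, where
'dash' stands in for any installed package and the version shown is made up:

    import subprocess

    quoted = subprocess.check_output(
        ['dpkg-query', '-W', "--showformat='${Version}'", 'dash'])
    plain = subprocess.check_output(
        ['dpkg-query', '-W', '--showformat=${Version}', 'dash'])
    # quoted == b"'0.5.8-2.4'" (quotes included); plain == b'0.5.8-2.4'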
diff --git a/tests/cloud_tests/snapshots/nocloudkvm.py b/tests/cloud_tests/snapshots/nocloudkvm.py
new file mode 100644
index 0000000..0999834
--- /dev/null
+++ b/tests/cloud_tests/snapshots/nocloudkvm.py
@@ -0,0 +1,74 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Base NoCloud KVM snapshot."""
+import os
+
+from tests.cloud_tests.snapshots import base
+
+
+class NoCloudKVMSnapshot(base.Snapshot):
+    """NoCloud KVM image copy backed snapshot."""
+
+    platform_name = "nocloud-kvm"
+
+    def __init__(self, platform, properties, config, features,
+                 instance):
+        """Set up snapshot.
+
+        @param platform: platform object
+        @param properties: image properties
+        @param config: image config
+        @param features: supported feature flags
+        @param instance: instance the snapshot was created from
+        """
+        self.instance = instance
+
+        super(NoCloudKVMSnapshot, self).__init__(
+            platform, properties, config, features)
+
+    def launch(self, user_data, meta_data=None, block=True, start=True,
+               use_desc=None):
+        """Launch instance.
+
+        @param user_data: user-data for the instance
+        @param instance_id: instance-id for the instance
+        @param block: wait until instance is created
+        @param start: start instance and wait until fully started
+        @param use_desc: description of snapshot instance use
+        @return_value: an Instance
+        """
+        key_file = os.path.join(self.platform.config['data_dir'],
+                                self.platform.config['public_key'])
+        user_data = self.inject_ssh_key(user_data, key_file)
+
+        instance = self.platform.create_image(
+            self.properties, self.config, self.features,
+            self.instance.name, image_desc=str(self), use_desc=use_desc,
+            user_data=user_data, meta_data=meta_data)
+
+        if start:
+            instance.start()
+
+        return instance
+
+    def inject_ssh_key(self, user_data, key_file):
+        """Inject the authorized key into the user_data."""
+        with open(key_file) as f:
+            value = f.read()
+
+        key = 'ssh_authorized_keys:'
+        value = '  - %s' % value.strip()
+        user_data = user_data.split('\n')
+        if key in user_data:
+            user_data.insert(user_data.index(key) + 1, value)
+        else:
+            user_data.insert(-1, key)
+            user_data.insert(-1, value)
+
+        return '\n'.join(user_data)
+
+    def destroy(self):
+        """Clean up snapshot data."""
+        self.instance.destroy()
+        super(NoCloudKVMSnapshot, self).destroy()
+
+# vi: ts=4 expandtab
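A standalone before/after sketch of inject_ssh_key() above, using
hypothetical key material:

    def inject(user_data, pubkey):
        # Mirrors NoCloudKVMSnapshot.inject_ssh_key() above.
        key = 'ssh_authorized_keys:'
        value = '  - %s' % pubkey.strip()
        lines = user_data.split('\n')
        if key in lines:
            lines.insert(lines.index(key) + 1, value)
        else:
            lines.insert(-1, key)
            lines.insert(-1, value)
        return '\n'.join(lines)

    print(inject('#cloud-config\nruncmd:\n - [ls, /]\n', 'ssh-rsa AAAA...'))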
diff --git a/tests/cloud_tests/configs/bugs/README.md b/tests/cloud_tests/testcases/bugs/README.md
index 09ce076..09ce076 100644
--- a/tests/cloud_tests/configs/bugs/README.md
+++ b/tests/cloud_tests/testcases/bugs/README.md
diff --git a/tests/cloud_tests/configs/bugs/lp1511485.yaml b/tests/cloud_tests/testcases/bugs/lp1511485.yaml
index ebf9763..ebf9763 100644
--- a/tests/cloud_tests/configs/bugs/lp1511485.yaml
+++ b/tests/cloud_tests/testcases/bugs/lp1511485.yaml
diff --git a/tests/cloud_tests/configs/bugs/lp1611074.yaml b/tests/cloud_tests/testcases/bugs/lp1611074.yaml
index 960679d..960679d 100644
--- a/tests/cloud_tests/configs/bugs/lp1611074.yaml
+++ b/tests/cloud_tests/testcases/bugs/lp1611074.yaml
diff --git a/tests/cloud_tests/configs/bugs/lp1628337.yaml b/tests/cloud_tests/testcases/bugs/lp1628337.yaml
index e39b3cd..e39b3cd 100644
--- a/tests/cloud_tests/configs/bugs/lp1628337.yaml
+++ b/tests/cloud_tests/testcases/bugs/lp1628337.yaml
diff --git a/tests/cloud_tests/configs/examples/README.md b/tests/cloud_tests/testcases/examples/README.md
index 110a223..110a223 100644
--- a/tests/cloud_tests/configs/examples/README.md
+++ b/tests/cloud_tests/testcases/examples/README.md
diff --git a/tests/cloud_tests/configs/examples/TODO.md b/tests/cloud_tests/testcases/examples/TODO.md
index 8db0e98..8db0e98 100644
--- a/tests/cloud_tests/configs/examples/TODO.md
+++ b/tests/cloud_tests/testcases/examples/TODO.md
diff --git a/tests/cloud_tests/configs/examples/add_apt_repositories.yaml b/tests/cloud_tests/testcases/examples/add_apt_repositories.yaml
index 4b8575f..4b8575f 100644
--- a/tests/cloud_tests/configs/examples/add_apt_repositories.yaml
+++ b/tests/cloud_tests/testcases/examples/add_apt_repositories.yaml
diff --git a/tests/cloud_tests/configs/examples/alter_completion_message.yaml b/tests/cloud_tests/testcases/examples/alter_completion_message.yaml
index 9e154f8..9e154f8 100644
--- a/tests/cloud_tests/configs/examples/alter_completion_message.yaml
+++ b/tests/cloud_tests/testcases/examples/alter_completion_message.yaml
diff --git a/tests/cloud_tests/configs/examples/configure_instance_trusted_ca_certificates.yaml b/tests/cloud_tests/testcases/examples/configure_instance_trusted_ca_certificates.yaml
index ad32b08..ad32b08 100644
--- a/tests/cloud_tests/configs/examples/configure_instance_trusted_ca_certificates.yaml
+++ b/tests/cloud_tests/testcases/examples/configure_instance_trusted_ca_certificates.yaml
diff --git a/tests/cloud_tests/configs/examples/configure_instances_ssh_keys.yaml b/tests/cloud_tests/testcases/examples/configure_instances_ssh_keys.yaml
index f3eaf3c..f3eaf3c 100644
--- a/tests/cloud_tests/configs/examples/configure_instances_ssh_keys.yaml
+++ b/tests/cloud_tests/testcases/examples/configure_instances_ssh_keys.yaml
diff --git a/tests/cloud_tests/configs/examples/including_user_groups.yaml b/tests/cloud_tests/testcases/examples/including_user_groups.yaml
index 0aa7ad2..0aa7ad2 100644
--- a/tests/cloud_tests/configs/examples/including_user_groups.yaml
+++ b/tests/cloud_tests/testcases/examples/including_user_groups.yaml
diff --git a/tests/cloud_tests/configs/examples/install_arbitrary_packages.yaml b/tests/cloud_tests/testcases/examples/install_arbitrary_packages.yaml
index d398022..d398022 100644
--- a/tests/cloud_tests/configs/examples/install_arbitrary_packages.yaml
+++ b/tests/cloud_tests/testcases/examples/install_arbitrary_packages.yaml
diff --git a/tests/cloud_tests/configs/examples/install_run_chef_recipes.yaml b/tests/cloud_tests/testcases/examples/install_run_chef_recipes.yaml
index 0bec305..0bec305 100644
--- a/tests/cloud_tests/configs/examples/install_run_chef_recipes.yaml
+++ b/tests/cloud_tests/testcases/examples/install_run_chef_recipes.yaml
diff --git a/tests/cloud_tests/configs/examples/run_apt_upgrade.yaml b/tests/cloud_tests/testcases/examples/run_apt_upgrade.yaml
index 2b7eae4..2b7eae4 100644
--- a/tests/cloud_tests/configs/examples/run_apt_upgrade.yaml
+++ b/tests/cloud_tests/testcases/examples/run_apt_upgrade.yaml
diff --git a/tests/cloud_tests/configs/examples/run_commands.yaml b/tests/cloud_tests/testcases/examples/run_commands.yaml
index b0e311b..b0e311b 100644
--- a/tests/cloud_tests/configs/examples/run_commands.yaml
+++ b/tests/cloud_tests/testcases/examples/run_commands.yaml
diff --git a/tests/cloud_tests/configs/examples/run_commands_first_boot.yaml b/tests/cloud_tests/testcases/examples/run_commands_first_boot.yaml
index 7bd803d..7bd803d 100644
--- a/tests/cloud_tests/configs/examples/run_commands_first_boot.yaml
+++ b/tests/cloud_tests/testcases/examples/run_commands_first_boot.yaml
diff --git a/tests/cloud_tests/configs/examples/setup_run_puppet.yaml b/tests/cloud_tests/testcases/examples/setup_run_puppet.yaml
index e366c04..e366c04 100644
--- a/tests/cloud_tests/configs/examples/setup_run_puppet.yaml
+++ b/tests/cloud_tests/testcases/examples/setup_run_puppet.yaml
diff --git a/tests/cloud_tests/configs/examples/writing_out_arbitrary_files.yaml b/tests/cloud_tests/testcases/examples/writing_out_arbitrary_files.yaml
index 6f78f99..6f78f99 100644
--- a/tests/cloud_tests/configs/examples/writing_out_arbitrary_files.yaml
+++ b/tests/cloud_tests/testcases/examples/writing_out_arbitrary_files.yaml
diff --git a/tests/cloud_tests/configs/main/README.md b/tests/cloud_tests/testcases/main/README.md
index 6034606..6034606 100644
--- a/tests/cloud_tests/configs/main/README.md
+++ b/tests/cloud_tests/testcases/main/README.md
diff --git a/tests/cloud_tests/configs/main/command_output_simple.yaml b/tests/cloud_tests/testcases/main/command_output_simple.yaml
index 08ca894..08ca894 100644
--- a/tests/cloud_tests/configs/main/command_output_simple.yaml
+++ b/tests/cloud_tests/testcases/main/command_output_simple.yaml
diff --git a/tests/cloud_tests/configs/modules/README.md b/tests/cloud_tests/testcases/modules/README.md
index d66101f..d66101f 100644
--- a/tests/cloud_tests/configs/modules/README.md
+++ b/tests/cloud_tests/testcases/modules/README.md
diff --git a/tests/cloud_tests/configs/modules/TODO.md b/tests/cloud_tests/testcases/modules/TODO.md
index d496da9..0b933b3 100644
--- a/tests/cloud_tests/configs/modules/TODO.md
+++ b/tests/cloud_tests/testcases/modules/TODO.md
@@ -89,8 +89,6 @@ Not applicable to write a test for this as it specifies when something should be
 ## ssh authkey fingerprints
 The authkey_hash key does not appear to work. In fact the default claims to be md5, however syslog only shows sha256
 
-## ubuntu init switch
-
 ## update etc hosts
 2016-11-17: Issues with changing /etc/hosts and lxc backend.
 
diff --git a/tests/cloud_tests/configs/modules/apt_configure_conf.yaml b/tests/cloud_tests/testcases/modules/apt_configure_conf.yaml
index de45300..de45300 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_conf.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_conf.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_disable_suites.yaml b/tests/cloud_tests/testcases/modules/apt_configure_disable_suites.yaml
index 9880067..9880067 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_disable_suites.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_disable_suites.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_primary.yaml b/tests/cloud_tests/testcases/modules/apt_configure_primary.yaml
index 41bcf2f..41bcf2f 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_primary.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_primary.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_proxy.yaml b/tests/cloud_tests/testcases/modules/apt_configure_proxy.yaml
index be6c6f8..be6c6f8 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_proxy.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_proxy.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_security.yaml b/tests/cloud_tests/testcases/modules/apt_configure_security.yaml
index 83dd51d..83dd51d 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_security.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_security.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_sources_key.yaml b/tests/cloud_tests/testcases/modules/apt_configure_sources_key.yaml
index bde9398..bde9398 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_sources_key.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_sources_key.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_sources_keyserver.yaml b/tests/cloud_tests/testcases/modules/apt_configure_sources_keyserver.yaml
index 2508813..2508813 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_sources_keyserver.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_sources_keyserver.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_sources_list.yaml b/tests/cloud_tests/testcases/modules/apt_configure_sources_list.yaml
index 143cb08..143cb08 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_sources_list.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_sources_list.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_configure_sources_ppa.yaml b/tests/cloud_tests/testcases/modules/apt_configure_sources_ppa.yaml
index 9efdae5..9efdae5 100644
--- a/tests/cloud_tests/configs/modules/apt_configure_sources_ppa.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_configure_sources_ppa.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_pipelining_disable.yaml b/tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml
index bd9b5d0..bd9b5d0 100644
--- a/tests/cloud_tests/configs/modules/apt_pipelining_disable.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml
diff --git a/tests/cloud_tests/configs/modules/apt_pipelining_os.yaml b/tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml
index cbed3ba..cbed3ba 100644
--- a/tests/cloud_tests/configs/modules/apt_pipelining_os.yaml
+++ b/tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml
diff --git a/tests/cloud_tests/configs/modules/bootcmd.yaml b/tests/cloud_tests/testcases/modules/bootcmd.yaml
index 3a73994..3a73994 100644
--- a/tests/cloud_tests/configs/modules/bootcmd.yaml
+++ b/tests/cloud_tests/testcases/modules/bootcmd.yaml
diff --git a/tests/cloud_tests/configs/modules/byobu.yaml b/tests/cloud_tests/testcases/modules/byobu.yaml
index a9aa1f3..a9aa1f3 100644
--- a/tests/cloud_tests/configs/modules/byobu.yaml
+++ b/tests/cloud_tests/testcases/modules/byobu.yaml
diff --git a/tests/cloud_tests/configs/modules/ca_certs.yaml b/tests/cloud_tests/testcases/modules/ca_certs.yaml
index d939f43..d939f43 100644
--- a/tests/cloud_tests/configs/modules/ca_certs.yaml
+++ b/tests/cloud_tests/testcases/modules/ca_certs.yaml
diff --git a/tests/cloud_tests/configs/modules/debug_disable.yaml b/tests/cloud_tests/testcases/modules/debug_disable.yaml
index 63218b1..63218b1 100644
--- a/tests/cloud_tests/configs/modules/debug_disable.yaml
+++ b/tests/cloud_tests/testcases/modules/debug_disable.yaml
diff --git a/tests/cloud_tests/configs/modules/debug_enable.yaml b/tests/cloud_tests/testcases/modules/debug_enable.yaml
index d44147d..d44147d 100644
--- a/tests/cloud_tests/configs/modules/debug_enable.yaml
+++ b/tests/cloud_tests/testcases/modules/debug_enable.yaml
diff --git a/tests/cloud_tests/configs/modules/final_message.yaml b/tests/cloud_tests/testcases/modules/final_message.yaml
index c9ed611..c9ed611 100644
--- a/tests/cloud_tests/configs/modules/final_message.yaml
+++ b/tests/cloud_tests/testcases/modules/final_message.yaml
diff --git a/tests/cloud_tests/configs/modules/keys_to_console.yaml b/tests/cloud_tests/testcases/modules/keys_to_console.yaml
index 5d86e73..5d86e73 100644
--- a/tests/cloud_tests/configs/modules/keys_to_console.yaml
+++ b/tests/cloud_tests/testcases/modules/keys_to_console.yaml
diff --git a/tests/cloud_tests/configs/modules/landscape.yaml b/tests/cloud_tests/testcases/modules/landscape.yaml
index ed2c37c..ed2c37c 100644
--- a/tests/cloud_tests/configs/modules/landscape.yaml
+++ b/tests/cloud_tests/testcases/modules/landscape.yaml
diff --git a/tests/cloud_tests/configs/modules/locale.yaml b/tests/cloud_tests/testcases/modules/locale.yaml
index e01518a..e01518a 100644
--- a/tests/cloud_tests/configs/modules/locale.yaml
+++ b/tests/cloud_tests/testcases/modules/locale.yaml
diff --git a/tests/cloud_tests/configs/modules/lxd_bridge.yaml b/tests/cloud_tests/testcases/modules/lxd_bridge.yaml
index e6b7e76..e6b7e76 100644
--- a/tests/cloud_tests/configs/modules/lxd_bridge.yaml
+++ b/tests/cloud_tests/testcases/modules/lxd_bridge.yaml
diff --git a/tests/cloud_tests/configs/modules/lxd_dir.yaml b/tests/cloud_tests/testcases/modules/lxd_dir.yaml
index f93a3fa..f93a3fa 100644
--- a/tests/cloud_tests/configs/modules/lxd_dir.yaml
+++ b/tests/cloud_tests/testcases/modules/lxd_dir.yaml
diff --git a/tests/cloud_tests/configs/modules/ntp.yaml b/tests/cloud_tests/testcases/modules/ntp.yaml
index fbef431..fbef431 100644
--- a/tests/cloud_tests/configs/modules/ntp.yaml
+++ b/tests/cloud_tests/testcases/modules/ntp.yaml
diff --git a/tests/cloud_tests/configs/modules/ntp_pools.yaml b/tests/cloud_tests/testcases/modules/ntp_pools.yaml
index 3a93faa..3a93faa 100644
--- a/tests/cloud_tests/configs/modules/ntp_pools.yaml
+++ b/tests/cloud_tests/testcases/modules/ntp_pools.yaml
diff --git a/tests/cloud_tests/configs/modules/ntp_servers.yaml b/tests/cloud_tests/testcases/modules/ntp_servers.yaml
index d59d45a..d59d45a 100644
--- a/tests/cloud_tests/configs/modules/ntp_servers.yaml
+++ b/tests/cloud_tests/testcases/modules/ntp_servers.yaml
diff --git a/tests/cloud_tests/configs/modules/package_update_upgrade_install.yaml b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
index 71d24b8..71d24b8 100644
--- a/tests/cloud_tests/configs/modules/package_update_upgrade_install.yaml
+++ b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
diff --git a/tests/cloud_tests/configs/modules/runcmd.yaml b/tests/cloud_tests/testcases/modules/runcmd.yaml
index 04e5a05..04e5a05 100644
--- a/tests/cloud_tests/configs/modules/runcmd.yaml
+++ b/tests/cloud_tests/testcases/modules/runcmd.yaml
diff --git a/tests/cloud_tests/configs/modules/salt_minion.yaml b/tests/cloud_tests/testcases/modules/salt_minion.yaml
index f20d24f..f20d24f 100644
--- a/tests/cloud_tests/configs/modules/salt_minion.yaml
+++ b/tests/cloud_tests/testcases/modules/salt_minion.yaml
diff --git a/tests/cloud_tests/configs/modules/seed_random_command.yaml b/tests/cloud_tests/testcases/modules/seed_random_command.yaml
index 6a9157e..6a9157e 100644
--- a/tests/cloud_tests/configs/modules/seed_random_command.yaml
+++ b/tests/cloud_tests/testcases/modules/seed_random_command.yaml
diff --git a/tests/cloud_tests/configs/modules/seed_random_data.yaml b/tests/cloud_tests/testcases/modules/seed_random_data.yaml
index a9b2c88..a9b2c88 100644
--- a/tests/cloud_tests/configs/modules/seed_random_data.yaml
+++ b/tests/cloud_tests/testcases/modules/seed_random_data.yaml
diff --git a/tests/cloud_tests/configs/modules/set_hostname.yaml b/tests/cloud_tests/testcases/modules/set_hostname.yaml
index c96344c..c96344c 100644
--- a/tests/cloud_tests/configs/modules/set_hostname.yaml
+++ b/tests/cloud_tests/testcases/modules/set_hostname.yaml
diff --git a/tests/cloud_tests/configs/modules/set_hostname_fqdn.yaml b/tests/cloud_tests/testcases/modules/set_hostname_fqdn.yaml
index daf7593..daf7593 100644
--- a/tests/cloud_tests/configs/modules/set_hostname_fqdn.yaml
+++ b/tests/cloud_tests/testcases/modules/set_hostname_fqdn.yaml
diff --git a/tests/cloud_tests/configs/modules/set_password.yaml b/tests/cloud_tests/testcases/modules/set_password.yaml
index 04d7c58..04d7c58 100644
--- a/tests/cloud_tests/configs/modules/set_password.yaml
+++ b/tests/cloud_tests/testcases/modules/set_password.yaml
diff --git a/tests/cloud_tests/configs/modules/set_password_expire.yaml b/tests/cloud_tests/testcases/modules/set_password_expire.yaml
index 789604b..789604b 100644
--- a/tests/cloud_tests/configs/modules/set_password_expire.yaml
+++ b/tests/cloud_tests/testcases/modules/set_password_expire.yaml
diff --git a/tests/cloud_tests/configs/modules/set_password_list.yaml b/tests/cloud_tests/testcases/modules/set_password_list.yaml
index a2a89c9..a2a89c9 100644
--- a/tests/cloud_tests/configs/modules/set_password_list.yaml
+++ b/tests/cloud_tests/testcases/modules/set_password_list.yaml
diff --git a/tests/cloud_tests/configs/modules/set_password_list_string.yaml b/tests/cloud_tests/testcases/modules/set_password_list_string.yaml
index c2a0f63..c2a0f63 100644
--- a/tests/cloud_tests/configs/modules/set_password_list_string.yaml
+++ b/tests/cloud_tests/testcases/modules/set_password_list_string.yaml
diff --git a/tests/cloud_tests/configs/modules/snappy.yaml b/tests/cloud_tests/testcases/modules/snappy.yaml
index 43f9329..43f9329 100644
--- a/tests/cloud_tests/configs/modules/snappy.yaml
+++ b/tests/cloud_tests/testcases/modules/snappy.yaml
diff --git a/tests/cloud_tests/configs/modules/ssh_auth_key_fingerprints_disable.yaml b/tests/cloud_tests/testcases/modules/ssh_auth_key_fingerprints_disable.yaml
index 746653e..746653e 100644
--- a/tests/cloud_tests/configs/modules/ssh_auth_key_fingerprints_disable.yaml
+++ b/tests/cloud_tests/testcases/modules/ssh_auth_key_fingerprints_disable.yaml
diff --git a/tests/cloud_tests/configs/modules/ssh_auth_key_fingerprints_enable.yaml b/tests/cloud_tests/testcases/modules/ssh_auth_key_fingerprints_enable.yaml
index 9f5dc34..9f5dc34 100644
--- a/tests/cloud_tests/configs/modules/ssh_auth_key_fingerprints_enable.yaml
+++ b/tests/cloud_tests/testcases/modules/ssh_auth_key_fingerprints_enable.yaml
diff --git a/tests/cloud_tests/configs/modules/ssh_import_id.yaml b/tests/cloud_tests/testcases/modules/ssh_import_id.yaml
index b62d3f6..b62d3f6 100644
--- a/tests/cloud_tests/configs/modules/ssh_import_id.yaml
+++ b/tests/cloud_tests/testcases/modules/ssh_import_id.yaml
diff --git a/tests/cloud_tests/configs/modules/ssh_keys_generate.yaml b/tests/cloud_tests/testcases/modules/ssh_keys_generate.yaml
index 659fd93..659fd93 100644
--- a/tests/cloud_tests/configs/modules/ssh_keys_generate.yaml
+++ b/tests/cloud_tests/testcases/modules/ssh_keys_generate.yaml
diff --git a/tests/cloud_tests/configs/modules/ssh_keys_provided.yaml b/tests/cloud_tests/testcases/modules/ssh_keys_provided.yaml
index 5ceb362..5ceb362 100644
--- a/tests/cloud_tests/configs/modules/ssh_keys_provided.yaml
+++ b/tests/cloud_tests/testcases/modules/ssh_keys_provided.yaml
diff --git a/tests/cloud_tests/configs/modules/timezone.yaml b/tests/cloud_tests/testcases/modules/timezone.yaml
index 5112aa9..5112aa9 100644
--- a/tests/cloud_tests/configs/modules/timezone.yaml
+++ b/tests/cloud_tests/testcases/modules/timezone.yaml
diff --git a/tests/cloud_tests/configs/modules/user_groups.yaml b/tests/cloud_tests/testcases/modules/user_groups.yaml
index 71cc9da..71cc9da 100644
--- a/tests/cloud_tests/configs/modules/user_groups.yaml
+++ b/tests/cloud_tests/testcases/modules/user_groups.yaml
diff --git a/tests/cloud_tests/configs/modules/write_files.yaml b/tests/cloud_tests/testcases/modules/write_files.yaml
index ce936b7..ce936b7 100644
--- a/tests/cloud_tests/configs/modules/write_files.yaml
+++ b/tests/cloud_tests/testcases/modules/write_files.yaml
diff --git a/tests/cloud_tests/util.py b/tests/cloud_tests/util.py
index 2bbe21c..4357fbb 100644
--- a/tests/cloud_tests/util.py
+++ b/tests/cloud_tests/util.py
@@ -2,12 +2,14 @@
 
 """Utilities for re-use across integration tests."""
 
+import base64
 import copy
 import glob
 import os
 import random
 import shutil
 import string
+import subprocess
 import tempfile
 import yaml
 
@@ -242,6 +244,47 @@ def update_user_data(user_data, updates, dump_to_yaml=True):
             if dump_to_yaml else user_data)
 
 
+def shell_safe(cmd):
+    """Produce string safe shell string.
+
+    Create a string that can be passed to:
+         set -- <string>
+    to produce the same array that cmd represents.
+
+    Internally we rely on getopt's knowledge of how to quote strings
+    safely for the shell.  This implementation could be changed to be
+    pure python.  It is just a matter of correctly escaping
+    or quoting characters like: ' " ^ & $ ; ( ) ...
+
+    @param cmd: command as a list
+    """
+    out = subprocess.check_output(
+        ["getopt", "--shell", "sh", "--options", "", "--", "--"] + list(cmd))
+    # out contains ' -- <data>\n'. drop the ' -- ' and the '\n'
+    return out[4:-1].decode()
+
+
+def shell_pack(cmd):
+    """Return a string that can shuffled through 'sh' and execute cmd.
+
+    In Python subprocess terms:
+        check_output(cmd) == check_output(shell_pack(cmd), shell=True)
+
+    @param cmd: list or string of command to pack up
+    """
+
+    if isinstance(cmd, str):
+        cmd = [cmd]
+    else:
+        cmd = list(cmd)
+
+    stuffed = shell_safe(cmd)
+    # b64encode returns bytes even though base64 output is plain ASCII;
+    # decode so it can be interpolated into the command string.
+    b64 = base64.b64encode(stuffed.encode()).decode()
+    return 'eval set -- "$(echo %s | base64 --decode)" && exec "$@"' % b64
+
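A round-trip check of the shell_safe()/shell_pack() pair above, assuming the
module is importable as tests.cloud_tests.util. The equivalence promised in
the shell_pack() docstring holds even for arguments a naive ' '.join() would
mangle:

    import subprocess
    from tests.cloud_tests import util

    cmd = ['printf', '%s\n', 'a b', '$HOME']  # spaces and a literal $
    direct = subprocess.check_output(cmd)
    via_sh = subprocess.check_output(util.shell_pack(cmd), shell=True)
    assert direct == via_sh  # the base64 detour preserves the argv exactly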
+
 class InTargetExecuteError(c_util.ProcessExecutionError):
     """Error type for in target commands that fail."""
 
diff --git a/tests/unittests/test__init__.py b/tests/unittests/test__init__.py
index 781f6d5..25878d7 100644
--- a/tests/unittests/test__init__.py
+++ b/tests/unittests/test__init__.py
@@ -12,7 +12,7 @@ from cloudinit import settings
 from cloudinit import url_helper
 from cloudinit import util
 
-from .helpers import TestCase, CiTestCase, ExitStack, mock
+from cloudinit.tests.helpers import TestCase, CiTestCase, ExitStack, mock
 
 
 class FakeModule(handlers.Handler):
diff --git a/tests/unittests/test_atomic_helper.py b/tests/unittests/test_atomic_helper.py
index 515919d..0101b0e 100644
--- a/tests/unittests/test_atomic_helper.py
+++ b/tests/unittests/test_atomic_helper.py
@@ -6,7 +6,7 @@ import stat
 
 from cloudinit import atomic_helper
 
-from .helpers import CiTestCase
+from cloudinit.tests.helpers import CiTestCase
 
 
 class TestAtomicHelper(CiTestCase):
diff --git a/tests/unittests/test_builtin_handlers.py b/tests/unittests/test_builtin_handlers.py
index dd9d035..9751ed9 100644
--- a/tests/unittests/test_builtin_handlers.py
+++ b/tests/unittests/test_builtin_handlers.py
@@ -11,7 +11,7 @@ try:
 except ImportError:
     import mock
 
-from . import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 from cloudinit import handlers
 from cloudinit import helpers
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 06f366b..fccbbd2 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -2,7 +2,7 @@
 
 import six
 
-from . import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 from cloudinit.cmd import main as cli
 
@@ -31,9 +31,151 @@ class TestCLI(test_helpers.FilesystemMockingTestCase):
 
     def test_no_arguments_shows_error_message(self):
         exit_code = self._call_main()
-        self.assertIn('cloud-init: error: too few arguments',
-                      self.stderr.getvalue())
+        missing_subcommand_message = [
+            'too few arguments',  # python2.7 msg
+            'the following arguments are required: subcommand'  # python3 msg
+        ]
+        error = self.stderr.getvalue()
+        matches = [msg in error for msg in missing_subcommand_message]
+        self.assertTrue(
+            any(matches), 'Did not find error message for missing subcommand')
         self.assertEqual(2, exit_code)
 
+    def test_all_subcommands_represented_in_help(self):
+        """All known subparsers are represented in the cloud-int help doc."""
+        self._call_main()
+        error = self.stderr.getvalue()
+        expected_subcommands = ['analyze', 'init', 'modules', 'single',
+                                'dhclient-hook', 'features', 'devel']
+        for subcommand in expected_subcommands:
+            self.assertIn(subcommand, error)
 
-# vi: ts=4 expandtab
+    @mock.patch('cloudinit.cmd.main.status_wrapper')
+    def test_init_subcommand_parser(self, m_status_wrapper):
+        """The subcommand 'init' calls status_wrapper passing init."""
+        self._call_main(['cloud-init', 'init'])
+        (name, parseargs) = m_status_wrapper.call_args_list[0][0]
+        self.assertEqual('init', name)
+        self.assertEqual('init', parseargs.subcommand)
+        self.assertEqual('init', parseargs.action[0])
+        self.assertEqual('main_init', parseargs.action[1].__name__)
+
+    @mock.patch('cloudinit.cmd.main.status_wrapper')
+    def test_modules_subcommand_parser(self, m_status_wrapper):
+        """The subcommand 'modules' calls status_wrapper passing modules."""
+        self._call_main(['cloud-init', 'modules'])
+        (name, parseargs) = m_status_wrapper.call_args_list[0][0]
+        self.assertEqual('modules', name)
+        self.assertEqual('modules', parseargs.subcommand)
+        self.assertEqual('modules', parseargs.action[0])
+        self.assertEqual('main_modules', parseargs.action[1].__name__)
+
+    def test_conditional_subcommands_from_entry_point_sys_argv(self):
+        """Subcommands from entry-point are properly parsed from sys.argv."""
+        stdout = six.StringIO()
+        self.patchStdoutAndStderr(stdout=stdout)
+
+        expected_errors = [
+            'usage: cloud-init analyze', 'usage: cloud-init collect-logs',
+            'usage: cloud-init devel']
+        conditional_subcommands = ['analyze', 'collect-logs', 'devel']
+        # The cloud-init entrypoint calls main without passing sys_argv
+        for subcommand in conditional_subcommands:
+            with mock.patch('sys.argv', ['cloud-init', subcommand, '-h']):
+                try:
+                    cli.main()
+                except SystemExit as e:
+                    self.assertEqual(0, e.code)  # exit 0 on proper -h usage
+        for error_message in expected_errors:
+            self.assertIn(error_message, stdout.getvalue())
+
+    def test_analyze_subcommand_parser(self):
+        """The subcommand cloud-init analyze calls the correct subparser."""
+        self._call_main(['cloud-init', 'analyze'])
+        # These subcommands are only valid for 'cloud-init analyze'
+        expected_subcommands = ['blame', 'show', 'dump']
+        error = self.stderr.getvalue()
+        for subcommand in expected_subcommands:
+            self.assertIn(subcommand, error)
+
+    def test_collect_logs_subcommand_parser(self):
+        """The subcommand cloud-init collect-logs calls the subparser."""
+        # Provide -h param to collect-logs to avoid having to mock behavior.
+        stdout = six.StringIO()
+        self.patchStdoutAndStderr(stdout=stdout)
+        self._call_main(['cloud-init', 'collect-logs', '-h'])
+        self.assertIn('usage: cloud-init collect-log', stdout.getvalue())
+
+    def test_devel_subcommand_parser(self):
+        """The subcommand cloud-init devel calls the correct subparser."""
+        self._call_main(['cloud-init', 'devel'])
+        # These subcommands are only valid for 'cloud-init devel'
+        expected_subcommands = ['schema']
+        error = self.stderr.getvalue()
+        for subcommand in expected_subcommands:
+            self.assertIn(subcommand, error)
+
+    @mock.patch('cloudinit.config.schema.handle_schema_args')
+    def test_wb_devel_schema_subcommand_parser(self, m_schema):
+        """The subcommand cloud-init schema calls the correct subparser."""
+        exit_code = self._call_main(['cloud-init', 'devel', 'schema'])
+        self.assertEqual(1, exit_code)
+        # Known whitebox output from schema subcommand
+        self.assertEqual(
+            'Expected either --config-file argument or --doc\n',
+            self.stderr.getvalue())
+
+    def test_wb_devel_schema_subcommand_doc_content(self):
+        """Validate that doc content is sane from known examples."""
+        stdout = six.StringIO()
+        self.patchStdoutAndStderr(stdout=stdout)
+        self._call_main(['cloud-init', 'devel', 'schema', '--doc'])
+        expected_doc_sections = [
+            '**Supported distros:** all',
+            '**Supported distros:** centos, debian, fedora',
+            '**Config schema**:\n    **resize_rootfs:** (true/false/noblock)',
+            '**Examples**::\n\n    runcmd:\n        - [ ls, -l, / ]\n'
+        ]
+        stdout = stdout.getvalue()
+        for expected in expected_doc_sections:
+            self.assertIn(expected, stdout)
+
+    @mock.patch('cloudinit.cmd.main.main_single')
+    def test_single_subcommand(self, m_main_single):
+        """The subcommand 'single' calls main_single with valid args."""
+        self._call_main(['cloud-init', 'single', '--name', 'cc_ntp'])
+        (name, parseargs) = m_main_single.call_args_list[0][0]
+        self.assertEqual('single', name)
+        self.assertEqual('single', parseargs.subcommand)
+        self.assertEqual('single', parseargs.action[0])
+        self.assertFalse(parseargs.debug)
+        self.assertFalse(parseargs.force)
+        self.assertIsNone(parseargs.frequency)
+        self.assertEqual('cc_ntp', parseargs.name)
+        self.assertFalse(parseargs.report)
+
+    @mock.patch('cloudinit.cmd.main.dhclient_hook')
+    def test_dhclient_hook_subcommand(self, m_dhclient_hook):
+        """The subcommand 'dhclient-hook' calls dhclient_hook with args."""
+        self._call_main(['cloud-init', 'dhclient-hook', 'net_action', 'eth0'])
+        (name, parseargs) = m_dhclient_hook.call_args_list[0][0]
+        self.assertEqual('dhclient_hook', name)
+        self.assertEqual('dhclient-hook', parseargs.subcommand)
+        self.assertEqual('dhclient_hook', parseargs.action[0])
+        self.assertFalse(parseargs.debug)
+        self.assertFalse(parseargs.force)
+        self.assertEqual('net_action', parseargs.net_action)
+        self.assertEqual('eth0', parseargs.net_interface)
+
+    @mock.patch('cloudinit.cmd.main.main_features')
+    def test_features_hook_subcommand(self, m_features):
+        """The subcommand 'features' calls main_features with args."""
+        self._call_main(['cloud-init', 'features'])
+        (name, parseargs) = m_features.call_args_list[0][0]
+        self.assertEqual('features', name)
+        self.assertEqual('features', parseargs.subcommand)
+        self.assertEqual('features', parseargs.action[0])
+        self.assertFalse(parseargs.debug)
+        self.assertFalse(parseargs.force)
+
+# vi: ts=4 expandtab
diff --git a/tests/unittests/test_cs_util.py b/tests/unittests/test_cs_util.py
index b8f5031..ee88520 100644
--- a/tests/unittests/test_cs_util.py
+++ b/tests/unittests/test_cs_util.py
@@ -2,7 +2,7 @@
 
 from __future__ import print_function
 
-from . import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 from cloudinit.cs_utils import Cepko
 
diff --git a/tests/unittests/test_data.py b/tests/unittests/test_data.py
index 4ad86bb..6d621d2 100644
--- a/tests/unittests/test_data.py
+++ b/tests/unittests/test_data.py
@@ -27,7 +27,7 @@ from cloudinit import stages
 from cloudinit import user_data as ud
 from cloudinit import util
 
-from . import helpers
+from cloudinit.tests import helpers
 
 
 INSTANCE_ID = "i-testing"
diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py
index 990bff2..82ee971 100644
--- a/tests/unittests/test_datasource/test_aliyun.py
+++ b/tests/unittests/test_datasource/test_aliyun.py
@@ -5,9 +5,9 @@ import httpretty
 import mock
 import os
 
-from .. import helpers as test_helpers
 from cloudinit import helpers
 from cloudinit.sources import DataSourceAliYun as ay
+from cloudinit.tests import helpers as test_helpers
 
 DEFAULT_METADATA = {
     'instance-id': 'aliyun-test-vm-00',
@@ -70,7 +70,6 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
         paths = helpers.Paths({})
         self.ds = ay.DataSourceAliYun(cfg, distro, paths)
         self.metadata_address = self.ds.metadata_urls[0]
-        self.api_ver = self.ds.api_ver
 
     @property
     def default_metadata(self):
@@ -82,13 +81,15 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
 
     @property
     def metadata_url(self):
-        return os.path.join(self.metadata_address,
-                            self.api_ver, 'meta-data') + '/'
+        return os.path.join(
+            self.metadata_address,
+            self.ds.min_metadata_version, 'meta-data') + '/'
 
     @property
     def userdata_url(self):
-        return os.path.join(self.metadata_address,
-                            self.api_ver, 'user-data')
+        return os.path.join(
+            self.metadata_address,
+            self.ds.min_metadata_version, 'user-data')
 
     def regist_default_server(self):
         register_mock_metaserver(self.metadata_url, self.default_metadata)
diff --git a/tests/unittests/test_datasource/test_altcloud.py b/tests/unittests/test_datasource/test_altcloud.py
index 9c46abc..a4dfb54 100644
--- a/tests/unittests/test_datasource/test_altcloud.py
+++ b/tests/unittests/test_datasource/test_altcloud.py
@@ -18,7 +18,7 @@ import tempfile
 from cloudinit import helpers
 from cloudinit import util
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 import cloudinit.sources.DataSourceAltCloud as dsac
 
@@ -280,8 +280,8 @@ class TestUserDataRhevm(TestCase):
             pass
 
         dsac.CLOUD_INFO_FILE = '/etc/sysconfig/cloud-info'
-        dsac.CMD_PROBE_FLOPPY = ['/sbin/modprobe', 'floppy']
-        dsac.CMD_UDEVADM_SETTLE = ['/sbin/udevadm', 'settle',
+        dsac.CMD_PROBE_FLOPPY = ['modprobe', 'floppy']
+        dsac.CMD_UDEVADM_SETTLE = ['udevadm', 'settle',
                                    '--quiet', '--timeout=5']
 
     def test_mount_cb_fails(self):
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 20e70fb..0a11777 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -6,8 +6,8 @@ from cloudinit.sources import DataSourceAzure as dsaz
 from cloudinit.util import find_freebsd_part
 from cloudinit.util import get_path_dev_freebsd
 
-from ..helpers import (CiTestCase, TestCase, populate_dir, mock,
-                       ExitStack, PY26, SkipTest)
+from cloudinit.tests.helpers import (CiTestCase, TestCase, populate_dir, mock,
+                                     ExitStack, PY26, SkipTest)
 
 import crypt
 import os
@@ -871,6 +871,7 @@ class TestLoadAzureDsDir(CiTestCase):
 
 
 class TestReadAzureOvf(TestCase):
+
     def test_invalid_xml_raises_non_azure_ds(self):
         invalid_xml = "<foo>" + construct_valid_ovf_env(data={})
         self.assertRaises(dsaz.BrokenAzureDataSource,
@@ -1079,6 +1080,7 @@ class TestCanDevBeReformatted(CiTestCase):
 
 
 class TestAzureNetExists(CiTestCase):
+
     def test_azure_net_must_exist_for_legacy_objpkl(self):
         """DataSourceAzureNet must exist for old obj.pkl files
            that reference it."""
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index b2d2971..b42b073 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -1,10 +1,12 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import os
+from textwrap import dedent
 
 from cloudinit.sources.helpers import azure as azure_helper
-from ..helpers import ExitStack, mock, TestCase
+from cloudinit.tests.helpers import CiTestCase, ExitStack, mock, populate_dir
 
+from cloudinit.sources.helpers.azure import WALinuxAgentShim as wa_shim
 
 GOAL_STATE_TEMPLATE = """\
 <?xml version="1.0" encoding="utf-8"?>
@@ -45,7 +47,7 @@ GOAL_STATE_TEMPLATE = """\
 """
 
 
-class TestFindEndpoint(TestCase):
+class TestFindEndpoint(CiTestCase):
 
     def setUp(self):
         super(TestFindEndpoint, self).setUp()
@@ -56,18 +58,19 @@ class TestFindEndpoint(TestCase):
             mock.patch.object(azure_helper.util, 'load_file'))
 
         self.dhcp_options = patches.enter_context(
-            mock.patch.object(azure_helper.WALinuxAgentShim,
-                              '_load_dhclient_json'))
+            mock.patch.object(wa_shim, '_load_dhclient_json'))
+
+        self.networkd_leases = patches.enter_context(
+            mock.patch.object(wa_shim, '_networkd_get_value_from_leases'))
+        self.networkd_leases.return_value = None
 
     def test_missing_file(self):
-        self.assertRaises(ValueError,
-                          azure_helper.WALinuxAgentShim.find_endpoint)
+        self.assertRaises(ValueError, wa_shim.find_endpoint)
 
     def test_missing_special_azure_line(self):
         self.load_file.return_value = ''
         self.dhcp_options.return_value = {'eth0': {'key': 'value'}}
-        self.assertRaises(ValueError,
-                          azure_helper.WALinuxAgentShim.find_endpoint)
+        self.assertRaises(ValueError, wa_shim.find_endpoint)
 
     @staticmethod
     def _build_lease_content(encoded_address):
@@ -80,8 +83,7 @@ class TestFindEndpoint(TestCase):
 
     def test_from_dhcp_client(self):
         self.dhcp_options.return_value = {"eth0": {"unknown_245": "5:4:3:2"}}
-        self.assertEqual('5.4.3.2',
-                         azure_helper.WALinuxAgentShim.find_endpoint(None))
+        self.assertEqual('5.4.3.2', wa_shim.find_endpoint(None))
 
     def test_latest_lease_used(self):
         encoded_addresses = ['5:4:3:2', '4:3:2:1']
@@ -89,53 +91,38 @@ class TestFindEndpoint(TestCase):
                                   for encoded_address in encoded_addresses])
         self.load_file.return_value = file_content
         self.assertEqual(encoded_addresses[-1].replace(':', '.'),
-                         azure_helper.WALinuxAgentShim.find_endpoint("foobar"))
+                         wa_shim.find_endpoint("foobar"))
 
 
-class TestExtractIpAddressFromLeaseValue(TestCase):
+class TestExtractIpAddressFromLeaseValue(CiTestCase):
 
     def test_hex_string(self):
         ip_address, encoded_address = '98.76.54.32', '62:4c:36:20'
         self.assertEqual(
-            ip_address,
-            azure_helper.WALinuxAgentShim.get_ip_from_lease_value(
-                encoded_address
-            ))
+            ip_address, wa_shim.get_ip_from_lease_value(encoded_address))
 
     def test_hex_string_with_single_character_part(self):
         ip_address, encoded_address = '4.3.2.1', '4:3:2:1'
         self.assertEqual(
-            ip_address,
-            azure_helper.WALinuxAgentShim.get_ip_from_lease_value(
-                encoded_address
-            ))
+            ip_address, wa_shim.get_ip_from_lease_value(encoded_address))
 
     def test_packed_string(self):
         ip_address, encoded_address = '98.76.54.32', 'bL6 '
         self.assertEqual(
-            ip_address,
-            azure_helper.WALinuxAgentShim.get_ip_from_lease_value(
-                encoded_address
-            ))
+            ip_address, wa_shim.get_ip_from_lease_value(encoded_address))
 
     def test_packed_string_with_escaped_quote(self):
         ip_address, encoded_address = '100.72.34.108', 'dH\\"l'
         self.assertEqual(
-            ip_address,
-            azure_helper.WALinuxAgentShim.get_ip_from_lease_value(
-                encoded_address
-            ))
+            ip_address, wa_shim.get_ip_from_lease_value(encoded_address))
 
     def test_packed_string_containing_a_colon(self):
         ip_address, encoded_address = '100.72.58.108', 'dH:l'
         self.assertEqual(
-            ip_address,
-            azure_helper.WALinuxAgentShim.get_ip_from_lease_value(
-                encoded_address
-            ))
+            ip_address, wa_shim.get_ip_from_lease_value(encoded_address))
 
 
-class TestGoalStateParsing(TestCase):
+class TestGoalStateParsing(CiTestCase):
 
     default_parameters = {
         'incarnation': 1,
@@ -195,7 +182,7 @@ class TestGoalStateParsing(TestCase):
         self.assertIsNone(certificates_xml)
 
 
-class TestAzureEndpointHttpClient(TestCase):
+class TestAzureEndpointHttpClient(CiTestCase):
 
     regular_headers = {
         'x-ms-agent-name': 'WALinuxAgent',
@@ -258,7 +245,7 @@ class TestAzureEndpointHttpClient(TestCase):
             self.read_file_or_url.call_args)
 
 
-class TestOpenSSLManager(TestCase):
+class TestOpenSSLManager(CiTestCase):
 
     def setUp(self):
         super(TestOpenSSLManager, self).setUp()
@@ -275,7 +262,7 @@ class TestOpenSSLManager(TestCase):
                 mock.patch('builtins.open'))
 
     @mock.patch.object(azure_helper, 'cd', mock.MagicMock())
-    @mock.patch.object(azure_helper.tempfile, 'mkdtemp')
+    @mock.patch.object(azure_helper.temp_utils, 'mkdtemp')
     def test_openssl_manager_creates_a_tmpdir(self, mkdtemp):
         manager = azure_helper.OpenSSLManager()
         self.assertEqual(mkdtemp.return_value, manager.tmpdir)
@@ -292,7 +279,7 @@ class TestOpenSSLManager(TestCase):
         manager.clean_up()
 
     @mock.patch.object(azure_helper, 'cd', mock.MagicMock())
-    @mock.patch.object(azure_helper.tempfile, 'mkdtemp', mock.MagicMock())
+    @mock.patch.object(azure_helper.temp_utils, 'mkdtemp', mock.MagicMock())
     @mock.patch.object(azure_helper.util, 'del_dir')
     def test_clean_up(self, del_dir):
         manager = azure_helper.OpenSSLManager()
@@ -300,7 +287,7 @@ class TestOpenSSLManager(TestCase):
         self.assertEqual([mock.call(manager.tmpdir)], del_dir.call_args_list)
 
 
-class TestWALinuxAgentShim(TestCase):
+class TestWALinuxAgentShim(CiTestCase):
 
     def setUp(self):
         super(TestWALinuxAgentShim, self).setUp()
@@ -310,8 +297,7 @@ class TestWALinuxAgentShim(TestCase):
         self.AzureEndpointHttpClient = patches.enter_context(
             mock.patch.object(azure_helper, 'AzureEndpointHttpClient'))
         self.find_endpoint = patches.enter_context(
-            mock.patch.object(
-                azure_helper.WALinuxAgentShim, 'find_endpoint'))
+            mock.patch.object(wa_shim, 'find_endpoint'))
         self.GoalState = patches.enter_context(
             mock.patch.object(azure_helper, 'GoalState'))
         self.OpenSSLManager = patches.enter_context(
@@ -320,7 +306,7 @@ class TestWALinuxAgentShim(TestCase):
             mock.patch.object(azure_helper.time, 'sleep', mock.MagicMock()))
 
     def test_http_client_uses_certificate(self):
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         shim.register_with_azure_and_fetch_data()
         self.assertEqual(
             [mock.call(self.OpenSSLManager.return_value.certificate)],
@@ -328,7 +314,7 @@ class TestWALinuxAgentShim(TestCase):
 
     def test_correct_url_used_for_goalstate(self):
         self.find_endpoint.return_value = 'test_endpoint'
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         shim.register_with_azure_and_fetch_data()
         get = self.AzureEndpointHttpClient.return_value.get
         self.assertEqual(
@@ -340,7 +326,7 @@ class TestWALinuxAgentShim(TestCase):
             self.GoalState.call_args_list)
 
     def test_certificates_used_to_determine_public_keys(self):
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         data = shim.register_with_azure_and_fetch_data()
         self.assertEqual(
             [mock.call(self.GoalState.return_value.certificates_xml)],
@@ -351,13 +337,13 @@ class TestWALinuxAgentShim(TestCase):
 
     def test_absent_certificates_produces_empty_public_keys(self):
         self.GoalState.return_value.certificates_xml = None
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         data = shim.register_with_azure_and_fetch_data()
         self.assertEqual([], data['public-keys'])
 
     def test_correct_url_used_for_report_ready(self):
         self.find_endpoint.return_value = 'test_endpoint'
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         shim.register_with_azure_and_fetch_data()
         expected_url = 'http://test_endpoint/machine?comp=health'
         self.assertEqual(
@@ -368,7 +354,7 @@ class TestWALinuxAgentShim(TestCase):
         self.GoalState.return_value.incarnation = 'TestIncarnation'
         self.GoalState.return_value.container_id = 'TestContainerId'
         self.GoalState.return_value.instance_id = 'TestInstanceId'
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         shim.register_with_azure_and_fetch_data()
         posted_document = (
             self.AzureEndpointHttpClient.return_value.post.call_args[1]['data']
@@ -378,11 +364,11 @@ class TestWALinuxAgentShim(TestCase):
         self.assertIn('TestInstanceId', posted_document)
 
     def test_clean_up_can_be_called_at_any_time(self):
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         shim.clean_up()
 
     def test_clean_up_will_clean_up_openssl_manager_if_instantiated(self):
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         shim.register_with_azure_and_fetch_data()
         shim.clean_up()
         self.assertEqual(
@@ -393,12 +379,12 @@ class TestWALinuxAgentShim(TestCase):
             pass
         self.AzureEndpointHttpClient.return_value.get.side_effect = (
             SentinelException)
-        shim = azure_helper.WALinuxAgentShim()
+        shim = wa_shim()
         self.assertRaises(SentinelException,
                           shim.register_with_azure_and_fetch_data)
 
 
-class TestGetMetadataFromFabric(TestCase):
+class TestGetMetadataFromFabric(CiTestCase):
 
     @mock.patch.object(azure_helper, 'WALinuxAgentShim')
     def test_data_from_shim_returned(self, shim):
@@ -422,4 +408,65 @@ class TestGetMetadataFromFabric(TestCase):
                           azure_helper.get_metadata_from_fabric)
         self.assertEqual(1, shim.return_value.clean_up.call_count)
 
+
+class TestExtractIpAddressFromNetworkd(CiTestCase):
+
+    azure_lease = dedent("""\
+    # This is private data. Do not parse.
+    ADDRESS=10.132.0.5
+    NETMASK=255.255.255.255
+    ROUTER=10.132.0.1
+    SERVER_ADDRESS=169.254.169.254
+    NEXT_SERVER=10.132.0.1
+    MTU=1460
+    T1=43200
+    T2=75600
+    LIFETIME=86400
+    DNS=169.254.169.254
+    NTP=169.254.169.254
+    DOMAINNAME=c.ubuntu-foundations.internal
+    DOMAIN_SEARCH_LIST=c.ubuntu-foundations.internal google.internal
+    HOSTNAME=tribaal-test-171002-1349.c.ubuntu-foundations.internal
+    ROUTES=10.132.0.1/32,0.0.0.0 0.0.0.0/0,10.132.0.1
+    CLIENTID=ff405663a200020000ab11332859494d7a8b4c
+    OPTION_245=624c3620
+    """)
+
+    def setUp(self):
+        super(TestExtractIpAddressFromNetworkd, self).setUp()
+        self.lease_d = self.tmp_dir()
+
+    def test_no_valid_leases_is_none(self):
+        """No valid leases should return None."""
+        self.assertIsNone(
+            wa_shim._networkd_get_value_from_leases(self.lease_d))
+
+    def test_option_245_is_found_in_single(self):
+        """A single valid lease with 245 option should return it."""
+        populate_dir(self.lease_d, {'9': self.azure_lease})
+        self.assertEqual(
+            '624c3620', wa_shim._networkd_get_value_from_leases(self.lease_d))
+
+    def test_option_245_not_found_returns_None(self):
+        """A valid lease, but no option 245 should return None."""
+        populate_dir(
+            self.lease_d,
+            {'9': self.azure_lease.replace("OPTION_245", "OPTION_999")})
+        self.assertIsNone(
+            wa_shim._networkd_get_value_from_leases(self.lease_d))
+
+    def test_multiple_returns_first(self):
+        """Somewhat arbitrarily return the first address when multiple.
+
+        Most important at the moment is that the behavior is consistent,
+        rather than changing randomly as dictionary ordering would."""
+        myval = "624c3601"
+        populate_dir(
+            self.lease_d,
+            {'9': self.azure_lease,
+             '2': self.azure_lease.replace("624c3620", myval)})
+        self.assertEqual(
+            myval, wa_shim._networkd_get_value_from_leases(self.lease_d))
+
+
 # vi: ts=4 expandtab
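For reference, the two lease-value encodings these tests exercise, plus the
networkd lease scan, can be sketched as standalone helpers. This is an
illustrative sketch of the behavior the assertions above pin down, not the
shim's actual code; decode_lease_value and networkd_option_245 are
hypothetical names.

    import os
    import socket
    import struct

    def decode_lease_value(value):
        # dhclient may write option 245 either as colon-separated hex pairs
        # ('62:4c:36:20') or as the four raw address bytes ('bL6 ', with
        # quotes backslash-escaped). Anything longer than 4 characters after
        # unescaping is treated as the hex form.
        unescaped = value.replace('\\', '')
        if len(unescaped) > 4:
            hex_pairs = [p.zfill(2) for p in unescaped.split(':')]
            packed = struct.pack('>L', int(''.join(hex_pairs), 16))
        else:
            packed = unescaped.encode('utf-8')
        return socket.inet_ntoa(packed)

    def networkd_option_245(leases_d):
        # Scan systemd-networkd lease files (KEY=VALUE lines) in sorted
        # order and return the first OPTION_245 value found, if any.
        for fname in sorted(os.listdir(leases_d)):
            with open(os.path.join(leases_d, fname)) as fp:
                for line in fp:
                    key, _, val = line.strip().partition('=')
                    if key == 'OPTION_245':
                        return val
        return None

    assert decode_lease_value('62:4c:36:20') == '98.76.54.32'
    assert decode_lease_value('dH:l') == '100.72.58.108'

The sorted() scan is what makes test_multiple_returns_first deterministic:
lease file '2' sorts before '9', so its OPTION_245 value wins.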
diff --git a/tests/unittests/test_datasource/test_cloudsigma.py b/tests/unittests/test_datasource/test_cloudsigma.py
index 5997102..e4c5990 100644
--- a/tests/unittests/test_datasource/test_cloudsigma.py
+++ b/tests/unittests/test_datasource/test_cloudsigma.py
@@ -6,7 +6,7 @@ from cloudinit.cs_utils import Cepko
 from cloudinit import sources
 from cloudinit.sources import DataSourceCloudSigma
 
-from .. import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 SERVER_CONTEXT = {
     "cpu": 1000,
diff --git a/tests/unittests/test_datasource/test_cloudstack.py b/tests/unittests/test_datasource/test_cloudstack.py
index e94aad6..96144b6 100644
--- a/tests/unittests/test_datasource/test_cloudstack.py
+++ b/tests/unittests/test_datasource/test_cloudstack.py
@@ -1,12 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import helpers
-from cloudinit.sources.DataSourceCloudStack import DataSourceCloudStack
+from cloudinit import util
+from cloudinit.sources.DataSourceCloudStack import (
+    DataSourceCloudStack, get_latest_lease)
 
-from ..helpers import TestCase, mock, ExitStack
+from cloudinit.tests.helpers import CiTestCase, ExitStack, mock
 
+import os
+import time
 
-class TestCloudStackPasswordFetching(TestCase):
+
+class TestCloudStackPasswordFetching(CiTestCase):
 
     def setUp(self):
         super(TestCloudStackPasswordFetching, self).setUp()
@@ -18,13 +23,16 @@ class TestCloudStackPasswordFetching(TestCase):
         default_gw = "192.201.20.0"
         get_latest_lease = mock.MagicMock(return_value=None)
         self.patches.enter_context(mock.patch(
-            'cloudinit.sources.DataSourceCloudStack.get_latest_lease',
-            get_latest_lease))
+            mod_name + '.get_latest_lease', get_latest_lease))
 
         get_default_gw = mock.MagicMock(return_value=default_gw)
         self.patches.enter_context(mock.patch(
-            'cloudinit.sources.DataSourceCloudStack.get_default_gateway',
-            get_default_gw))
+            mod_name + '.get_default_gateway', get_default_gw))
+
+        get_networkd_server_address = mock.MagicMock(return_value=None)
+        self.patches.enter_context(mock.patch(
+            mod_name + '.dhcp.networkd_get_option_from_leases',
+            get_networkd_server_address))
 
     def _set_password_server_response(self, response_string):
         subp = mock.MagicMock(return_value=(response_string, ''))
@@ -89,4 +97,72 @@ class TestCloudStackPasswordFetching(TestCase):
     def test_password_not_saved_if_bad_request(self):
         self._check_password_not_saved_for('bad_request')
 
+
+class TestGetLatestLease(CiTestCase):
+
+    def _populate_dir_list(self, bdir, files):
+        """populate_dir_list([(name, data), (name, data)])
+
+        writes files to bdir, and updates timestamps to ensure
+        that their mtime increases with each file."""
+
+        start = int(time.time())
+        for num, fname in enumerate(reversed(files)):
+            fpath = os.path.sep.join((bdir, fname))
+            util.write_file(fpath, fname.encode())
+            os.utime(fpath, (start - num, start - num))
+
+    def _pop_and_test(self, files, expected):
+        lease_d = self.tmp_dir()
+        self._populate_dir_list(lease_d, files)
+        self.assertEqual(self.tmp_path(expected, lease_d),
+                         get_latest_lease(lease_d))
+
+    def test_skips_dhcpv6_files(self):
+        """files started with dhclient6 should be skipped."""
+        expected = "dhclient.lease"
+        self._pop_and_test([expected, "dhclient6.lease"], expected)
+
+    def test_selects_dhclient_dot_files(self):
+        """files named dhclient.lease or dhclient.leases should be used.
+
+        Ubuntu names files dhclient.eth0.leases, dhclient6.leases and
+        sometimes dhclient.leases."""
+        self._pop_and_test(["dhclient.lease"], "dhclient.lease")
+        self._pop_and_test(["dhclient.leases"], "dhclient.leases")
+
+    def test_selects_dhclient_dash_files(self):
+        """files named dhclient-lease or dhclient-leases should be used.
+
+        RedHat/CentOS names files dhclient--eth0.lease (CentOS 7) or
+        dhclient-eth0.leases (CentOS 6).
+        """
+        self._pop_and_test(["dhclient-eth0.lease"], "dhclient-eth0.lease")
+        self._pop_and_test(["dhclient--eth0.lease"], "dhclient--eth0.lease")
+
+    def test_ignores_by_extension(self):
+        """only .lease or .leases file should be considered."""
+
+        self._pop_and_test(["dhclient.lease", "dhclient.lease.bk",
+                            "dhclient.lease-old", "dhclient.leaselease"],
+                           "dhclient.lease")
+
+    def test_selects_newest_matching(self):
+        """If multiple files match, the newest written should be used."""
+        lease_d = self.tmp_dir()
+        valid_1 = "dhclient.leases"
+        valid_2 = "dhclient.lease"
+        valid_1_path = self.tmp_path(valid_1, lease_d)
+        valid_2_path = self.tmp_path(valid_2, lease_d)
+
+        self._populate_dir_list(lease_d, [valid_1, valid_2])
+        self.assertEqual(valid_2_path, get_latest_lease(lease_d))
+
+        # now update mtime on valid_2 to be older than valid_1 and re-check.
+        mtime = int(os.path.getmtime(valid_1_path)) - 1
+        os.utime(valid_2_path, (mtime, mtime))
+
+        self.assertEqual(valid_1_path, get_latest_lease(lease_d))
+
+
 # vi: ts=4 expandtab
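The filename rules TestGetLatestLease pins down reduce to a short sketch
(illustrative only; latest_lease here is a hypothetical stand-in for
DataSourceCloudStack.get_latest_lease):

    import os

    def latest_lease(lease_d):
        # Consider only dhclient v4 lease files: names starting with
        # 'dhclient' but not 'dhclient6', ending in .lease or .leases.
        # Of those, return the most recently modified, or None.
        candidates = []
        for fname in os.listdir(lease_d):
            if fname.startswith('dhclient6'):
                continue
            if not fname.startswith('dhclient'):
                continue
            if not fname.endswith(('.lease', '.leases')):
                continue
            candidates.append(os.path.join(lease_d, fname))
        if not candidates:
            return None
        return max(candidates, key=os.path.getmtime)

This covers both the Ubuntu dot-names (dhclient.eth0.leases) and the
RedHat/CentOS dash-names (dhclient--eth0.lease) while excluding backup
files like dhclient.lease.bk by extension.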
diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py
index 413e87a..80b9c65 100644
--- a/tests/unittests/test_datasource/test_common.py
+++ b/tests/unittests/test_datasource/test_common.py
@@ -24,7 +24,7 @@ from cloudinit.sources import (
 )
 from cloudinit.sources import DataSourceNone as DSNone
 
-from .. import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 DEFAULT_LOCAL = [
     Azure.DataSourceAzure,
@@ -35,6 +35,7 @@ DEFAULT_LOCAL = [
     OpenNebula.DataSourceOpenNebula,
     OVF.DataSourceOVF,
     SmartOS.DataSourceSmartOS,
+    Ec2.DataSourceEc2Local,
 ]
 
 DEFAULT_NETWORK = [
diff --git a/tests/unittests/test_datasource/test_configdrive.py b/tests/unittests/test_datasource/test_configdrive.py
index 337be66..237c189 100644
--- a/tests/unittests/test_datasource/test_configdrive.py
+++ b/tests/unittests/test_datasource/test_configdrive.py
@@ -15,7 +15,7 @@ from cloudinit.sources import DataSourceConfigDrive as ds
 from cloudinit.sources.helpers import openstack
 from cloudinit import util
 
-from ..helpers import TestCase, ExitStack, mock
+from cloudinit.tests.helpers import TestCase, ExitStack, mock
 
 
 PUBKEY = u'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460\n'
diff --git a/tests/unittests/test_datasource/test_digitalocean.py b/tests/unittests/test_datasource/test_digitalocean.py
index e97a679..f264f36 100644
--- a/tests/unittests/test_datasource/test_digitalocean.py
+++ b/tests/unittests/test_datasource/test_digitalocean.py
@@ -13,7 +13,7 @@ from cloudinit import settings
 from cloudinit.sources import DataSourceDigitalOcean
 from cloudinit.sources.helpers import digitalocean
 
-from ..helpers import mock, TestCase
+from cloudinit.tests.helpers import mock, TestCase
 
 DO_MULTIPLE_KEYS = ["ssh-rsa AAAAB3NzaC1yc2EAAAA... test1@xxxxx",
                     "ssh-rsa AAAAB3NzaC1yc2EAAAA... test2@xxxxx"]
diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
index 12230ae..a7301db 100644
--- a/tests/unittests/test_datasource/test_ec2.py
+++ b/tests/unittests/test_datasource/test_ec2.py
@@ -1,42 +1,75 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import copy
 import httpretty
 import mock
 
-from .. import helpers as test_helpers
 from cloudinit import helpers
 from cloudinit.sources import DataSourceEc2 as ec2
+from cloudinit.tests import helpers as test_helpers
 
 
-# collected from api version 2009-04-04/ with
+# collected from api version 2016-09-02/ with
 # python3 -c 'import json
 # from cloudinit.ec2_utils import get_instance_metadata as gm
-# print(json.dumps(gm("2009-04-04"), indent=1, sort_keys=True))'
+# print(json.dumps(gm("2016-09-02"), indent=1, sort_keys=True))'
 DEFAULT_METADATA = {
-    "ami-id": "ami-80861296",
+    "ami-id": "ami-8b92b4ee",
     "ami-launch-index": "0",
     "ami-manifest-path": "(unknown)",
     "block-device-mapping": {"ami": "/dev/sda1", "root": "/dev/sda1"},
-    "hostname": "ip-10-0-0-149",
+    "hostname": "ip-172-31-31-158.us-east-2.compute.internal",
     "instance-action": "none",
-    "instance-id": "i-0052913950685138c",
-    "instance-type": "t2.micro",
-    "local-hostname": "ip-10-0-0-149",
-    "local-ipv4": "10.0.0.149",
-    "placement": {"availability-zone": "us-east-1b"},
+    "instance-id": "i-0a33f80f09c96477f",
+    "instance-type": "t2.small",
+    "local-hostname": "ip-172-3-3-15.us-east-2.compute.internal",
+    "local-ipv4": "172.3.3.15",
+    "mac": "06:17:04:d7:26:09",
+    "metrics": {"vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"},
+    "network": {
+        "interfaces": {
+            "macs": {
+                "06:17:04:d7:26:09": {
+                    "device-number": "0",
+                    "interface-id": "eni-e44ef49e",
+                    "ipv4-associations": {"13.59.77.202": "172.3.3.15"},
+                    "ipv6s": "2600:1f16:aeb:b20b:9d87:a4af:5cc9:73dc",
+                    "local-hostname": ("ip-172-3-3-15.us-east-2."
+                                       "compute.internal"),
+                    "local-ipv4s": "172.3.3.15",
+                    "mac": "06:17:04:d7:26:09",
+                    "owner-id": "950047163771",
+                    "public-hostname": ("ec2-13-59-77-202.us-east-2."
+                                        "compute.amazonaws.com"),
+                    "public-ipv4s": "13.59.77.202",
+                    "security-group-ids": "sg-5a61d333",
+                    "security-groups": "wide-open",
+                    "subnet-id": "subnet-20b8565b",
+                    "subnet-ipv4-cidr-block": "172.31.16.0/20",
+                    "subnet-ipv6-cidr-blocks": "2600:1f16:aeb:b20b::/64",
+                    "vpc-id": "vpc-87e72bee",
+                    "vpc-ipv4-cidr-block": "172.31.0.0/16",
+                    "vpc-ipv4-cidr-blocks": "172.31.0.0/16",
+                    "vpc-ipv6-cidr-blocks": "2600:1f16:aeb:b200::/56"
+                }
+            }
+        }
+    },
+    "placement": {"availability-zone": "us-east-2b"},
     "profile": "default-hvm",
-    "public-hostname": "",
-    "public-ipv4": "107.23.188.247",
+    "public-hostname": "ec2-13-59-77-202.us-east-2.compute.amazonaws.com",
+    "public-ipv4": "13.59.77.202",
     "public-keys": {"brickies": ["ssh-rsa AAAAB3Nz....w== brickies"]},
-    "reservation-id": "r-00a2c173fb5782a08",
-    "security-groups": "wide-open"
+    "reservation-id": "r-01efbc9996bac1bd6",
+    "security-groups": "my-wide-open",
+    "services": {"domain": "amazonaws.com", "partition": "aws"}
 }
 
 
 def _register_ssh_keys(rfunc, base_url, keys_data):
     """handle ssh key inconsistencies.
 
-    public-keys in the ec2 metadata is inconsistently formatted compared
+    public-keys in the ec2 metadata is inconsistently formatted compared
     to other entries.
     Given keys_data of {name1: pubkey1, name2: pubkey2}
 
@@ -83,6 +116,9 @@ def register_mock_metaserver(base_url, data):
     In the index, references to lists or dictionaries have a trailing /.
     """
     def register_helper(register, base_url, body):
+        if not isinstance(base_url, str):
+            register(base_url, body)
+            return
         base_url = base_url.rstrip("/")
         if isinstance(body, str):
             register(base_url, body)
@@ -105,7 +141,7 @@ def register_mock_metaserver(base_url, data):
             register(base_url, '\n'.join(vals) + '\n')
             register(base_url + '/', '\n'.join(vals) + '\n')
         elif body is None:
-            register(base_url, 'not found', status_code=404)
+            register(base_url, 'not found', status=404)
 
     def myreg(*argc, **kwargs):
         # print("register_url(%s, %s)" % (argc, kwargs))
@@ -115,6 +151,8 @@ def register_mock_metaserver(base_url, data):
 
 
 class TestEc2(test_helpers.HttprettyTestCase):
+    with_logs = True
+
     valid_platform_data = {
         'uuid': 'ec212f79-87d1-2f1d-588f-d86dc0fd5412',
         'uuid_source': 'dmi',
@@ -123,48 +161,91 @@ class TestEc2(test_helpers.HttprettyTestCase):
 
     def setUp(self):
         super(TestEc2, self).setUp()
-        self.metadata_addr = ec2.DataSourceEc2.metadata_urls[0]
-        self.api_ver = '2009-04-04'
-
-    @property
-    def metadata_url(self):
-        return '/'.join([self.metadata_addr, self.api_ver, 'meta-data', ''])
+        self.datasource = ec2.DataSourceEc2
+        self.metadata_addr = self.datasource.metadata_urls[0]
 
-    @property
-    def userdata_url(self):
-        return '/'.join([self.metadata_addr, self.api_ver, 'user-data'])
+    def data_url(self, version):
+        """Return a metadata url based on the version provided."""
+        return '/'.join([self.metadata_addr, version, 'meta-data', ''])
 
     def _patch_add_cleanup(self, mpath, *args, **kwargs):
         p = mock.patch(mpath, *args, **kwargs)
         p.start()
         self.addCleanup(p.stop)
 
-    def _setup_ds(self, sys_cfg, platform_data, md, ud=None):
+    def _setup_ds(self, sys_cfg, platform_data, md, md_version=None):
+        self.uris = []
         distro = {}
         paths = helpers.Paths({})
         if sys_cfg is None:
             sys_cfg = {}
-        ds = ec2.DataSourceEc2(sys_cfg=sys_cfg, distro=distro, paths=paths)
+        ds = self.datasource(sys_cfg=sys_cfg, distro=distro, paths=paths)
+        if not md_version:
+            md_version = ds.min_metadata_version
         if platform_data is not None:
             self._patch_add_cleanup(
                 "cloudinit.sources.DataSourceEc2._collect_platform_data",
                 return_value=platform_data)
 
         if md:
-            register_mock_metaserver(self.metadata_url, md)
-            register_mock_metaserver(self.userdata_url, ud)
-
+            httpretty.HTTPretty.allow_net_connect = False
+            all_versions = (
+                [ds.min_metadata_version] + ds.extended_metadata_versions)
+            for version in all_versions:
+                metadata_url = self.data_url(version)
+                if version == md_version:
+                    # Register all metadata for desired version
+                    register_mock_metaserver(metadata_url, md)
+                else:
+                    instance_id_url = metadata_url + 'instance-id'
+                    if version == ds.min_metadata_version:
+                        # Add min_metadata_version service availability check
+                        register_mock_metaserver(
+                            instance_id_url, DEFAULT_METADATA['instance-id'])
+                    else:
+                        # Register 404s for all unrequested extended versions
+                        register_mock_metaserver(instance_id_url, None)
         return ds
 
     @httpretty.activate
-    def test_valid_platform_with_strict_true(self):
+    def test_network_config_property_returns_version_1_network_data(self):
+        """network_config property returns network version 1 for metadata."""
+        ds = self._setup_ds(
+            platform_data=self.valid_platform_data,
+            sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
+            md=DEFAULT_METADATA)
+        ds.get_data()
+        mac1 = '06:17:04:d7:26:09'  # Defined in DEFAULT_METADATA
+        expected = {'version': 1, 'config': [
+            {'mac_address': '06:17:04:d7:26:09', 'name': 'eth9',
+             'subnets': [{'type': 'dhcp4'}, {'type': 'dhcp6'}],
+             'type': 'physical'}]}
+        patch_path = (
+            'cloudinit.sources.DataSourceEc2.net.get_interfaces_by_mac')
+        with mock.patch(patch_path) as m_get_interfaces_by_mac:
+            m_get_interfaces_by_mac.return_value = {mac1: 'eth9'}
+            self.assertEqual(expected, ds.network_config)
+
+    def test_network_config_property_is_cached_in_datasource(self):
+        """network_config property is cached in DataSourceEc2."""
+        ds = self._setup_ds(
+            platform_data=self.valid_platform_data,
+            sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
+            md=DEFAULT_METADATA)
+        ds._network_config = {'cached': 'data'}
+        self.assertEqual({'cached': 'data'}, ds.network_config)
+
+    @httpretty.activate
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_valid_platform_with_strict_true(self, m_dhcp):
         """Valid platform data should return true with strict_id true."""
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
             md=DEFAULT_METADATA)
         ret = ds.get_data()
-        self.assertEqual(True, ret)
+        self.assertTrue(ret)
+        self.assertEqual(0, m_dhcp.call_count)
 
     @httpretty.activate
     def test_valid_platform_with_strict_false(self):
@@ -174,7 +255,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
             md=DEFAULT_METADATA)
         ret = ds.get_data()
-        self.assertEqual(True, ret)
+        self.assertTrue(ret)
 
     @httpretty.activate
     def test_unknown_platform_with_strict_true(self):
@@ -185,7 +266,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
             md=DEFAULT_METADATA)
         ret = ds.get_data()
-        self.assertEqual(False, ret)
+        self.assertFalse(ret)
 
     @httpretty.activate
     def test_unknown_platform_with_strict_false(self):
@@ -196,7 +277,146 @@ class TestEc2(test_helpers.HttprettyTestCase):
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
             md=DEFAULT_METADATA)
         ret = ds.get_data()
-        self.assertEqual(True, ret)
+        self.assertTrue(ret)
+
+    def test_ec2_local_returns_false_on_non_aws(self):
+        """DataSourceEc2Local returns False when platform is not AWS."""
+        self.datasource = ec2.DataSourceEc2Local
+        ds = self._setup_ds(
+            platform_data=self.valid_platform_data,
+            sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
+            md=DEFAULT_METADATA)
+        platform_attrs = [
+            attr for attr in ec2.Platforms.__dict__.keys()
+            if not attr.startswith('__')]
+        for attr_name in platform_attrs:
+            platform_name = getattr(ec2.Platforms, attr_name)
+            if platform_name != 'AWS':
+                ds._cloud_platform = platform_name
+                ret = ds.get_data()
+                self.assertFalse(ret)
+                message = (
+                    "Local Ec2 mode only supported on ('AWS',),"
+                    ' not {0}'.format(platform_name))
+                self.assertIn(message, self.logs.getvalue())
+
+    @httpretty.activate
+    @mock.patch('cloudinit.sources.DataSourceEc2.util.is_FreeBSD')
+    def test_ec2_local_returns_false_on_bsd(self, m_is_freebsd):
+        """DataSourceEc2Local returns False on BSD.
+
+        FreeBSD's dhclient does not support running with -sf in a sandbox.
+        """
+        m_is_freebsd.return_value = True
+        self.datasource = ec2.DataSourceEc2Local
+        ds = self._setup_ds(
+            platform_data=self.valid_platform_data,
+            sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
+            md=DEFAULT_METADATA)
+        ret = ds.get_data()
+        self.assertFalse(ret)
+        self.assertIn(
+            "FreeBSD doesn't support running dhclient with -sf",
+            self.logs.getvalue())
+
+    @httpretty.activate
+    @mock.patch('cloudinit.net.EphemeralIPv4Network')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    @mock.patch('cloudinit.sources.DataSourceEc2.util.is_FreeBSD')
+    def test_ec2_local_performs_dhcp_on_non_bsd(self, m_is_bsd, m_dhcp, m_net):
+        """Ec2Local returns True for valid platform data on non-BSD with dhcp.
+
+        DataSourceEc2Local will set up an initial IPv4 network via dhcp
+        discovery. Then the metadata service is crawled for more network
+        config info.
+        When the platform data is valid, return True.
+        """
+
+        m_is_bsd.return_value = False
+        m_dhcp.return_value = [{
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'broadcast-address': '192.168.2.255'}]
+        self.datasource = ec2.DataSourceEc2Local
+        ds = self._setup_ds(
+            platform_data=self.valid_platform_data,
+            sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
+            md=DEFAULT_METADATA)
+
+        ret = ds.get_data()
+        self.assertTrue(ret)
+        m_dhcp.assert_called_once_with()
+        m_net.assert_called_once_with(
+            broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
+            prefix_or_mask='255.255.255.0', router='192.168.2.1')
+        self.assertIn('Crawl of metadata service took', self.logs.getvalue())
+
+
+class TestConvertEc2MetadataNetworkConfig(test_helpers.CiTestCase):
+
+    def setUp(self):
+        super(TestConvertEc2MetadataNetworkConfig, self).setUp()
+        self.mac1 = '06:17:04:d7:26:09'
+        self.network_metadata = {
+            'interfaces': {'macs': {
+                self.mac1: {'public-ipv4s': '172.31.2.16'}}}}
+
+    def test_convert_ec2_metadata_network_config_skips_absent_macs(self):
+        """Any mac absent from metadata is skipped by network config."""
+        macs_to_nics = {self.mac1: 'eth9', 'DE:AD:BE:EF:FF:FF': 'virtualnic2'}
+
+        # DE:AD:BE:EF:FF:FF represented by OS but not in metadata
+        expected = {'version': 1, 'config': [
+            {'mac_address': self.mac1, 'type': 'physical',
+             'name': 'eth9', 'subnets': [{'type': 'dhcp4'}]}]}
+        self.assertEqual(
+            expected,
+            ec2.convert_ec2_metadata_network_config(
+                self.network_metadata, macs_to_nics))
+
+    def test_convert_ec2_metadata_network_config_handles_only_dhcp6(self):
+        """Config dhcp6 when ipv6s is in metadata for a mac."""
+        macs_to_nics = {self.mac1: 'eth9'}
+        network_metadata_ipv6 = copy.deepcopy(self.network_metadata)
+        nic1_metadata = (
+            network_metadata_ipv6['interfaces']['macs'][self.mac1])
+        nic1_metadata['ipv6s'] = '2620:0:1009:fd00:e442:c88d:c04d:dc85/64'
+        nic1_metadata.pop('public-ipv4s')
+        expected = {'version': 1, 'config': [
+            {'mac_address': self.mac1, 'type': 'physical',
+             'name': 'eth9', 'subnets': [{'type': 'dhcp6'}]}]}
+        self.assertEqual(
+            expected,
+            ec2.convert_ec2_metadata_network_config(
+                network_metadata_ipv6, macs_to_nics))
+
+    def test_convert_ec2_metadata_network_config_handles_dhcp4_and_dhcp6(self):
+        """Config both dhcp4 and dhcp6 when both vpc-ipv6 and ipv4 exists."""
+        macs_to_nics = {self.mac1: 'eth9'}
+        network_metadata_both = copy.deepcopy(self.network_metadata)
+        nic1_metadata = (
+            network_metadata_both['interfaces']['macs'][self.mac1])
+        nic1_metadata['ipv6s'] = '2620:0:1009:fd00:e442:c88d:c04d:dc85/64'
+        expected = {'version': 1, 'config': [
+            {'mac_address': self.mac1, 'type': 'physical',
+             'name': 'eth9',
+             'subnets': [{'type': 'dhcp4'}, {'type': 'dhcp6'}]}]}
+        self.assertEqual(
+            expected,
+            ec2.convert_ec2_metadata_network_config(
+                network_metadata_both, macs_to_nics))
 
+    def test_convert_ec2_metadata_gets_macs_from_get_interfaces_by_mac(self):
+        """Convert Ec2 Metadata calls get_interfaces_by_mac by default."""
+        expected = {'version': 1, 'config': [
+            {'mac_address': self.mac1, 'type': 'physical',
+             'name': 'eth9',
+             'subnets': [{'type': 'dhcp4'}]}]}
+        patch_path = (
+            'cloudinit.sources.DataSourceEc2.net.get_interfaces_by_mac')
+        with mock.patch(patch_path) as m_get_interfaces_by_mac:
+            m_get_interfaces_by_mac.return_value = {self.mac1: 'eth9'}
+            self.assertEqual(
+                expected,
+                ec2.convert_ec2_metadata_network_config(self.network_metadata))
 
 # vi: ts=4 expandtab
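The conversion behavior TestConvertEc2MetadataNetworkConfig pins down can be
summarized in a sketch like the following. It is illustrative only, not the
converter's actual code; sketch_convert is a hypothetical name and the real
entry point is ec2.convert_ec2_metadata_network_config:

    def sketch_convert(network_md, macs_to_nics):
        # Emit a version 1 network config with one physical entry per OS
        # interface whose mac also appears in metadata; a public-ipv4s
        # entry yields a dhcp4 subnet and an ipv6s entry yields dhcp6.
        config = []
        macs_metadata = network_md['interfaces']['macs']
        for mac, nic_name in macs_to_nics.items():
            nic_md = macs_metadata.get(mac)
            if nic_md is None:
                continue  # present on the OS but absent from metadata
            subnets = []
            if nic_md.get('public-ipv4s'):
                subnets.append({'type': 'dhcp4'})
            if nic_md.get('ipv6s'):
                subnets.append({'type': 'dhcp6'})
            config.append({'type': 'physical', 'name': nic_name,
                           'mac_address': mac, 'subnets': subnets})
        return {'version': 1, 'config': config}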
diff --git a/tests/unittests/test_datasource/test_gce.py b/tests/unittests/test_datasource/test_gce.py
index ad608be..d399ae7 100644
--- a/tests/unittests/test_datasource/test_gce.py
+++ b/tests/unittests/test_datasource/test_gce.py
@@ -15,7 +15,7 @@ from cloudinit import helpers
 from cloudinit import settings
 from cloudinit.sources import DataSourceGCE
 
-from .. import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 
 GCE_META = {
@@ -23,7 +23,8 @@ GCE_META = {
     'instance/zone': 'foo/bar',
     'project/attributes/sshKeys': 'user:ssh-rsa AA2..+aRD0fyVw== root@server',
     'instance/hostname': 'server.project-foo.local',
-    'instance/attributes/user-data': b'/bin/echo foo\n',
+    # UnicodeDecodeError below if set to ds.userdata instead of userdata_raw
+    'instance/attributes/user-data': b'/bin/echo \xff\n',
 }
 
 GCE_META_PARTIAL = {
diff --git a/tests/unittests/test_datasource/test_maas.py b/tests/unittests/test_datasource/test_maas.py
index c1911bf..289c6a4 100644
--- a/tests/unittests/test_datasource/test_maas.py
+++ b/tests/unittests/test_datasource/test_maas.py
@@ -8,7 +8,7 @@ import yaml
 
 from cloudinit.sources import DataSourceMAAS
 from cloudinit import url_helper
-from ..helpers import TestCase, populate_dir
+from cloudinit.tests.helpers import TestCase, populate_dir
 
 try:
     from unittest import mock
diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
index ff29439..fea9156 100644
--- a/tests/unittests/test_datasource/test_nocloud.py
+++ b/tests/unittests/test_datasource/test_nocloud.py
@@ -3,7 +3,7 @@
 from cloudinit import helpers
 from cloudinit.sources import DataSourceNoCloud
 from cloudinit import util
-from ..helpers import TestCase, populate_dir, mock, ExitStack
+from cloudinit.tests.helpers import TestCase, populate_dir, mock, ExitStack
 
 import os
 import shutil
diff --git a/tests/unittests/test_datasource/test_opennebula.py b/tests/unittests/test_datasource/test_opennebula.py
index b0f8e43..e7d5569 100644
--- a/tests/unittests/test_datasource/test_opennebula.py
+++ b/tests/unittests/test_datasource/test_opennebula.py
@@ -3,7 +3,7 @@
 from cloudinit import helpers
 from cloudinit.sources import DataSourceOpenNebula as ds
 from cloudinit import util
-from ..helpers import mock, populate_dir, TestCase
+from cloudinit.tests.helpers import mock, populate_dir, TestCase
 
 import os
 import pwd
diff --git a/tests/unittests/test_datasource/test_openstack.py b/tests/unittests/test_datasource/test_openstack.py
index c2905d1..ed367e0 100644
--- a/tests/unittests/test_datasource/test_openstack.py
+++ b/tests/unittests/test_datasource/test_openstack.py
@@ -9,7 +9,7 @@ import httpretty as hp
 import json
 import re
 
-from .. import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 from six.moves.urllib.parse import urlparse
 from six import StringIO
@@ -57,6 +57,8 @@ OS_FILES = {
     'openstack/content/0000': CONTENT_0,
     'openstack/content/0001': CONTENT_1,
     'openstack/latest/meta_data.json': json.dumps(OSTACK_META),
+    'openstack/latest/network_data.json': json.dumps(
+        {'links': [], 'networks': [], 'services': []}),
     'openstack/latest/user_data': USER_DATA,
     'openstack/latest/vendor_data.json': json.dumps(VENDOR_DATA),
 }
@@ -68,6 +70,7 @@ EC2_VERSIONS = [
 ]
 
 
+# TODO _register_uris should leverage test_ec2.register_mock_metaserver.
 def _register_uris(version, ec2_files, ec2_meta, os_files):
     """Registers a set of url patterns into httpretty that will mimic the
     same data returned by the openstack metadata service (and ec2 service)."""
diff --git a/tests/unittests/test_datasource/test_ovf.py b/tests/unittests/test_datasource/test_ovf.py
index 477cf8e..700da86 100644
--- a/tests/unittests/test_datasource/test_ovf.py
+++ b/tests/unittests/test_datasource/test_ovf.py
@@ -5,8 +5,9 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import base64
+from collections import OrderedDict
 
-from .. import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 from cloudinit.sources import DataSourceOVF as dsovf
 
@@ -70,4 +71,167 @@ class TestReadOvfEnv(test_helpers.TestCase):
         self.assertEqual({'password': "passw0rd"}, cfg)
         self.assertIsNone(ud)
 
+
+class TestTransportIso9660(test_helpers.CiTestCase):
+
+    def setUp(self):
+        super(TestTransportIso9660, self).setUp()
+        self.add_patch('cloudinit.util.find_devs_with',
+                       'm_find_devs_with')
+        self.add_patch('cloudinit.util.mounts', 'm_mounts')
+        self.add_patch('cloudinit.util.mount_cb', 'm_mount_cb')
+        self.add_patch('cloudinit.sources.DataSourceOVF.get_ovf_env',
+                       'm_get_ovf_env')
+        self.m_get_ovf_env.return_value = ('myfile', 'mycontent')
+
+    def test_find_already_mounted(self):
+        """Check we call get_ovf_env from on matching mounted devices"""
+        mounts = {
+            '/dev/sr9': {
+                'fstype': 'iso9660',
+                'mountpoint': 'wark/media/sr9',
+                'opts': 'ro',
+            }
+        }
+        self.m_mounts.return_value = mounts
+
+        (contents, fullp, fname) = dsovf.transport_iso9660()
+        self.assertEqual("mycontent", contents)
+        self.assertEqual("/dev/sr9", fullp)
+        self.assertEqual("myfile", fname)
+
+    def test_find_already_mounted_skips_non_iso9660(self):
+        """Check we call get_ovf_env ignoring non iso9660"""
+        mounts = {
+            '/dev/xvdb': {
+                'fstype': 'vfat',
+                'mountpoint': 'wark/foobar',
+                'opts': 'defaults,noatime',
+            },
+            '/dev/xvdc': {
+                'fstype': 'iso9660',
+                'mountpoint': 'wark/media/sr9',
+                'opts': 'ro',
+            }
+        }
+        # We use an OrderedDict here to ensure we check xvdb before xvdc,
+        # as we're not mocking the regex matching. Since the matching
+        # xvdc entry is the one returned, we can be reasonably sure that
+        # the earlier entry which fails to match was skipped.
+        self.m_mounts.return_value = (
+            OrderedDict(sorted(mounts.items(), key=lambda t: t[0])))
+
+        (contents, fullp, fname) = dsovf.transport_iso9660()
+        self.assertEqual("mycontent", contents)
+        self.assertEqual("/dev/xvdc", fullp)
+        self.assertEqual("myfile", fname)
+
+    def test_find_already_mounted_matches_kname(self):
+        """Check we dont regex match on basename of the device"""
+        mounts = {
+            '/dev/foo/bar/xvdc': {
+                'fstype': 'iso9660',
+                'mountpoint': 'wark/media/sr9',
+                'opts': 'ro',
+            }
+        }
+        # No match expected: matching must not fall back to the basename.
+        self.m_mounts.return_value = mounts
+
+        (contents, fullp, fname) = dsovf.transport_iso9660()
+        self.assertEqual(False, contents)
+        self.assertIsNone(fullp)
+        self.assertIsNone(fname)
+
+    def test_mount_cb_called_on_blkdevs_with_iso9660(self):
+        """Check we call mount_cb on blockdevs with iso9660 only"""
+        self.m_mounts.return_value = {}
+        self.m_find_devs_with.return_value = ['/dev/sr0']
+        self.m_mount_cb.return_value = ("myfile", "mycontent")
+
+        (contents, fullp, fname) = dsovf.transport_iso9660()
+
+        self.m_mount_cb.assert_called_with(
+            "/dev/sr0", dsovf.get_ovf_env, mtype="iso9660")
+        self.assertEqual("mycontent", contents)
+        self.assertEqual("/dev/sr0", fullp)
+        self.assertEqual("myfile", fname)
+
+    def test_mount_cb_called_on_blkdevs_with_iso9660_check_regex(self):
+        """Check we call mount_cb on blockdevs with iso9660 and match regex"""
+        self.m_mounts.return_value = {}
+        self.m_find_devs_with.return_value = [
+            '/dev/abc', '/dev/my-cdrom', '/dev/sr0']
+        self.m_mount_cb.return_value = ("myfile", "mycontent")
+
+        (contents, fullp, fname) = dsovf.transport_iso9660()
+
+        self.m_mount_cb.assert_called_with(
+            "/dev/sr0", dsovf.get_ovf_env, mtype="iso9660")
+        self.assertEqual("mycontent", contents)
+        self.assertEqual("/dev/sr0", fullp)
+        self.assertEqual("myfile", fname)
+
+    def test_mount_cb_not_called_no_matches(self):
+        """Check we don't call mount_cb if nothing matches"""
+        self.m_mounts.return_value = {}
+        self.m_find_devs_with.return_value = ['/dev/vg/myovf']
+
+        (contents, fullp, fname) = dsovf.transport_iso9660()
+
+        self.assertEqual(0, self.m_mount_cb.call_count)
+        self.assertEqual(False, contents)
+        self.assertIsNone(fullp)
+        self.assertIsNone(fname)
+
+    def test_mount_cb_called_require_iso_false(self):
+        """Check we call mount_cb on blockdevs with require_iso=False"""
+        self.m_mounts.return_value = {}
+        self.m_find_devs_with.return_value = ['/dev/xvdz']
+        self.m_mount_cb.return_value = ("myfile", "mycontent")
+
+        (contents, fullp, fname) = dsovf.transport_iso9660(require_iso=False)
+
+        self.m_mount_cb.assert_called_with(
+            "/dev/xvdz", dsovf.get_ovf_env, mtype=None)
+        self.assertEqual("mycontent", contents)
+        self.assertEqual("/dev/xvdz", fullp)
+        self.assertEqual("myfile", fname)
+
+    def test_maybe_cdrom_device_none(self):
+        """Test maybe_cdrom_device returns False for none/empty input"""
+        self.assertFalse(dsovf.maybe_cdrom_device(None))
+        self.assertFalse(dsovf.maybe_cdrom_device(''))
+
+    def test_maybe_cdrom_device_non_string_exception(self):
+        """Test maybe_cdrom_device raises ValueError on non-string types"""
+        with self.assertRaises(ValueError):
+            dsovf.maybe_cdrom_device({'a': 'eleven'})
+
+    def test_maybe_cdrom_device_false_on_multi_dir_paths(self):
+        """Test maybe_cdrom_device is false on /dev[/.*]/* paths"""
+        self.assertFalse(dsovf.maybe_cdrom_device('/dev/foo/sr0'))
+        self.assertFalse(dsovf.maybe_cdrom_device('foo/sr0'))
+        self.assertFalse(dsovf.maybe_cdrom_device('../foo/sr0'))
+
+    def test_maybe_cdrom_device_true_on_hd_partitions(self):
+        """Test maybe_cdrom_device is false on /dev/hd[a-z][0-9]+ paths"""
+        self.assertTrue(dsovf.maybe_cdrom_device('/dev/hda1'))
+        self.assertTrue(dsovf.maybe_cdrom_device('hdz9'))
+
+    def test_maybe_cdrom_device_true_on_valid_relative_paths(self):
+        """Test maybe_cdrom_device normalizes paths"""
+        self.assertTrue(dsovf.maybe_cdrom_device('/dev/wark/../sr9'))
+        self.assertTrue(dsovf.maybe_cdrom_device('///sr0'))
+        self.assertTrue(dsovf.maybe_cdrom_device('/sr0'))
+        self.assertTrue(dsovf.maybe_cdrom_device('//dev//hda'))
+
+    def test_maybe_cdrom_device_true_on_xvd_partitions(self):
+        """Test maybe_cdrom_device returns true on xvd*"""
+        self.assertTrue(dsovf.maybe_cdrom_device('/dev/xvda'))
+        self.assertTrue(dsovf.maybe_cdrom_device('/dev/xvda1'))
+        self.assertTrue(dsovf.maybe_cdrom_device('xvdza1'))
+
 # vi: ts=4 expandtab
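Taken together, the maybe_cdrom_device cases above describe roughly this
logic. The sketch below is illustrative, written only from the assertions in
this test class, and is not the datasource's actual implementation:

    import os
    import re

    def sketch_maybe_cdrom_device(devname):
        # Accept either a kname ('sr0') or a /dev path; reject anything
        # that normalizes to a path outside of /dev.
        if not devname:
            return False
        if not isinstance(devname, str):
            raise ValueError('unexpected input for devname: %s' % devname)
        devname = os.path.normpath(devname)  # resolve '..' and '//' runs
        if devname.startswith('/dev/'):
            devname = devname[len('/dev/'):]  # '/dev/sr0' -> 'sr0'
        if devname.startswith('/'):
            devname = devname.split('/')[-1]  # '/sr0', '//dev//hda' -> kname
        elif '/' in devname:
            return False  # e.g. 'foo/sr0' is not a bare device name
        # device names that may carry iso9660: sr*, hd[a-z]*, xvd*
        return re.match(r'sr[0-9]+|hd[a-z]|xvd.*', devname) is not None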
diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
index 65d83ad..436df9e 100644
--- a/tests/unittests/test_datasource/test_scaleway.py
+++ b/tests/unittests/test_datasource/test_scaleway.py
@@ -9,7 +9,7 @@ from cloudinit import helpers
 from cloudinit import settings
 from cloudinit.sources import DataSourceScaleway
 
-from ..helpers import mock, HttprettyTestCase, TestCase
+from cloudinit.tests.helpers import mock, HttprettyTestCase, TestCase
 
 
 class DataResponses(object):
diff --git a/tests/unittests/test_datasource/test_smartos.py b/tests/unittests/test_datasource/test_smartos.py
index e3c99bb..933d5b6 100644
--- a/tests/unittests/test_datasource/test_smartos.py
+++ b/tests/unittests/test_datasource/test_smartos.py
@@ -33,7 +33,7 @@ import six
 from cloudinit import helpers as c_helpers
 from cloudinit.util import b64e
 
-from ..helpers import mock, FilesystemMockingTestCase, TestCase
+from cloudinit.tests.helpers import mock, FilesystemMockingTestCase, TestCase
 
 SDC_NICS = json.loads("""
 [
diff --git a/tests/unittests/test_distros/__init__.py b/tests/unittests/test_distros/__init__.py
index e69de29..5394aa5 100644
--- a/tests/unittests/test_distros/__init__.py
+++ b/tests/unittests/test_distros/__init__.py
@@ -0,0 +1,21 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+import copy
+
+from cloudinit import distros
+from cloudinit import helpers
+from cloudinit import settings
+
+
+def _get_distro(dtype, system_info=None):
+    """Return a Distro class of distro 'dtype'.
+
+    cfg is format of CFG_BUILTIN['system_info'].
+
+    example: _get_distro("debian")
+    """
+    if system_info is None:
+        system_info = copy.deepcopy(settings.CFG_BUILTIN['system_info'])
+    system_info['distro'] = dtype
+    paths = helpers.Paths(system_info['paths'])
+    distro_cls = distros.fetch(dtype)
+    return distro_cls(dtype, system_info, paths)
diff --git a/tests/unittests/test_distros/test_arch.py b/tests/unittests/test_distros/test_arch.py
new file mode 100644
index 0000000..a95ba3b
--- /dev/null
+++ b/tests/unittests/test_distros/test_arch.py
@@ -0,0 +1,45 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.distros.arch import _render_network
+from cloudinit import util
+
+from cloudinit.tests.helpers import (CiTestCase, dir2dict)
+
+from . import _get_distro
+
+
+class TestArch(CiTestCase):
+
+    def test_get_distro(self):
+        distro = _get_distro("arch")
+        hostname = "myhostname"
+        hostfile = self.tmp_path("hostfile")
+        distro._write_hostname(hostname, hostfile)
+        self.assertEqual(hostname + "\n", util.load_file(hostfile))
+
+
+class TestRenderNetwork(CiTestCase):
+    def test_basic_static(self):
+        """Just the most basic static config.
+
+        Note that 'lo' should not be rendered as an interface."""
+        entries = {'eth0': {'auto': True,
+                            'dns-nameservers': ['8.8.8.8'],
+                            'bootproto': 'static',
+                            'address': '10.0.0.2',
+                            'gateway': '10.0.0.1',
+                            'netmask': '255.255.255.0'},
+                   'lo': {'auto': True}}
+        target = self.tmp_dir()
+        devs = _render_network(entries, target=target)
+        files = dir2dict(target, prefix=target)
+        self.assertEqual(['eth0'], devs)
+        self.assertEqual(
+            {'/etc/netctl/eth0': '\n'.join([
+                "Address=10.0.0.2/255.255.255.0",
+                "Connection=ethernet",
+                "DNS=('8.8.8.8')",
+                "Gateway=10.0.0.1",
+                "IP=static",
+                "Interface=eth0", ""]),
+             '/etc/resolv.conf': 'nameserver 8.8.8.8\n'}, files)
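The expected /etc/netctl/eth0 contents in test_basic_static are simply the
entry's fields rendered as sorted KEY=value lines. A sketch of that shape
(netctl_profile is a hypothetical helper covering only the single-nameserver
static case shown above):

    def netctl_profile(name, entry):
        # Render one /etc/netctl/<name> profile for a static entry like
        # the 'eth0' dict above; keys are emitted in sorted order.
        fields = {
            'Address': '%s/%s' % (entry['address'], entry['netmask']),
            'Connection': 'ethernet',
            'DNS': "('%s')" % entry['dns-nameservers'][0],
            'Gateway': entry['gateway'],
            'IP': 'static',
            'Interface': name,
        }
        return ''.join('%s=%s\n' % kv for kv in sorted(fields.items()))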
diff --git a/tests/unittests/test_distros/test_create_users.py b/tests/unittests/test_distros/test_create_users.py
index 1d02f7b..aa13670 100644
--- a/tests/unittests/test_distros/test_create_users.py
+++ b/tests/unittests/test_distros/test_create_users.py
@@ -1,7 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import distros
-from ..helpers import (TestCase, mock)
+from cloudinit.tests.helpers import (TestCase, mock)
 
 
 class MyBaseDistro(distros.Distro):
diff --git a/tests/unittests/test_distros/test_debian.py b/tests/unittests/test_distros/test_debian.py
index 2330ad5..da16a79 100644
--- a/tests/unittests/test_distros/test_debian.py
+++ b/tests/unittests/test_distros/test_debian.py
@@ -1,67 +1,85 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from ..helpers import (CiTestCase, mock)
-
-from cloudinit.distros.debian import apply_locale
+from cloudinit import distros
 from cloudinit import util
+from cloudinit.tests.helpers import (FilesystemMockingTestCase, mock)
 
 
 @mock.patch("cloudinit.distros.debian.util.subp")
-class TestDebianApplyLocale(CiTestCase):
+class TestDebianApplyLocale(FilesystemMockingTestCase):
+
+    def setUp(self):
+        super(TestDebianApplyLocale, self).setUp()
+        self.new_root = self.tmp_dir()
+        self.patchOS(self.new_root)
+        self.patchUtils(self.new_root)
+        self.spath = self.tmp_path('etc/default/locale', self.new_root)
+        cls = distros.fetch("debian")
+        self.distro = cls("debian", {}, None)
+
     def test_no_rerun(self, m_subp):
         """If system has defined locale, no re-run is expected."""
-        spath = self.tmp_path("default-locale")
         m_subp.return_value = (None, None)
         locale = 'en_US.UTF-8'
-        util.write_file(spath, 'LANG=%s\n' % locale, omode="w")
-        apply_locale(locale, sys_path=spath)
+        util.write_file(self.spath, 'LANG=%s\n' % locale, omode="w")
+        self.distro.apply_locale(locale, out_fn=self.spath)
         m_subp.assert_not_called()
 
+    def test_no_regen_on_c_utf8(self, m_subp):
+        """If locale is set to C.UTF8, do not attempt to call locale-gen"""
+        m_subp.return_value = (None, None)
+        locale = 'C.UTF-8'
+        util.write_file(self.spath, 'LANG=%s\n' % 'en_US.UTF-8', omode="w")
+        self.distro.apply_locale(locale, out_fn=self.spath)
+        self.assertEqual(
+            [['update-locale', '--locale-file=' + self.spath,
+              'LANG=%s' % locale]],
+            [p[0][0] for p in m_subp.call_args_list])
+
     def test_rerun_if_different(self, m_subp):
         """If system has different locale, locale-gen should be called."""
-        spath = self.tmp_path("default-locale")
         m_subp.return_value = (None, None)
         locale = 'en_US.UTF-8'
-        util.write_file(spath, 'LANG=fr_FR.UTF-8', omode="w")
-        apply_locale(locale, sys_path=spath)
+        util.write_file(self.spath, 'LANG=fr_FR.UTF-8', omode="w")
+        self.distro.apply_locale(locale, out_fn=self.spath)
         self.assertEqual(
             [['locale-gen', locale],
-             ['update-locale', '--locale-file=' + spath, 'LANG=%s' % locale]],
+             ['update-locale', '--locale-file=' + self.spath,
+              'LANG=%s' % locale]],
             [p[0][0] for p in m_subp.call_args_list])
 
     def test_rerun_if_no_file(self, m_subp):
         """If system has no locale file, locale-gen should be called."""
-        spath = self.tmp_path("default-locale")
         m_subp.return_value = (None, None)
         locale = 'en_US.UTF-8'
-        apply_locale(locale, sys_path=spath)
+        self.distro.apply_locale(locale, out_fn=self.spath)
         self.assertEqual(
             [['locale-gen', locale],
-             ['update-locale', '--locale-file=' + spath, 'LANG=%s' % locale]],
+             ['update-locale', '--locale-file=' + self.spath,
+              'LANG=%s' % locale]],
             [p[0][0] for p in m_subp.call_args_list])
 
     def test_rerun_on_unset_system_locale(self, m_subp):
         """If system has unset locale, locale-gen should be called."""
         m_subp.return_value = (None, None)
-        spath = self.tmp_path("default-locale")
         locale = 'en_US.UTF-8'
-        util.write_file(spath, 'LANG=', omode="w")
-        apply_locale(locale, sys_path=spath)
+        util.write_file(self.spath, 'LANG=', omode="w")
+        self.distro.apply_locale(locale, out_fn=self.spath)
         self.assertEqual(
             [['locale-gen', locale],
-             ['update-locale', '--locale-file=' + spath, 'LANG=%s' % locale]],
+             ['update-locale', '--locale-file=' + self.spath,
+              'LANG=%s' % locale]],
             [p[0][0] for p in m_subp.call_args_list])
 
     def test_rerun_on_mismatched_keys(self, m_subp):
         """If key is LC_ALL and system has only LANG, rerun is expected."""
         m_subp.return_value = (None, None)
-        spath = self.tmp_path("default-locale")
         locale = 'en_US.UTF-8'
-        util.write_file(spath, 'LANG=', omode="w")
-        apply_locale(locale, sys_path=spath, keyname='LC_ALL')
+        util.write_file(self.spath, 'LANG=', omode="w")
+        self.distro.apply_locale(locale, out_fn=self.spath, keyname='LC_ALL')
         self.assertEqual(
             [['locale-gen', locale],
-             ['update-locale', '--locale-file=' + spath,
+             ['update-locale', '--locale-file=' + self.spath,
               'LC_ALL=%s' % locale]],
             [p[0][0] for p in m_subp.call_args_list])
 
@@ -69,14 +87,14 @@ class TestDebianApplyLocale(CiTestCase):
         """locale as None or "" is invalid and should raise ValueError."""
 
         with self.assertRaises(ValueError) as ctext_m:
-            apply_locale(None)
+            self.distro.apply_locale(None)
             m_subp.assert_not_called()
 
         self.assertEqual(
             'Failed to provide locale value.', str(ctext_m.exception))
 
         with self.assertRaises(ValueError) as ctext_m:
-            apply_locale("")
+            self.distro.apply_locale("")
             m_subp.assert_not_called()
         self.assertEqual(
             'Failed to provide locale value.', str(ctext_m.exception))
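The command sequences asserted in TestDebianApplyLocale reduce to one small
decision. Sketched here with a hypothetical helper (the real logic lives in
cloudinit.distros.debian, and may differ in detail):

    def locale_commands(requested, system_locale, out_fn, keyname='LANG'):
        # No commands when the system already has the requested locale;
        # otherwise regenerate it (except C.UTF-8, which ships pre-built)
        # and point the locale file at the new value.
        if requested == system_locale:
            return []
        cmds = []
        if requested.lower() != 'c.utf-8':
            cmds.append(['locale-gen', requested])
        cmds.append(['update-locale', '--locale-file=' + out_fn,
                     '%s=%s' % (keyname, requested)])
        return cmds

For example, locale_commands('C.UTF-8', 'en_US.UTF-8', '/etc/default/locale')
returns only the update-locale invocation, matching test_no_regen_on_c_utf8.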
diff --git a/tests/unittests/test_distros/test_generic.py b/tests/unittests/test_distros/test_generic.py
index c9be277..791fe61 100644
--- a/tests/unittests/test_distros/test_generic.py
+++ b/tests/unittests/test_distros/test_generic.py
@@ -3,7 +3,7 @@
 from cloudinit import distros
 from cloudinit import util
 
-from .. import helpers
+from cloudinit.tests import helpers
 
 import os
 import shutil
@@ -228,5 +228,21 @@ class TestGenericDistro(helpers.FilesystemMockingTestCase):
         os.symlink('/', '/run/systemd/system')
         self.assertFalse(d.uses_systemd())
 
+    @mock.patch('cloudinit.distros.debian.read_system_locale')
+    def test_get_locale_ubuntu(self, m_locale):
+        """Test ubuntu distro returns locale set to C.UTF-8"""
+        m_locale.return_value = 'C.UTF-8'
+        cls = distros.fetch("ubuntu")
+        d = cls("ubuntu", {}, None)
+        locale = d.get_locale()
+        self.assertEqual('C.UTF-8', locale)
+
+    def test_get_locale_rhel(self):
+        """Test rhel distro returns NotImplementedError exception"""
+        cls = distros.fetch("rhel")
+        d = cls("rhel", {}, None)
+        with self.assertRaises(NotImplementedError):
+            d.get_locale()
+
 
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_distros/test_netconfig.py b/tests/unittests/test_distros/test_netconfig.py
index 2f505d9..c4bd11b 100644
--- a/tests/unittests/test_distros/test_netconfig.py
+++ b/tests/unittests/test_distros/test_netconfig.py
@@ -12,7 +12,7 @@ try:
 except ImportError:
     from contextlib2 import ExitStack
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 from cloudinit import distros
 from cloudinit.distros.parsers.sys_conf import SysConf
@@ -135,7 +135,7 @@ network:
 V2_NET_CFG = {
     'ethernets': {
         'eth7': {
-            'addresses': ['192.168.1.5/255.255.255.0'],
+            'addresses': ['192.168.1.5/24'],
             'gateway4': '192.168.1.254'},
         'eth9': {
             'dhcp4': True}
@@ -151,7 +151,6 @@ V2_TO_V2_NET_CFG_OUTPUT = """
 # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
 # network: {config: disabled}
 network:
-    version: 2
     ethernets:
         eth7:
             addresses:
@@ -159,6 +158,7 @@ network:
             gateway4: 192.168.1.254
         eth9:
             dhcp4: true
+    version: 2
 """
 
 
diff --git a/tests/unittests/test_distros/test_opensuse.py b/tests/unittests/test_distros/test_opensuse.py
new file mode 100644
index 0000000..b9bb9b3
--- /dev/null
+++ b/tests/unittests/test_distros/test_opensuse.py
@@ -0,0 +1,12 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.tests.helpers import CiTestCase
+
+from . import _get_distro
+
+
+class TestopenSUSE(CiTestCase):
+
+    def test_get_distro(self):
+        distro = _get_distro("opensuse")
+        self.assertEqual(distro.osfamily, 'suse')
diff --git a/tests/unittests/test_distros/test_resolv.py b/tests/unittests/test_distros/test_resolv.py
index 97168cf..68ea008 100644
--- a/tests/unittests/test_distros/test_resolv.py
+++ b/tests/unittests/test_distros/test_resolv.py
@@ -3,7 +3,7 @@
 from cloudinit.distros.parsers import resolv_conf
 from cloudinit.distros import rhel_util
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 import re
 import tempfile
diff --git a/tests/unittests/test_distros/test_sles.py b/tests/unittests/test_distros/test_sles.py
new file mode 100644
index 0000000..33e3c45
--- /dev/null
+++ b/tests/unittests/test_distros/test_sles.py
@@ -0,0 +1,12 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.tests.helpers import CiTestCase
+
+from . import _get_distro
+
+
+class TestSLES(CiTestCase):
+
+    def test_get_distro(self):
+        distro = _get_distro("sles")
+        self.assertEqual(distro.osfamily, 'suse')
diff --git a/tests/unittests/test_distros/test_sysconfig.py b/tests/unittests/test_distros/test_sysconfig.py
index 235eceb..c1d5b69 100644
--- a/tests/unittests/test_distros/test_sysconfig.py
+++ b/tests/unittests/test_distros/test_sysconfig.py
@@ -4,7 +4,7 @@ import re
 
 from cloudinit.distros.parsers.sys_conf import SysConf
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 
 # Lots of good examples @
diff --git a/tests/unittests/test_distros/test_user_data_normalize.py b/tests/unittests/test_distros/test_user_data_normalize.py
index 88746e0..0fa9cdb 100644
--- a/tests/unittests/test_distros/test_user_data_normalize.py
+++ b/tests/unittests/test_distros/test_user_data_normalize.py
@@ -5,7 +5,7 @@ from cloudinit.distros import ug_util
 from cloudinit import helpers
 from cloudinit import settings
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 import mock
 
 
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index 8ccfe55..1284e75 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -6,10 +6,15 @@ from uuid import uuid4
 
 from cloudinit import safeyaml
 from cloudinit import util
-from .helpers import CiTestCase, dir2dict, json_dumps, populate_dir
+from cloudinit.tests.helpers import (
+    CiTestCase, dir2dict, json_dumps, populate_dir)
 
 UNAME_MYSYS = ("Linux bart 4.4.0-62-generic #83-Ubuntu "
                "SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 GNU/Linux")
+UNAME_PPC64EL = ("Linux diamond 4.4.0-83-generic #106-Ubuntu SMP "
+                 "Mon Jun 26 17:53:54 UTC 2017 "
+                 "ppc64le ppc64le ppc64le GNU/Linux")
+
 BLKID_EFI_ROOT = """
 DEVNAME=/dev/sda1
 UUID=8B36-5390
@@ -22,8 +27,11 @@ TYPE=ext4
 PARTUUID=30c65c77-e07d-4039-b2fb-88b1fb5fa1fc
 """
 
+POLICY_FOUND_ONLY = "search,found=all,maybe=none,notfound=disabled"
+POLICY_FOUND_OR_MAYBE = "search,found=all,maybe=all,notfound=disabled"
 DI_DEFAULT_POLICY = "search,found=all,maybe=all,notfound=enabled"
 DI_DEFAULT_POLICY_NO_DMI = "search,found=all,maybe=all,notfound=disabled"
+DI_EC2_STRICT_ID_DEFAULT = "true"
 
 SHELL_MOCK_TMPL = """\
 %(name)s() {
@@ -47,6 +55,7 @@ P_SEED_DIR = "var/lib/cloud/seed"
 P_DSID_CFG = "etc/cloud/ds-identify.cfg"
 
 MOCK_VIRT_IS_KVM = {'name': 'detect_virt', 'RET': 'kvm', 'ret': 0}
+MOCK_UNAME_IS_PPC64 = {'name': 'uname', 'out': UNAME_PPC64EL, 'ret': 0}
 
 
 class TestDsIdentify(CiTestCase):
@@ -54,7 +63,8 @@ class TestDsIdentify(CiTestCase):
 
     def call(self, rootd=None, mocks=None, args=None, files=None,
              policy_dmi=DI_DEFAULT_POLICY,
-             policy_nodmi=DI_DEFAULT_POLICY_NO_DMI):
+             policy_no_dmi=DI_DEFAULT_POLICY_NO_DMI,
+             ec2_strict_id=DI_EC2_STRICT_ID_DEFAULT):
         if args is None:
             args = []
         if mocks is None:
@@ -80,7 +90,8 @@ class TestDsIdentify(CiTestCase):
             "PATH_ROOT='%s'" % rootd,
             ". " + self.dsid_path,
             'DI_DEFAULT_POLICY="%s"' % policy_dmi,
-            'DI_DEFAULT_POLICY_NO_DMI="%s"' % policy_nodmi,
+            'DI_DEFAULT_POLICY_NO_DMI="%s"' % policy_no_dmi,
+            'DI_EC2_STRICT_ID_DEFAULT="%s"' % ec2_strict_id,
             ""
         ]
 
@@ -136,7 +147,7 @@ class TestDsIdentify(CiTestCase):
     def _call_via_dict(self, data, rootd=None, **kwargs):
         # return output of self.call with a dict input like VALID_CFG[item]
         xwargs = {'rootd': rootd}
-        for k in ('mocks', 'args', 'policy_dmi', 'policy_nodmi', 'files'):
+        for k in ('mocks', 'args', 'policy_dmi', 'policy_no_dmi', 'files'):
             if k in data:
                 xwargs[k] = data[k]
             if k in kwargs:
@@ -260,6 +271,31 @@ class TestDsIdentify(CiTestCase):
         self._check_via_dict(mydata, rc=RC_FOUND, dslist=['AliYun', DS_NONE],
                              policy_dmi=policy)
 
+    def test_default_openstack_intel_is_found(self):
+        """On Intel, openstack must be identified."""
+        self._test_ds_found('OpenStack')
+
+    def test_openstack_on_non_intel_is_maybe(self):
+        """On non-Intel, openstack without dmi info is maybe.
+
+        nova does not identify itself on platforms other than intel.
+           https://bugs.launchpad.net/cloud-init/+bugs?field.tag=dsid-nova"""
+
+        data = VALID_CFG['OpenStack'].copy()
+        del data['files'][P_PRODUCT_NAME]
+        data.update({'policy_dmi': POLICY_FOUND_OR_MAYBE,
+                     'policy_no_dmi': POLICY_FOUND_OR_MAYBE})
+
+        # This should be "not found": the default uname in tests is intel,
+        # and intel openstack requires positive identification.
+        self._check_via_dict(data, RC_NOT_FOUND, dslist=None)
+
+        # updating the uname to ppc64 though should get a maybe.
+        data.update({'mocks': [MOCK_VIRT_IS_KVM, MOCK_UNAME_IS_PPC64]})
+        (_, _, err, _, _) = self._check_via_dict(
+            data, RC_FOUND, dslist=['OpenStack', 'None'])
+        self.assertIn("check for 'OpenStack' returned maybe", err)
+
 
 def blkid_out(disks=None):
     """Convert a list of disk dictionaries into blkid content."""
@@ -340,6 +376,13 @@ VALID_CFG = {
         'files': {P_PRODUCT_SERIAL: 'GoogleCloud-8f2e88f\n'},
         'mocks': [MOCK_VIRT_IS_KVM],
     },
+    'OpenStack': {
+        'ds': 'OpenStack',
+        'files': {P_PRODUCT_NAME: 'OpenStack Nova\n'},
+        'mocks': [MOCK_VIRT_IS_KVM],
+        'policy_dmi': POLICY_FOUND_ONLY,
+        'policy_no_dmi': POLICY_FOUND_ONLY,
+    },
     'ConfigDrive': {
         'ds': 'ConfigDrive',
         'mocks': [
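The new POLICY_* strings follow ds-identify's policy grammar: a mode, then key=value pairs saying what to do when datasources are found, only return "maybe", or are not found at all. ds-identify itself is shell, but a short Python sketch shows how such a string decomposes:

    # Illustration of the policy-string structure used above.
    def parse_policy(policy):
        mode, _, rest = policy.partition(',')
        settings = {'mode': mode}
        for token in filter(None, rest.split(',')):
            key, _, value = token.partition('=')
            settings[key] = value
        return settings

    assert parse_policy('search,found=all,maybe=none,notfound=disabled') == {
        'mode': 'search', 'found': 'all', 'maybe': 'none',
        'notfound': 'disabled'}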
diff --git a/tests/unittests/test_ec2_util.py b/tests/unittests/test_ec2_util.py
index 65fdb51..af78997 100644
--- a/tests/unittests/test_ec2_util.py
+++ b/tests/unittests/test_ec2_util.py
@@ -2,7 +2,7 @@
 
 import httpretty as hp
 
-from . import helpers
+from cloudinit.tests import helpers
 
 from cloudinit import ec2_utils as eu
 from cloudinit import url_helper as uh
diff --git a/tests/unittests/test_filters/test_launch_index.py b/tests/unittests/test_filters/test_launch_index.py
index 13137f6..6364d38 100644
--- a/tests/unittests/test_filters/test_launch_index.py
+++ b/tests/unittests/test_filters/test_launch_index.py
@@ -2,7 +2,7 @@
 
 import copy
 
-from .. import helpers
+from cloudinit.tests import helpers
 
 from six.moves import filterfalse
 
diff --git a/tests/unittests/test_handler/test_handler_apt_conf_v1.py b/tests/unittests/test_handler/test_handler_apt_conf_v1.py
index 554277f..83f962a 100644
--- a/tests/unittests/test_handler/test_handler_apt_conf_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_conf_v1.py
@@ -3,7 +3,7 @@
 from cloudinit.config import cc_apt_configure
 from cloudinit import util
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 import copy
 import os
diff --git a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
index f53ddbb..d2b96f0 100644
--- a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
@@ -24,7 +24,7 @@ from cloudinit.sources import DataSourceNone
 
 from cloudinit.distros.debian import Distro
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v3.py b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v3.py
index 1ca915b..f7608c2 100644
--- a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v3.py
@@ -24,7 +24,7 @@ from cloudinit.sources import DataSourceNone
 
 from cloudinit.distros.debian import Distro
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v1.py b/tests/unittests/test_handler/test_handler_apt_source_v1.py
index 12502d0..3a3f95c 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v1.py
@@ -20,7 +20,7 @@ from cloudinit.config import cc_apt_configure
 from cloudinit import gpg
 from cloudinit import util
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 EXPECTEDKEY = """-----BEGIN PGP PUBLIC KEY BLOCK-----
 Version: GnuPG v1
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v3.py b/tests/unittests/test_handler/test_handler_apt_source_v3.py
index 292d3f5..7bb1b7c 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v3.py
@@ -28,7 +28,7 @@ from cloudinit import util
 from cloudinit.config import cc_apt_configure
 from cloudinit.sources import DataSourceNone
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 EXPECTEDKEY = u"""-----BEGIN PGP PUBLIC KEY BLOCK-----
 Version: GnuPG v1
diff --git a/tests/unittests/test_handler/test_handler_bootcmd.py b/tests/unittests/test_handler/test_handler_bootcmd.py
new file mode 100644
index 0000000..dbf43e0
--- /dev/null
+++ b/tests/unittests/test_handler/test_handler_bootcmd.py
@@ -0,0 +1,146 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.config import cc_bootcmd
+from cloudinit.sources import DataSourceNone
+from cloudinit import (distros, helpers, cloud, util)
+from cloudinit.tests.helpers import CiTestCase, mock, skipIf
+
+import logging
+import tempfile
+
+try:
+    import jsonschema
+    assert jsonschema  # avoid pyflakes error F401: import unused
+    _missing_jsonschema_dep = False
+except ImportError:
+    _missing_jsonschema_dep = True
+
+LOG = logging.getLogger(__name__)
+
+
+class FakeExtendedTempFile(object):
+    def __init__(self, suffix):
+        self.suffix = suffix
+        self.handle = tempfile.NamedTemporaryFile(
+            prefix="ci-%s." % self.__class__.__name__, delete=False)
+
+    def __enter__(self):
+        return self.handle
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        self.handle.close()
+        util.del_file(self.handle.name)
+
+
+class TestBootcmd(CiTestCase):
+
+    with_logs = True
+
+    _etmpfile_path = ('cloudinit.config.cc_bootcmd.temp_utils.'
+                      'ExtendedTemporaryFile')
+
+    def setUp(self):
+        super(TestBootcmd, self).setUp()
+        self.subp = util.subp
+        self.new_root = self.tmp_dir()
+
+    def _get_cloud(self, distro):
+        paths = helpers.Paths({})
+        cls = distros.fetch(distro)
+        mydist = cls(distro, {}, paths)
+        myds = DataSourceNone.DataSourceNone({}, mydist, paths)
+        paths.datasource = myds
+        return cloud.Cloud(myds, paths, {}, mydist, None)
+
+    def test_handler_skip_if_no_bootcmd(self):
+        """When the provided config doesn't contain bootcmd, skip it."""
+        cfg = {}
+        mycloud = self._get_cloud('ubuntu')
+        cc_bootcmd.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertIn(
+            "Skipping module named notimportant, no 'bootcmd' key",
+            self.logs.getvalue())
+
+    def test_handler_invalid_command_set(self):
+        """Commands which can't be converted to shell will raise errors."""
+        invalid_config = {'bootcmd': 1}
+        cc = self._get_cloud('ubuntu')
+        with self.assertRaises(TypeError) as context_manager:
+            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+        self.assertIn('Failed to shellify bootcmd', self.logs.getvalue())
+        self.assertEqual(
+            "'int' object is not iterable",
+            str(context_manager.exception))
+
+    @skipIf(_missing_jsonschema_dep, "No python-jsonschema dependency")
+    def test_handler_schema_validation_warns_non_array_type(self):
+        """Schema validation warns of non-array type for bootcmd key.
+
+        Schema validation is not strict, so bootcmd attempts to shellify the
+        invalid content.
+        """
+        invalid_config = {'bootcmd': 1}
+        cc = self._get_cloud('ubuntu')
+        with self.assertRaises(TypeError):
+            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+        self.assertIn(
+            'Invalid config:\nbootcmd: 1 is not of type \'array\'',
+            self.logs.getvalue())
+        self.assertIn('Failed to shellify', self.logs.getvalue())
+
+    @skipIf(_missing_jsonschema_dep, 'No python-jsonschema dependency')
+    def test_handler_schema_validation_warns_non_array_item_type(self):
+        """Schema validation warns of non-array or string bootcmd items.
+
+        Schema validation is not strict, so bootcmd attempts to shellify the
+        invalid content.
+        """
+        invalid_config = {
+            'bootcmd': ['ls /', 20, ['wget', 'http://stuff/blah'], {'a': 'n'}]}
+        cc = self._get_cloud('ubuntu')
+        with self.assertRaises(RuntimeError) as context_manager:
+            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+        expected_warnings = [
+            'bootcmd.1: 20 is not valid under any of the given schemas',
+            'bootcmd.3: {\'a\': \'n\'} is not valid under any of the given'
+            ' schema'
+        ]
+        logs = self.logs.getvalue()
+        for warning in expected_warnings:
+            self.assertIn(warning, logs)
+        self.assertIn('Failed to shellify', logs)
+        self.assertEqual(
+            'Unable to shellify type int which is not a list or string',
+            str(context_manager.exception))
+
+    def test_handler_creates_and_runs_bootcmd_script_with_instance_id(self):
+        """Valid schema runs a bootcmd script with INSTANCE_ID in the env."""
+        cc = self._get_cloud('ubuntu')
+        out_file = self.tmp_path('bootcmd.out', self.new_root)
+        my_id = "b6ea0f59-e27d-49c6-9f87-79f19765a425"
+        valid_config = {'bootcmd': [
+            'echo {0} $INSTANCE_ID > {1}'.format(my_id, out_file)]}
+
+        with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
+            cc_bootcmd.handle('cc_bootcmd', valid_config, cc, LOG, [])
+        self.assertEqual(my_id + ' iid-datasource-none\n',
+                         util.load_file(out_file))
+
+    def test_handler_runs_bootcmd_script_with_error(self):
+        """When a valid script generates an error, that error is raised."""
+        cc = self._get_cloud('ubuntu')
+        valid_config = {'bootcmd': ['exit 1']}  # Script with error
+
+        with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
+            with self.assertRaises(util.ProcessExecutionError) as ctxt_manager:
+                cc_bootcmd.handle('does-not-matter', valid_config, cc, LOG, [])
+        self.assertIn(
+            'Unexpected error while running command.\n'
+            "Command: ['/bin/sh',",
+            str(ctxt_manager.exception))
+        self.assertIn(
+            'Failed to run bootcmd module does-not-matter',
+            self.logs.getvalue())
+
+
+# vi: ts=4 expandtab
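For context on the error assertions in this new file: bootcmd turns its list into a shell script, where string items become script lines, list items become quoted argv lines, and anything else is rejected. A rough sketch of that behavior (cloud-init's real helper is util.shellify; this only mirrors the errors the tests assert):

    def shellify_sketch(cmds):
        # Iterating a non-list config such as {'bootcmd': 1} raises
        # TypeError ("'int' object is not iterable"), as tested above.
        lines = ['#!/bin/sh']
        for cmd in cmds:
            if isinstance(cmd, str):
                lines.append(cmd)
            elif isinstance(cmd, (list, tuple)):
                lines.append(' '.join("'%s'" % arg for arg in cmd))
            else:
                raise RuntimeError(
                    'Unable to shellify type %s which is not a list '
                    'or string' % type(cmd).__name__)
        return '\n'.join(lines) + '\n'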
diff --git a/tests/unittests/test_handler/test_handler_ca_certs.py b/tests/unittests/test_handler/test_handler_ca_certs.py
index 7cee2c3..06e14db 100644
--- a/tests/unittests/test_handler/test_handler_ca_certs.py
+++ b/tests/unittests/test_handler/test_handler_ca_certs.py
@@ -5,7 +5,7 @@ from cloudinit.config import cc_ca_certs
 from cloudinit import helpers
 from cloudinit import util
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 import logging
 import shutil
diff --git a/tests/unittests/test_handler/test_handler_chef.py b/tests/unittests/test_handler/test_handler_chef.py
index 6a152ea..0136a93 100644
--- a/tests/unittests/test_handler/test_handler_chef.py
+++ b/tests/unittests/test_handler/test_handler_chef.py
@@ -1,11 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import httpretty
 import json
 import logging
 import os
-import shutil
 import six
-import tempfile
 
 from cloudinit import cloud
 from cloudinit.config import cc_chef
@@ -14,18 +13,83 @@ from cloudinit import helpers
 from cloudinit.sources import DataSourceNone
 from cloudinit import util
 
-from .. import helpers as t_help
+from cloudinit.tests.helpers import (
+    CiTestCase, FilesystemMockingTestCase, mock, skipIf)
 
 LOG = logging.getLogger(__name__)
 
 CLIENT_TEMPL = os.path.sep.join(["templates", "chef_client.rb.tmpl"])
 
 
-class TestChef(t_help.FilesystemMockingTestCase):
+class TestInstallChefOmnibus(CiTestCase):
+
+    def setUp(self):
+        super(TestInstallChefOmnibus, self).setUp()
+        self.new_root = self.tmp_dir()
+
+    @httpretty.activate
+    def test_install_chef_from_omnibus_runs_chef_url_content(self):
+        """install_chef_from_omnibus runs downloaded OMNIBUS_URL as script."""
+        chef_outfile = self.tmp_path('chef.out', self.new_root)
+        response = '#!/bin/bash\necho "Hi Mom" > {0}'.format(chef_outfile)
+        httpretty.register_uri(
+            httpretty.GET, cc_chef.OMNIBUS_URL, body=response, status=200)
+        cc_chef.install_chef_from_omnibus()
+        self.assertEqual('Hi Mom\n', util.load_file(chef_outfile))
+
+    @mock.patch('cloudinit.config.cc_chef.url_helper.readurl')
+    @mock.patch('cloudinit.config.cc_chef.util.subp_blob_in_tempfile')
+    def test_install_chef_from_omnibus_retries_url(self, m_subp_blob, m_rdurl):
+        """install_chef_from_omnibus retries OMNIBUS_URL upon failure."""
+
+        class FakeURLResponse(object):
+            contents = '#!/bin/bash\necho "Hi Mom" > {0}/chef.out'.format(
+                self.new_root)
+
+        m_rdurl.return_value = FakeURLResponse()
+
+        cc_chef.install_chef_from_omnibus()
+        expected_kwargs = {'retries': cc_chef.OMNIBUS_URL_RETRIES,
+                           'url': cc_chef.OMNIBUS_URL}
+        self.assertItemsEqual(expected_kwargs, m_rdurl.call_args_list[0][1])
+        cc_chef.install_chef_from_omnibus(retries=10)
+        expected_kwargs = {'retries': 10,
+                           'url': cc_chef.OMNIBUS_URL}
+        self.assertItemsEqual(expected_kwargs, m_rdurl.call_args_list[1][1])
+        expected_subp_kwargs = {
+            'args': ['-v', '2.0'],
+            'basename': 'chef-omnibus-install',
+            'blob': m_rdurl.return_value.contents,
+            'capture': False
+        }
+        self.assertItemsEqual(
+            expected_subp_kwargs,
+            m_subp_blob.call_args_list[0][1])
+
+    @httpretty.activate
+    @mock.patch('cloudinit.config.cc_chef.util.subp_blob_in_tempfile')
+    def test_install_chef_from_omnibus_has_omnibus_version(self, m_subp_blob):
+        """install_chef_from_omnibus provides version arg to OMNIBUS_URL."""
+        chef_outfile = self.tmp_path('chef.out', self.new_root)
+        response = '#!/bin/bash\necho "Hi Mom" > {0}'.format(chef_outfile)
+        httpretty.register_uri(
+            httpretty.GET, cc_chef.OMNIBUS_URL, body=response)
+        cc_chef.install_chef_from_omnibus(omnibus_version='2.0')
+
+        called_kwargs = m_subp_blob.call_args_list[0][1]
+        expected_kwargs = {
+            'args': ['-v', '2.0'],
+            'basename': 'chef-omnibus-install',
+            'blob': response,
+            'capture': False
+        }
+        self.assertItemsEqual(expected_kwargs, called_kwargs)
+
+
+class TestChef(FilesystemMockingTestCase):
+
     def setUp(self):
         super(TestChef, self).setUp()
-        self.tmp = tempfile.mkdtemp()
-        self.addCleanup(shutil.rmtree, self.tmp)
+        self.tmp = self.tmp_dir()
 
     def fetch_cloud(self, distro_kind):
         cls = distros.fetch(distro_kind)
@@ -43,8 +107,8 @@ class TestChef(t_help.FilesystemMockingTestCase):
         for d in cc_chef.CHEF_DIRS:
             self.assertFalse(os.path.isdir(d))
 
-    @t_help.skipIf(not os.path.isfile(CLIENT_TEMPL),
-                   CLIENT_TEMPL + " is not available")
+    @skipIf(not os.path.isfile(CLIENT_TEMPL),
+            CLIENT_TEMPL + " is not available")
     def test_basic_config(self):
         """
         test basic config looks sane
@@ -122,8 +186,8 @@ class TestChef(t_help.FilesystemMockingTestCase):
                 'c': 'd',
             }, json.loads(c))
 
-    @t_help.skipIf(not os.path.isfile(CLIENT_TEMPL),
-                   CLIENT_TEMPL + " is not available")
+    @skipIf(not os.path.isfile(CLIENT_TEMPL),
+            CLIENT_TEMPL + " is not available")
     def test_template_deletes(self):
         tpl_file = util.load_file('templates/chef_client.rb.tmpl')
         self.patchUtils(self.tmp)
@@ -143,8 +207,8 @@ class TestChef(t_help.FilesystemMockingTestCase):
         self.assertNotIn('json_attribs', c)
         self.assertNotIn('Formatter.show_time', c)
 
-    @t_help.skipIf(not os.path.isfile(CLIENT_TEMPL),
-                   CLIENT_TEMPL + " is not available")
+    @skipIf(not os.path.isfile(CLIENT_TEMPL),
+            CLIENT_TEMPL + " is not available")
     def test_validation_cert_and_validation_key(self):
         # test validation_cert content is written to validation_key path
         tpl_file = util.load_file('templates/chef_client.rb.tmpl')
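The omnibus tests above all exercise one pattern: fetch the installer script over HTTP (with retries), write it to a temporary file, and execute it with optional version arguments. A stdlib-only sketch of that flow, assuming Python 3 (cc_chef itself goes through cloud-init's url_helper and util.subp_blob_in_tempfile):

    import os
    import stat
    import subprocess
    import tempfile
    import urllib.request

    def run_install_script(url, version=None):
        # Download the script, persist it, mark it executable, run it.
        blob = urllib.request.urlopen(url).read()
        args = ['-v', version] if version else []
        fd, path = tempfile.mkstemp(suffix='.sh')
        try:
            os.write(fd, blob)
            os.close(fd)
            os.chmod(path, stat.S_IRWXU)
            subprocess.check_call([path] + args)
        finally:
            os.unlink(path)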
diff --git a/tests/unittests/test_handler/test_handler_debug.py b/tests/unittests/test_handler/test_handler_debug.py
index 929f786..787ba35 100644
--- a/tests/unittests/test_handler/test_handler_debug.py
+++ b/tests/unittests/test_handler/test_handler_debug.py
@@ -11,7 +11,7 @@ from cloudinit import util
 
 from cloudinit.sources import DataSourceNone
 
-from .. import helpers as t_help
+from cloudinit.tests.helpers import (FilesystemMockingTestCase, mock)
 
 import logging
 import shutil
@@ -20,7 +20,8 @@ import tempfile
 LOG = logging.getLogger(__name__)
 
 
-class TestDebug(t_help.FilesystemMockingTestCase):
+@mock.patch('cloudinit.distros.debian.read_system_locale')
+class TestDebug(FilesystemMockingTestCase):
     def setUp(self):
         super(TestDebug, self).setUp()
         self.new_root = tempfile.mkdtemp()
@@ -36,7 +37,8 @@ class TestDebug(t_help.FilesystemMockingTestCase):
             ds.metadata.update(metadata)
         return cloud.Cloud(ds, paths, {}, d, None)
 
-    def test_debug_write(self):
+    def test_debug_write(self, m_locale):
+        m_locale.return_value = 'en_US.UTF-8'
         cfg = {
             'abc': '123',
             'c': u'\u20a0',
@@ -54,7 +56,8 @@ class TestDebug(t_help.FilesystemMockingTestCase):
         for k in cfg.keys():
             self.assertIn(k, contents)
 
-    def test_debug_no_write(self):
+    def test_debug_no_write(self, m_locale):
+        m_locale.return_value = 'en_US.UTF-8'
         cfg = {
             'abc': '123',
             'debug': {
diff --git a/tests/unittests/test_handler/test_handler_disk_setup.py b/tests/unittests/test_handler/test_handler_disk_setup.py
index 8a6d49e..5afcaca 100644
--- a/tests/unittests/test_handler/test_handler_disk_setup.py
+++ b/tests/unittests/test_handler/test_handler_disk_setup.py
@@ -3,7 +3,7 @@
 import random
 
 from cloudinit.config import cc_disk_setup
-from ..helpers import CiTestCase, ExitStack, mock, TestCase
+from cloudinit.tests.helpers import CiTestCase, ExitStack, mock, TestCase
 
 
 class TestIsDiskUsed(TestCase):
diff --git a/tests/unittests/test_handler/test_handler_growpart.py b/tests/unittests/test_handler/test_handler_growpart.py
index c5fc8c9..a3e4635 100644
--- a/tests/unittests/test_handler/test_handler_growpart.py
+++ b/tests/unittests/test_handler/test_handler_growpart.py
@@ -4,7 +4,7 @@ from cloudinit import cloud
 from cloudinit.config import cc_growpart
 from cloudinit import util
 
-from ..helpers import TestCase
+from cloudinit.tests.helpers import TestCase
 
 import errno
 import logging
diff --git a/tests/unittests/test_handler/test_handler_landscape.py b/tests/unittests/test_handler/test_handler_landscape.py
new file mode 100644
index 0000000..db92a7e
--- /dev/null
+++ b/tests/unittests/test_handler/test_handler_landscape.py
@@ -0,0 +1,130 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.config import cc_landscape
+from cloudinit import (distros, helpers, cloud, util)
+from cloudinit.sources import DataSourceNone
+from cloudinit.tests.helpers import (FilesystemMockingTestCase, mock,
+                                     wrap_and_call)
+
+from configobj import ConfigObj
+import logging
+
+
+LOG = logging.getLogger(__name__)
+
+
+class TestLandscape(FilesystemMockingTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestLandscape, self).setUp()
+        self.new_root = self.tmp_dir()
+        self.conf = self.tmp_path('client.conf', self.new_root)
+        self.default_file = self.tmp_path('default_landscape', self.new_root)
+
+    def _get_cloud(self, distro):
+        self.patchUtils(self.new_root)
+        paths = helpers.Paths({'templates_dir': self.new_root})
+        cls = distros.fetch(distro)
+        mydist = cls(distro, {}, paths)
+        myds = DataSourceNone.DataSourceNone({}, mydist, paths)
+        return cloud.Cloud(myds, paths, {}, mydist, None)
+
+    def test_handler_skips_empty_landscape_cloudconfig(self):
+        """Empty landscape cloud-config section does no work."""
+        mycloud = self._get_cloud('ubuntu')
+        mycloud.distro = mock.MagicMock()
+        cfg = {'landscape': {}}
+        cc_landscape.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertFalse(mycloud.distro.install_packages.called)
+
+    def test_handler_error_on_invalid_landscape_type(self):
+        """Raise an error when landscape configuraiton option is invalid."""
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {'landscape': 'wrongtype'}
+        with self.assertRaises(RuntimeError) as context_manager:
+            cc_landscape.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertIn(
+            "'landscape' key existed in config, but not a dict",
+            str(context_manager.exception))
+
+    @mock.patch('cloudinit.config.cc_landscape.util')
+    def test_handler_restarts_landscape_client(self, m_util):
+        """handler restarts lansdscape-client after install."""
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {'landscape': {'client': {}}}
+        wrap_and_call(
+            'cloudinit.config.cc_landscape',
+            {'LSC_CLIENT_CFG_FILE': {'new': self.conf}},
+            cc_landscape.handle, 'notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(
+            [mock.call(['service', 'landscape-client', 'restart'])],
+            m_util.subp.call_args_list)
+
+    def test_handler_installs_client_and_creates_config_file(self):
+        """Write landscape client.conf and install landscape-client."""
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {'landscape': {'client': {}}}
+        expected = {'client': {
+            'log_level': 'info',
+            'url': 'https://landscape.canonical.com/message-system',
+            'ping_url': 'http://landscape.canonical.com/ping',
+            'data_path': '/var/lib/landscape/client'}}
+        mycloud.distro = mock.MagicMock()
+        wrap_and_call(
+            'cloudinit.config.cc_landscape',
+            {'LSC_CLIENT_CFG_FILE': {'new': self.conf},
+             'LS_DEFAULT_FILE': {'new': self.default_file}},
+            cc_landscape.handle, 'notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(
+            [mock.call('landscape-client')],
+            mycloud.distro.install_packages.call_args)
+        self.assertEqual(expected, dict(ConfigObj(self.conf)))
+        self.assertIn(
+            'Wrote landscape config file to {0}'.format(self.conf),
+            self.logs.getvalue())
+        default_content = util.load_file(self.default_file)
+        self.assertEqual('RUN=1\n', default_content)
+
+    def test_handler_writes_merged_client_config_file_with_defaults(self):
+        """Merge and write options from LSC_CLIENT_CFG_FILE with defaults."""
+        # Write existing sparse client.conf file
+        util.write_file(self.conf, '[client]\ncomputer_title = My PC\n')
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {'landscape': {'client': {}}}
+        expected = {'client': {
+            'log_level': 'info',
+            'url': 'https://landscape.canonical.com/message-system',
+            'ping_url': 'http://landscape.canonical.com/ping',
+            'data_path': '/var/lib/landscape/client',
+            'computer_title': 'My PC'}}
+        wrap_and_call(
+            'cloudinit.config.cc_landscape',
+            {'LSC_CLIENT_CFG_FILE': {'new': self.conf}},
+            cc_landscape.handle, 'notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(expected, dict(ConfigObj(self.conf)))
+        self.assertIn(
+            'Wrote landscape config file to {0}'.format(self.conf),
+            self.logs.getvalue())
+
+    def test_handler_writes_merged_provided_cloudconfig_with_defaults(self):
+        """Merge and write options from cloud-config options with defaults."""
+        # Write empty sparse client.conf file
+        util.write_file(self.conf, '')
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {'landscape': {'client': {'computer_title': 'My PC'}}}
+        expected = {'client': {
+            'log_level': 'info',
+            'url': 'https://landscape.canonical.com/message-system',
+            'ping_url': 'http://landscape.canonical.com/ping',
+            'data_path': '/var/lib/landscape/client',
+            'computer_title': 'My PC'}}
+        wrap_and_call(
+            'cloudinit.config.cc_landscape',
+            {'LSC_CLIENT_CFG_FILE': {'new': self.conf}},
+            cc_landscape.handle, 'notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(expected, dict(ConfigObj(self.conf)))
+        self.assertIn(
+            'Wrote landscape config file to {0}'.format(self.conf),
+            self.logs.getvalue())
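All of the landscape assertions above reduce to a layering rule: built-in defaults, overlaid with whatever already sits in client.conf and with the cloud-config 'client' options. A configobj sketch of that layering (the exact precedence between file and cloud-config isn't pinned down by these tests, so treat this as an illustration; the defaults are copied from the expected dicts above):

    from configobj import ConfigObj

    LSC_DEFAULTS = {'client': {
        'log_level': 'info',
        'url': 'https://landscape.canonical.com/message-system',
        'ping_url': 'http://landscape.canonical.com/ping',
        'data_path': '/var/lib/landscape/client'}}

    def merged_client_config(conf_path, cloud_cfg_client):
        # Later merge() calls override earlier values.
        merged = ConfigObj(LSC_DEFAULTS)
        merged.merge(ConfigObj(conf_path))
        merged.merge(ConfigObj({'client': cloud_cfg_client}))
        return merged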
diff --git a/tests/unittests/test_handler/test_handler_locale.py b/tests/unittests/test_handler/test_handler_locale.py
index e9a810c..e29a06f 100644
--- a/tests/unittests/test_handler/test_handler_locale.py
+++ b/tests/unittests/test_handler/test_handler_locale.py
@@ -13,13 +13,15 @@ from cloudinit import util
 
 from cloudinit.sources import DataSourceNoCloud
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 from configobj import ConfigObj
 
 from six import BytesIO
 
 import logging
+import mock
+import os
 import shutil
 import tempfile
 
@@ -27,6 +29,9 @@ LOG = logging.getLogger(__name__)
 
 
 class TestLocale(t_help.FilesystemMockingTestCase):
+
+    with_logs = True
+
     def setUp(self):
         super(TestLocale, self).setUp()
         self.new_root = tempfile.mkdtemp()
@@ -49,9 +54,58 @@ class TestLocale(t_help.FilesystemMockingTestCase):
         }
         cc = self._get_cloud('sles')
         cc_locale.handle('cc_locale', cfg, cc, LOG, [])
+        if cc.distro.uses_systemd():
+            locale_conf = cc.distro.systemd_locale_conf_fn
+        else:
+            locale_conf = cc.distro.locale_conf_fn
+        contents = util.load_file(locale_conf, decode=False)
+        n_cfg = ConfigObj(BytesIO(contents))
+        if cc.distro.uses_systemd():
+            self.assertEqual({'LANG': cfg['locale']}, dict(n_cfg))
+        else:
+            self.assertEqual({'RC_LANG': cfg['locale']}, dict(n_cfg))
+
+    def test_set_locale_sles_default(self):
+        cfg = {}
+        cc = self._get_cloud('sles')
+        cc_locale.handle('cc_locale', cfg, cc, LOG, [])
 
-        contents = util.load_file('/etc/sysconfig/language', decode=False)
+        if cc.distro.uses_systemd():
+            locale_conf = cc.distro.systemd_locale_conf_fn
+            keyname = 'LANG'
+        else:
+            locale_conf = cc.distro.locale_conf_fn
+            keyname = 'RC_LANG'
+
+        contents = util.load_file(locale_conf, decode=False)
         n_cfg = ConfigObj(BytesIO(contents))
-        self.assertEqual({'RC_LANG': cfg['locale']}, dict(n_cfg))
+        self.assertEqual({keyname: 'en_US.UTF-8'}, dict(n_cfg))
+
+    def test_locale_update_config_if_different_than_default(self):
+        """Test cc_locale writes updates conf if different than default"""
+        locale_conf = os.path.join(self.new_root, "etc/default/locale")
+        util.write_file(locale_conf, 'LANG="en_US.UTF-8"\n')
+        cfg = {'locale': 'C.UTF-8'}
+        cc = self._get_cloud('ubuntu')
+        with mock.patch('cloudinit.distros.debian.util.subp') as m_subp:
+            with mock.patch('cloudinit.distros.debian.LOCALE_CONF_FN',
+                            locale_conf):
+                cc_locale.handle('cc_locale', cfg, cc, LOG, [])
+                m_subp.assert_called_with(['update-locale',
+                                           '--locale-file=%s' % locale_conf,
+                                           'LANG=C.UTF-8'], capture=False)
+
+    def test_locale_rhel_defaults_en_us_utf8(self):
+        """Test cc_locale gets en_US.UTF-8 from distro get_locale fallback"""
+        cfg = {}
+        cc = self._get_cloud('rhel')
+        update_sysconfig = 'cloudinit.distros.rhel_util.update_sysconfig_file'
+        with mock.patch.object(cc.distro, 'uses_systemd') as m_use_sd:
+            m_use_sd.return_value = True
+            with mock.patch(update_sysconfig) as m_update_syscfg:
+                cc_locale.handle('cc_locale', cfg, cc, LOG, [])
+                m_update_syscfg.assert_called_with('/etc/locale.conf',
+                                                   {'LANG': 'en_US.UTF-8'})
+
 
 # vi: ts=4 expandtab
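
The two Debian-path additions above rest on a simple guard: read LANG from the locale conf and shell out to update-locale only when the requested locale differs. A hedged sketch of that check (the real logic lives in cloudinit.distros.debian; the update-locale invocation matches the one asserted above):

    import re
    import subprocess

    def apply_locale(locale, conf_fn='/etc/default/locale'):
        # Read the currently configured LANG, if any.
        try:
            with open(conf_fn) as stream:
                match = re.search(
                    r'^LANG="?([^"\n]*)"?', stream.read(), re.M)
            current = match.group(1) if match else None
        except IOError:
            current = None
        if current != locale:
            subprocess.check_call(
                ['update-locale', '--locale-file=%s' % conf_fn,
                 'LANG=%s' % locale])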
diff --git a/tests/unittests/test_handler/test_handler_lxd.py b/tests/unittests/test_handler/test_handler_lxd.py
index 351226b..f132a77 100644
--- a/tests/unittests/test_handler/test_handler_lxd.py
+++ b/tests/unittests/test_handler/test_handler_lxd.py
@@ -3,7 +3,7 @@
 from cloudinit.config import cc_lxd
 from cloudinit.sources import DataSourceNoCloud
 from cloudinit import (distros, helpers, cloud)
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 import logging
 
diff --git a/tests/unittests/test_handler/test_handler_mcollective.py b/tests/unittests/test_handler/test_handler_mcollective.py
index 2a9f382..7eec735 100644
--- a/tests/unittests/test_handler/test_handler_mcollective.py
+++ b/tests/unittests/test_handler/test_handler_mcollective.py
@@ -4,7 +4,7 @@ from cloudinit import (cloud, distros, helpers, util)
 from cloudinit.config import cc_mcollective
 from cloudinit.sources import DataSourceNoCloud
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 import configobj
 import logging
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index 650ca0e..fe492d4 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -6,7 +6,7 @@ import tempfile
 
 from cloudinit.config import cc_mounts
 
-from .. import helpers as test_helpers
+from cloudinit.tests import helpers as test_helpers
 
 try:
     from unittest import mock
diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py
index 7f27864..4f29124 100644
--- a/tests/unittests/test_handler/test_handler_ntp.py
+++ b/tests/unittests/test_handler/test_handler_ntp.py
@@ -3,7 +3,7 @@
 from cloudinit.config import cc_ntp
 from cloudinit.sources import DataSourceNone
 from cloudinit import (distros, helpers, cloud, util)
-from ..helpers import FilesystemMockingTestCase, mock, skipIf
+from cloudinit.tests.helpers import FilesystemMockingTestCase, mock, skipIf
 
 
 import os
@@ -16,6 +16,14 @@ servers {{servers}}
 pools {{pools}}
 """
 
+TIMESYNCD_TEMPLATE = b"""\
+## template:jinja
+[Time]
+{% if servers or pools -%}
+NTP={% for host in servers|list + pools|list %}{{ host }} {% endfor -%}
+{% endif -%}
+"""
+
 try:
     import jsonschema
     assert jsonschema  # avoid pyflakes error F401: import unused
@@ -59,6 +67,14 @@ class TestNtp(FilesystemMockingTestCase):
         cc_ntp.install_ntp(install_func, packages=['ntp'], check_exe='ntpd')
         install_func.assert_not_called()
 
+    @mock.patch("cloudinit.config.cc_ntp.util")
+    def test_ntp_install_no_op_with_empty_pkg_list(self, mock_util):
+        """ntp_install calls install_func with empty list"""
+        mock_util.which.return_value = None  # check_exe not found
+        install_func = mock.MagicMock()
+        cc_ntp.install_ntp(install_func, packages=[], check_exe='timesyncd')
+        install_func.assert_called_once_with([])
+
     def test_ntp_rename_ntp_conf(self):
         """When NTP_CONF exists, rename_ntp moves it."""
         ntpconf = self.tmp_path("ntp.conf", self.new_root)
@@ -68,6 +84,30 @@ class TestNtp(FilesystemMockingTestCase):
         self.assertFalse(os.path.exists(ntpconf))
         self.assertTrue(os.path.exists("{0}.dist".format(ntpconf)))
 
+    @mock.patch("cloudinit.config.cc_ntp.util")
+    def test_reload_ntp_defaults(self, mock_util):
+        """Test service is restarted/reloaded (defaults)"""
+        service = 'ntp'
+        cmd = ['service', service, 'restart']
+        cc_ntp.reload_ntp(service)
+        mock_util.subp.assert_called_with(cmd, capture=True)
+
+    @mock.patch("cloudinit.config.cc_ntp.util")
+    def test_reload_ntp_systemd(self, mock_util):
+        """Test service is restarted/reloaded (systemd)"""
+        service = 'ntp'
+        cmd = ['systemctl', 'reload-or-restart', service]
+        cc_ntp.reload_ntp(service, systemd=True)
+        mock_util.subp.assert_called_with(cmd, capture=True)
+
+    @mock.patch("cloudinit.config.cc_ntp.util")
+    def test_reload_ntp_systemd_timesyncd(self, mock_util):
+        """Test service is restarted/reloaded (systemd/timesyncd)"""
+        service = 'systemd-timesyncd'
+        cmd = ['systemctl', 'reload-or-restart', service]
+        cc_ntp.reload_ntp(service, systemd=True)
+        mock_util.subp.assert_called_with(cmd, capture=True)
+
     def test_ntp_rename_ntp_conf_skip_missing(self):
         """When NTP_CONF doesn't exist rename_ntp doesn't create a file."""
         ntpconf = self.tmp_path("ntp.conf", self.new_root)
@@ -94,7 +134,7 @@ class TestNtp(FilesystemMockingTestCase):
         with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
             stream.write(NTP_TEMPLATE)
         with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.write_ntp_config_template(cfg, mycloud)
+            cc_ntp.write_ntp_config_template(cfg, mycloud, ntp_conf)
         content = util.read_file_or_url('file://' + ntp_conf).contents
         self.assertEqual(
             "servers ['192.168.2.1', '192.168.2.2']\npools []\n",
@@ -120,7 +160,7 @@ class TestNtp(FilesystemMockingTestCase):
         with open('{0}.{1}.tmpl'.format(ntp_conf, distro), 'wb') as stream:
             stream.write(NTP_TEMPLATE)
         with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.write_ntp_config_template(cfg, mycloud)
+            cc_ntp.write_ntp_config_template(cfg, mycloud, ntp_conf)
         content = util.read_file_or_url('file://' + ntp_conf).contents
         self.assertEqual(
             "servers []\npools ['10.0.0.1', '10.0.0.2']\n",
@@ -139,7 +179,7 @@ class TestNtp(FilesystemMockingTestCase):
         with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
             stream.write(NTP_TEMPLATE)
         with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.write_ntp_config_template({}, mycloud)
+            cc_ntp.write_ntp_config_template({}, mycloud, ntp_conf)
         content = util.read_file_or_url('file://' + ntp_conf).contents
         default_pools = [
             "{0}.{1}.pool.ntp.org".format(x, distro)
@@ -152,7 +192,8 @@ class TestNtp(FilesystemMockingTestCase):
                 ",".join(default_pools)),
             self.logs.getvalue())
 
-    def test_ntp_handler_mocked_template(self):
+    @mock.patch("cloudinit.config.cc_ntp.ntp_installable")
+    def test_ntp_handler_mocked_template(self, m_ntp_install):
         """Test ntp handler renders ubuntu ntp.conf template."""
         pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org']
         servers = ['192.168.23.3', '192.168.23.4']
@@ -164,6 +205,8 @@ class TestNtp(FilesystemMockingTestCase):
         }
         mycloud = self._get_cloud('ubuntu')
         ntp_conf = self.tmp_path('ntp.conf', self.new_root)  # Doesn't exist
+        m_ntp_install.return_value = True
+
         # Create ntp.conf.tmpl
         with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
             stream.write(NTP_TEMPLATE)
@@ -176,6 +219,34 @@ class TestNtp(FilesystemMockingTestCase):
             'servers {0}\npools {1}\n'.format(servers, pools),
             content.decode())
 
+    @mock.patch("cloudinit.config.cc_ntp.util")
+    def test_ntp_handler_mocked_template_snappy(self, m_util):
+        """Test ntp handler renders timesycnd.conf template on snappy."""
+        pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org']
+        servers = ['192.168.23.3', '192.168.23.4']
+        cfg = {
+            'ntp': {
+                'pools': pools,
+                'servers': servers
+            }
+        }
+        mycloud = self._get_cloud('ubuntu')
+        m_util.system_is_snappy.return_value = True
+
+        # Create timesyncd.conf.tmpl
+        tsyncd_conf = self.tmp_path("timesyncd.conf", self.new_root)
+        template = '{0}.tmpl'.format(tsyncd_conf)
+        with open(template, 'wb') as stream:
+            stream.write(TIMESYNCD_TEMPLATE)
+
+        with mock.patch('cloudinit.config.cc_ntp.TIMESYNCD_CONF', tsyncd_conf):
+            cc_ntp.handle('notimportant', cfg, mycloud, None, None)
+
+        content = util.read_file_or_url('file://' + tsyncd_conf).contents
+        self.assertEqual(
+            "[Time]\nNTP=%s %s \n" % (" ".join(servers), " ".join(pools)),
+            content.decode())
+
     def test_ntp_handler_real_distro_templates(self):
         """Test ntp handler renders the shipped distro ntp.conf templates."""
         pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org']
@@ -333,4 +404,30 @@ class TestNtp(FilesystemMockingTestCase):
             "pools ['0.mypool.org', '0.mypool.org']\n",
             content)
 
+    @mock.patch("cloudinit.config.cc_ntp.ntp_installable")
+    def test_ntp_handler_timesyncd(self, m_ntp_install):
+        """Test ntp handler configures timesyncd"""
+        m_ntp_install.return_value = False
+        distro = 'ubuntu'
+        cfg = {
+            'servers': ['192.168.2.1', '192.168.2.2'],
+            'pools': ['0.mypool.org'],
+        }
+        mycloud = self._get_cloud(distro)
+        tsyncd_conf = self.tmp_path("timesyncd.conf", self.new_root)
+        # Create timesyncd.conf.tmpl
+        template = '{0}.tmpl'.format(tsyncd_conf)
+        with open(template, 'wb') as stream:
+            stream.write(TIMESYNCD_TEMPLATE)
+        with mock.patch('cloudinit.config.cc_ntp.TIMESYNCD_CONF', tsyncd_conf):
+            cc_ntp.write_ntp_config_template(cfg, mycloud, tsyncd_conf,
+                                             template='timesyncd.conf')
+
+        content = util.read_file_or_url('file://' + tsyncd_conf).contents
+        self.assertEqual(
+            "[Time]\nNTP=192.168.2.1 192.168.2.2 0.mypool.org \n",
+            content.decode())
+
+
 # vi: ts=4 expandtab
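
Of the behaviors covered above, reload_ntp reduces to a two-way command choice; 'reload-or-restart' is used on systemd hosts so a running daemon is reloaded rather than bounced:

    # The command selection the reload_ntp tests pin down.
    def reload_cmd(service, systemd=False):
        if systemd:
            return ['systemctl', 'reload-or-restart', service]
        return ['service', service, 'restart']

    assert reload_cmd('ntp') == ['service', 'ntp', 'restart']
    assert reload_cmd('systemd-timesyncd', systemd=True) == [
        'systemctl', 'reload-or-restart', 'systemd-timesyncd']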
diff --git a/tests/unittests/test_handler/test_handler_power_state.py b/tests/unittests/test_handler/test_handler_power_state.py
index e382210..85a0fe0 100644
--- a/tests/unittests/test_handler/test_handler_power_state.py
+++ b/tests/unittests/test_handler/test_handler_power_state.py
@@ -4,8 +4,8 @@ import sys
 
 from cloudinit.config import cc_power_state_change as psc
 
-from .. import helpers as t_help
-from ..helpers import mock
+from cloudinit.tests import helpers as t_help
+from cloudinit.tests.helpers import mock
 
 
 class TestLoadPowerState(t_help.TestCase):
diff --git a/tests/unittests/test_handler/test_handler_puppet.py b/tests/unittests/test_handler/test_handler_puppet.py
new file mode 100644
index 0000000..0b6e3b5
--- /dev/null
+++ b/tests/unittests/test_handler/test_handler_puppet.py
@@ -0,0 +1,142 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.config import cc_puppet
+from cloudinit.sources import DataSourceNone
+from cloudinit import (distros, helpers, cloud, util)
+from cloudinit.tests.helpers import CiTestCase, mock
+
+import logging
+
+
+LOG = logging.getLogger(__name__)
+
+
+@mock.patch('cloudinit.config.cc_puppet.util')
+@mock.patch('cloudinit.config.cc_puppet.os')
+class TestAutostartPuppet(CiTestCase):
+
+    with_logs = True
+
+    def test_wb_autostart_puppet_updates_puppet_default(self, m_os, m_util):
+        """Update /etc/default/puppet to autostart if it exists."""
+
+        def _fake_exists(path):
+            return path == '/etc/default/puppet'
+
+        m_os.path.exists.side_effect = _fake_exists
+        cc_puppet._autostart_puppet(LOG)
+        self.assertEqual(
+            [mock.call(['sed', '-i', '-e', 's/^START=.*/START=yes/',
+                        '/etc/default/puppet'], capture=False)],
+            m_util.subp.call_args_list)
+
+    def test_wb_autostart_puppet_enables_puppet_systemctl(self, m_os, m_util):
+        """If systemctl is present, enable puppet via systemctl."""
+
+        def _fake_exists(path):
+            return path == '/bin/systemctl'
+
+        m_os.path.exists.side_effect = _fake_exists
+        cc_puppet._autostart_puppet(LOG)
+        expected_calls = [mock.call(
+            ['/bin/systemctl', 'enable', 'puppet.service'], capture=False)]
+        self.assertEqual(expected_calls, m_util.subp.call_args_list)
+
+    def test_wb_autostart_puppet_enables_puppet_chkconfig(self, m_os, m_util):
+        """If chkconfig is present, enable puppet via chkconfig."""
+
+        def _fake_exists(path):
+            return path == '/sbin/chkconfig'
+
+        m_os.path.exists.side_effect = _fake_exists
+        cc_puppet._autostart_puppet(LOG)
+        expected_calls = [mock.call(
+            ['/sbin/chkconfig', 'puppet', 'on'], capture=False)]
+        self.assertEqual(expected_calls, m_util.subp.call_args_list)
+
+
+@mock.patch('cloudinit.config.cc_puppet._autostart_puppet')
+class TestPuppetHandle(CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestPuppetHandle, self).setUp()
+        self.new_root = self.tmp_dir()
+        self.conf = self.tmp_path('puppet.conf')
+
+    def _get_cloud(self, distro):
+        paths = helpers.Paths({'templates_dir': self.new_root})
+        cls = distros.fetch(distro)
+        mydist = cls(distro, {}, paths)
+        myds = DataSourceNone.DataSourceNone({}, mydist, paths)
+        return cloud.Cloud(myds, paths, {}, mydist, None)
+
+    def test_handler_skips_missing_puppet_key_in_cloudconfig(self, m_auto):
+        """Cloud-config containing no 'puppet' key is skipped."""
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {}
+        cc_puppet.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertIn(
+            "no 'puppet' configuration found", self.logs.getvalue())
+        self.assertEqual(0, m_auto.call_count)
+
+    @mock.patch('cloudinit.config.cc_puppet.util.subp')
+    def test_handler_puppet_config_starts_puppet_service(self, m_subp, m_auto):
+        """Cloud-config 'puppet' configuration starts puppet."""
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {'puppet': {'install': False}}
+        cc_puppet.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(1, m_auto.call_count)
+        self.assertEqual(
+            [mock.call(['service', 'puppet', 'start'], capture=False)],
+            m_subp.call_args_list)
+
+    @mock.patch('cloudinit.config.cc_puppet.util.subp')
+    def test_handler_empty_puppet_config_installs_puppet(self, m_subp, m_auto):
+        """Cloud-config empty 'puppet' configuration installs latest puppet."""
+        mycloud = self._get_cloud('ubuntu')
+        mycloud.distro = mock.MagicMock()
+        cfg = {'puppet': {}}
+        cc_puppet.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(
+            [mock.call(('puppet', None))],
+            mycloud.distro.install_packages.call_args_list)
+
+    @mock.patch('cloudinit.config.cc_puppet.util.subp')
+    def test_handler_puppet_config_installs_puppet_on_true(self, m_subp, _):
+        """Cloud-config with 'puppet' key installs when 'install' is True."""
+        mycloud = self._get_cloud('ubuntu')
+        mycloud.distro = mock.MagicMock()
+        cfg = {'puppet': {'install': True}}
+        cc_puppet.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(
+            [mock.call(('puppet', None))],
+            mycloud.distro.install_packages.call_args_list)
+
+    @mock.patch('cloudinit.config.cc_puppet.util.subp')
+    def test_handler_puppet_config_installs_puppet_version(self, m_subp, _):
+        """Cloud-config 'puppet' configuration can specify a version."""
+        mycloud = self._get_cloud('ubuntu')
+        mycloud.distro = mock.MagicMock()
+        cfg = {'puppet': {'version': '3.8'}}
+        cc_puppet.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertEqual(
+            [mock.call(('puppet', '3.8'))],
+            mycloud.distro.install_packages.call_args_list)
+
+    @mock.patch('cloudinit.config.cc_puppet.util.subp')
+    def test_handler_puppet_config_updates_puppet_conf(self, m_subp, m_auto):
+        """When 'conf' is provided update values in PUPPET_CONF_PATH."""
+        mycloud = self._get_cloud('ubuntu')
+        cfg = {
+            'puppet': {
+                'conf': {'agent': {'server': 'puppetmaster.example.org'}}}}
+        util.write_file(self.conf, '[agent]\nserver = origpuppet\nother = 3')
+        puppet_conf_path = 'cloudinit.config.cc_puppet.PUPPET_CONF_PATH'
+        mycloud.distro = mock.MagicMock()
+        with mock.patch(puppet_conf_path, self.conf):
+            cc_puppet.handle('notimportant', cfg, mycloud, LOG, None)
+        content = util.load_file(self.conf)
+        expected = '[agent]\nserver = puppetmaster.example.org\nother = 3\n\n'
+        self.assertEqual(expected, content)
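The final test above captures the 'conf' semantics: each section/key from cloud-config is folded into the existing puppet.conf, and untouched keys survive. A configparser-based sketch of the same merge (cloud-init uses its own config-parser helper; this is illustration only):

    import configparser
    import io

    def update_puppet_conf(existing_text, conf):
        parser = configparser.ConfigParser()
        parser.read_string(existing_text)
        for section, settings in conf.items():
            if not parser.has_section(section):
                parser.add_section(section)
            for key, value in settings.items():
                parser.set(section, key, str(value))
        rendered = io.StringIO()
        parser.write(rendered)
        return rendered.getvalue()

    # Produces the same result asserted above:
    # '[agent]\nserver = puppetmaster.example.org\nother = 3\n\n'
    print(update_puppet_conf(
        '[agent]\nserver = origpuppet\nother = 3',
        {'agent': {'server': 'puppetmaster.example.org'}}))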
diff --git a/tests/unittests/test_handler/test_handler_resizefs.py b/tests/unittests/test_handler/test_handler_resizefs.py
index 52591b8..3e5d436 100644
--- a/tests/unittests/test_handler/test_handler_resizefs.py
+++ b/tests/unittests/test_handler/test_handler_resizefs.py
@@ -1,17 +1,30 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.config import cc_resizefs
+from cloudinit.config.cc_resizefs import (
+    can_skip_resize, handle, is_device_path_writable_block,
+    rootdev_from_cmdline)
 
+import logging
 import textwrap
-import unittest
+
+from cloudinit.tests.helpers import (CiTestCase, mock, skipIf, util,
+                                     wrap_and_call)
+
+
+LOG = logging.getLogger(__name__)
+
 
 try:
-    from unittest import mock
+    import jsonschema
+    assert jsonschema  # avoid pyflakes error F401: import unused
+    _missing_jsonschema_dep = False
 except ImportError:
-    import mock
+    _missing_jsonschema_dep = True
+
 
+class TestResizefs(CiTestCase):
+    with_logs = True
 
-class TestResizefs(unittest.TestCase):
     def setUp(self):
         super(TestResizefs, self).setUp()
         self.name = "resizefs"
@@ -34,7 +47,7 @@ class TestResizefs(unittest.TestCase):
               58720296   3145728    3  freebsd-swap  (1.5G)
               61866024   1048496       - free -  (512M)
             """)
-        res = cc_resizefs.can_skip_resize(fs_type, resize_what, devpth)
+        res = can_skip_resize(fs_type, resize_what, devpth)
         self.assertTrue(res)
 
     @mock.patch('cloudinit.config.cc_resizefs._get_dumpfs_output')
@@ -52,8 +65,210 @@ class TestResizefs(unittest.TestCase):
             =>      34  297086  da0  GPT  (145M)
                     34  297086    1  freebsd-ufs  (145M)
             """)
-        res = cc_resizefs.can_skip_resize(fs_type, resize_what, devpth)
+        res = can_skip_resize(fs_type, resize_what, devpth)
         self.assertTrue(res)
 
+    def test_handle_noops_on_disabled(self):
+        """The handle function logs when the configuration disables resize."""
+        cfg = {'resize_rootfs': False}
+        handle('cc_resizefs', cfg, _cloud=None, log=LOG, args=[])
+        self.assertIn(
+            'DEBUG: Skipping module named cc_resizefs, resizing disabled\n',
+            self.logs.getvalue())
+
+    @skipIf(_missing_jsonschema_dep, "No python-jsonschema dependency")
+    def test_handle_schema_validation_logs_invalid_resize_rootfs_value(self):
+        """The handle reports json schema violations as a warning.
+
+        Invalid values for resize_rootfs result in disabling the module.
+        """
+        cfg = {'resize_rootfs': 'junk'}
+        handle('cc_resizefs', cfg, _cloud=None, log=LOG, args=[])
+        logs = self.logs.getvalue()
+        self.assertIn(
+            "WARNING: Invalid config:\nresize_rootfs: 'junk' is not one of"
+            " [True, False, 'noblock']",
+            logs)
+        self.assertIn(
+            'DEBUG: Skipping module named cc_resizefs, resizing disabled\n',
+            logs)
+
+    @mock.patch('cloudinit.config.cc_resizefs.util.get_mount_info')
+    def test_handle_warns_on_unknown_mount_info(self, m_get_mount_info):
+        """handle warns when get_mount_info sees unknown filesystem for /."""
+        m_get_mount_info.return_value = None
+        cfg = {'resize_rootfs': True}
+        handle('cc_resizefs', cfg, _cloud=None, log=LOG, args=[])
+        logs = self.logs.getvalue()
+        self.assertNotIn("WARNING: Invalid config:\nresize_rootfs:", logs)
+        self.assertIn(
+            'WARNING: Could not determine filesystem type of /\n',
+            logs)
+        self.assertEqual(
+            [mock.call('/', LOG)],
+            m_get_mount_info.call_args_list)
+
+    def test_handle_warns_on_undiscoverable_root_path_in_commandline(self):
+        """handle noops when the root path is not found on the commandline."""
+        cfg = {'resize_rootfs': True}
+        exists_mock_path = 'cloudinit.config.cc_resizefs.os.path.exists'
+
+        def fake_mount_info(path, log):
+            self.assertEqual('/', path)
+            self.assertEqual(LOG, log)
+            return ('/dev/root', 'ext4', '/')
+
+        with mock.patch(exists_mock_path) as m_exists:
+            m_exists.return_value = False
+            wrap_and_call(
+                'cloudinit.config.cc_resizefs.util',
+                {'is_container': {'return_value': False},
+                 'get_mount_info': {'side_effect': fake_mount_info},
+                 'get_cmdline': {'return_value': 'BOOT_IMAGE=/vmlinuz.efi'}},
+                handle, 'cc_resizefs', cfg, _cloud=None, log=LOG,
+                args=[])
+        logs = self.logs.getvalue()
+        self.assertIn("WARNING: Unable to find device '/dev/root'", logs)
+
+
+class TestRootDevFromCmdline(CiTestCase):
+
+    def test_rootdev_from_cmdline_with_no_root(self):
+        """Return None from rootdev_from_cmdline when root is not present."""
+        invalid_cases = [
+            'BOOT_IMAGE=/adsf asdfa werasef  root adf', 'BOOT_IMAGE=/adsf', '']
+        for case in invalid_cases:
+            self.assertIsNone(rootdev_from_cmdline(case))
+
+    def test_rootdev_from_cmdline_with_root_startswith_dev(self):
+        """Return the cmdline root when the path starts with /dev."""
+        self.assertEqual(
+            '/dev/this', rootdev_from_cmdline('asdf root=/dev/this'))
+
+    def test_rootdev_from_cmdline_with_root_without_dev_prefix(self):
+        """Add /dev prefix to cmdline root when the path lacks the prefix."""
+        self.assertEqual('/dev/this', rootdev_from_cmdline('asdf root=this'))
+
+    def test_rootdev_from_cmdline_with_root_with_label(self):
+        """When cmdline root contains a LABEL, our root is disk/by-label."""
+        self.assertEqual(
+            '/dev/disk/by-label/unique',
+            rootdev_from_cmdline('asdf root=LABEL=unique'))
+
+    def test_rootdev_from_cmdline_with_root_with_uuid(self):
+        """When cmdline root contains a UUID, our root is disk/by-uuid."""
+        self.assertEqual(
+            '/dev/disk/by-uuid/adsfdsaf-adsf',
+            rootdev_from_cmdline('asdf root=UUID=adsfdsaf-adsf'))
+
+
+class TestIsDevicePathWritableBlock(CiTestCase):
+
+    with_logs = True
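+    # with_logs=True has CiTestCase capture log output in self.logs.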
+
+    def test_is_device_path_writable_block_false_on_overlayroot(self):
+        """When devpath is overlayroot (on MAAS), is_dev_writable is False."""
+        info = 'does not matter'
+        is_writable = wrap_and_call(
+            'cloudinit.config.cc_resizefs.util',
+            {'is_container': {'return_value': False}},
+            is_device_path_writable_block, 'overlayroot', info, LOG)
+        self.assertFalse(is_writable)
+        self.assertIn(
+            "Not attempting to resize devpath 'overlayroot'",
+            self.logs.getvalue())
+
+    def test_is_device_path_writable_block_warns_missing_cmdline_root(self):
+        """When root does not exist isn't in the cmdline, log warning."""
+        info = 'does not matter'
+
+        def fake_mount_info(path, log):
+            self.assertEqual('/', path)
+            self.assertEqual(LOG, log)
+            return ('/dev/root', 'ext4', '/')
+
+        exists_mock_path = 'cloudinit.config.cc_resizefs.os.path.exists'
+        with mock.patch(exists_mock_path) as m_exists:
+            m_exists.return_value = False
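+            # The mocked cmdline has no root= entry, so the devpath cannot
+            # be resolved and a warning should be logged.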
+            is_writable = wrap_and_call(
+                'cloudinit.config.cc_resizefs.util',
+                {'is_container': {'return_value': False},
+                 'get_mount_info': {'side_effect': fake_mount_info},
+                 'get_cmdline': {'return_value': 'BOOT_IMAGE=/vmlinuz.efi'}},
+                is_device_path_writable_block, '/dev/root', info, LOG)
+        self.assertFalse(is_writable)
+        logs = self.logs.getvalue()
+        self.assertIn("WARNING: Unable to find device '/dev/root'", logs)
+
+    def test_is_device_path_writable_block_does_not_exist(self):
+        """When devpath does not exist, a warning is logged."""
+        info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
+        is_writable = wrap_and_call(
+            'cloudinit.config.cc_resizefs.util',
+            {'is_container': {'return_value': False}},
+            is_device_path_writable_block, '/I/dont/exist', info, LOG)
+        self.assertFalse(is_writable)
+        self.assertIn(
+            "WARNING: Device '/I/dont/exist' did not exist."
+            ' cannot resize: %s' % info,
+            self.logs.getvalue())
+
+    def test_is_device_path_writable_block_does_not_exist_in_container(self):
+        """When devpath does not exist in a container, log a debug message."""
+        info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
+        is_writable = wrap_and_call(
+            'cloudinit.config.cc_resizefs.util',
+            {'is_container': {'return_value': True}},
+            is_device_path_writable_block, '/I/dont/exist', info, LOG)
+        self.assertFalse(is_writable)
+        self.assertIn(
+            "DEBUG: Device '/I/dont/exist' did not exist in container."
+            ' cannot resize: %s' % info,
+            self.logs.getvalue())
+
+    def test_is_device_path_writable_block_raises_oserror(self):
+        """When unexpected OSError is raises by os.stat it is reraised."""
+        info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
+        with self.assertRaises(OSError) as context_manager:
+            wrap_and_call(
+                'cloudinit.config.cc_resizefs',
+                {'util.is_container': {'return_value': True},
+                 'os.stat': {'side_effect': OSError('Something unexpected')}},
+                is_device_path_writable_block, '/I/dont/exist', info, LOG)
+        self.assertEqual(
+            'Something unexpected', str(context_manager.exception))
+
+    def test_is_device_path_writable_block_non_block(self):
+        """When device is not a block device, emit warning return False."""
+        fake_devpath = self.tmp_path('dev/readwrite')
+        util.write_file(fake_devpath, '', mode=0o600)  # read-write
+        info = 'dev=/dev/root mnt_point=/ path={0}'.format(fake_devpath)
+
+        is_writable = wrap_and_call(
+            'cloudinit.config.cc_resizefs.util',
+            {'is_container': {'return_value': False}},
+            is_device_path_writable_block, fake_devpath, info, LOG)
+        self.assertFalse(is_writable)
+        self.assertIn(
+            "WARNING: device '{0}' not a block device. cannot resize".format(
+                fake_devpath),
+            self.logs.getvalue())
+
+    def test_is_device_path_writable_block_non_block_on_container(self):
+        """When device is non-block device in container, emit debug log."""
+        fake_devpath = self.tmp_path('dev/readwrite')
+        util.write_file(fake_devpath, '', mode=0o600)  # read-write
+        info = 'dev=/dev/root mnt_point=/ path={0}'.format(fake_devpath)
+
+        is_writable = wrap_and_call(
+            'cloudinit.config.cc_resizefs.util',
+            {'is_container': {'return_value': True}},
+            is_device_path_writable_block, fake_devpath, info, LOG)
+        self.assertFalse(is_writable)
+        self.assertIn(
+            "DEBUG: device '{0}' not a block device in container."
+            ' cannot resize'.format(fake_devpath),
+            self.logs.getvalue())
+
 
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_handler/test_handler_rsyslog.py b/tests/unittests/test_handler/test_handler_rsyslog.py
index cca0667..8c8e283 100644
--- a/tests/unittests/test_handler/test_handler_rsyslog.py
+++ b/tests/unittests/test_handler/test_handler_rsyslog.py
@@ -9,7 +9,7 @@ from cloudinit.config.cc_rsyslog import (
     parse_remotes_line, remotes_to_rsyslog_cfg)
 from cloudinit import util
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 
 class TestLoadConfig(t_help.TestCase):
diff --git a/tests/unittests/test_handler/test_handler_runcmd.py b/tests/unittests/test_handler/test_handler_runcmd.py
new file mode 100644
index 0000000..374c1d3
--- /dev/null
+++ b/tests/unittests/test_handler/test_handler_runcmd.py
@@ -0,0 +1,108 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.config import cc_runcmd
+from cloudinit.sources import DataSourceNone
+from cloudinit import (distros, helpers, cloud, util)
+from cloudinit.tests.helpers import FilesystemMockingTestCase, skipIf
+
+import logging
+import os
+import stat
+
+try:
+    import jsonschema
+    assert jsonschema  # avoid pyflakes error F401: import unused
+    _missing_jsonschema_dep = False
+except ImportError:
+    _missing_jsonschema_dep = True
+
+LOG = logging.getLogger(__name__)
+
+
+class TestRuncmd(FilesystemMockingTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestRuncmd, self).setUp()
+        self.subp = util.subp
+        self.new_root = self.tmp_dir()
+
+    def _get_cloud(self, distro):
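+        # Build a minimal Cloud backed by DataSourceNone so handlers write
+        # scripts under the temporary self.new_root, not the real system.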
+        self.patchUtils(self.new_root)
+        paths = helpers.Paths({'scripts': self.new_root})
+        cls = distros.fetch(distro)
+        mydist = cls(distro, {}, paths)
+        myds = DataSourceNone.DataSourceNone({}, mydist, paths)
+        paths.datasource = myds
+        return cloud.Cloud(myds, paths, {}, mydist, None)
+
+    def test_handler_skip_if_no_runcmd(self):
+        """When the provided config doesn't contain runcmd, skip it."""
+        cfg = {}
+        mycloud = self._get_cloud('ubuntu')
+        cc_runcmd.handle('notimportant', cfg, mycloud, LOG, None)
+        self.assertIn(
+            "Skipping module named notimportant, no 'runcmd' key",
+            self.logs.getvalue())
+
+    def test_handler_invalid_command_set(self):
+        """Commands which can't be converted to shell will raise errors."""
+        invalid_config = {'runcmd': 1}
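+        # runcmd must be a list; an int cannot be shellified into a script.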
+        cc = self._get_cloud('ubuntu')
+        cc_runcmd.handle('cc_runcmd', invalid_config, cc, LOG, [])
+        self.assertIn(
+            'Failed to shellify 1 into file'
+            ' /var/lib/cloud/instances/iid-datasource-none/scripts/runcmd',
+            self.logs.getvalue())
+
+    @skipIf(_missing_jsonschema_dep, "No python-jsonschema dependency")
+    def test_handler_schema_validation_warns_non_array_type(self):
+        """Schema validation warns of non-array type for runcmd key.
+
+        Schema validation is not strict, so runcmd attempts to shellify the
+        invalid content.
+        """
+        invalid_config = {'runcmd': 1}
+        cc = self._get_cloud('ubuntu')
+        cc_runcmd.handle('cc_runcmd', invalid_config, cc, LOG, [])
+        self.assertIn(
+            'Invalid config:\nruncmd: 1 is not of type \'array\'',
+            self.logs.getvalue())
+        self.assertIn('Failed to shellify', self.logs.getvalue())
+
+    @skipIf(_missing_jsonschema_dep, 'No python-jsonschema dependency')
+    def test_handler_schema_validation_warns_non_array_item_type(self):
+        """Schema validation warns of non-array or string runcmd items.
+
+        Schema validation is not strict, so runcmd attempts to shellify the
+        invalid content.
+        """
+        invalid_config = {
+            'runcmd': ['ls /', 20, ['wget', 'http://stuff/blah'], {'a': 'n'}]}
+        cc = self._get_cloud('ubuntu')
+        cc_runcmd.handle('cc_runcmd', invalid_config, cc, LOG, [])
+        expected_warnings = [
+            'runcmd.1: 20 is not valid under any of the given schemas',
+            'runcmd.3: {\'a\': \'n\'} is not valid under any of the given'
+            ' schema'
+        ]
+        logs = self.logs.getvalue()
+        for warning in expected_warnings:
+            self.assertIn(warning, logs)
+        self.assertIn('Failed to shellify', logs)
+
+    def test_handler_write_valid_runcmd_schema_to_file(self):
+        """Valid runcmd schema is written to a runcmd shell script."""
+        valid_config = {'runcmd': [['ls', '/']]}
+        cc = self._get_cloud('ubuntu')
+        cc_runcmd.handle('cc_runcmd', valid_config, cc, LOG, [])
+        runcmd_file = os.path.join(
+            self.new_root,
+            'var/lib/cloud/instances/iid-datasource-none/scripts/runcmd')
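+        # shellify renders the list-form command as quoted shell words under
+        # a #!/bin/sh shebang; handle writes the script with mode 0o700.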
+        self.assertEqual("#!/bin/sh\n'ls' '/'\n", util.load_file(runcmd_file))
+        file_stat = os.stat(runcmd_file)
+        self.assertEqual(0o700, stat.S_IMODE(file_stat.st_mode))
+
+
+# vi: ts=4 expandtab
diff --git a/tests/unittests/test_handler/test_handler_seed_random.py b/tests/unittests/test_handler/test_handler_seed_random.py
index e5e607f..f60dedc 100644
--- a/tests/unittests/test_handler/test_handler_seed_random.py
+++ b/tests/unittests/test_handler/test_handler_seed_random.py
@@ -22,7 +22,7 @@ from cloudinit import util
 
 from cloudinit.sources import DataSourceNone
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 import logging
 
diff --git a/tests/unittests/test_handler/test_handler_set_hostname.py b/tests/unittests/test_handler/test_handler_set_hostname.py
index 4b18de7..abdc17e 100644
--- a/tests/unittests/test_handler/test_handler_set_hostname.py
+++ b/tests/unittests/test_handler/test_handler_set_hostname.py
@@ -7,7 +7,7 @@ from cloudinit import distros
 from cloudinit import helpers
 from cloudinit import util
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 from configobj import ConfigObj
 import logging
@@ -70,7 +70,8 @@ class TestHostname(t_help.FilesystemMockingTestCase):
         cc = cloud.Cloud(ds, paths, {}, distro, None)
         self.patchUtils(self.tmp)
         cc_set_hostname.handle('cc_set_hostname', cfg, cc, LOG, [])
-        contents = util.load_file("/etc/HOSTNAME")
-        self.assertEqual('blah', contents.strip())
+        if not distro.uses_systemd():
+            contents = util.load_file(distro.hostname_conf_fn)
+            self.assertEqual('blah', contents.strip())
 
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_handler/test_handler_snappy.py b/tests/unittests/test_handler/test_handler_snappy.py
index e4d0762..76b79c2 100644
--- a/tests/unittests/test_handler/test_handler_snappy.py
+++ b/tests/unittests/test_handler/test_handler_snappy.py
@@ -7,9 +7,9 @@ from cloudinit.config.cc_snap_config import (
 from cloudinit import (distros, helpers, cloud, util)
 from cloudinit.config.cc_snap_config import handle as snap_handle
 from cloudinit.sources import DataSourceNone
-from ..helpers import FilesystemMockingTestCase, mock
+from cloudinit.tests.helpers import FilesystemMockingTestCase, mock
 
-from .. import helpers as t_help
+from cloudinit.tests import helpers as t_help
 
 import logging
 import os
diff --git a/tests/unittests/test_handler/test_handler_spacewalk.py b/tests/unittests/test_handler/test_handler_spacewalk.py
index 28b5892..ddbf4a7 100644
--- a/tests/unittests/test_handler/test_handler_spacewalk.py
+++ b/tests/unittests/test_handler/test_handler_spacewalk.py
@@ -3,