
sts-sponsors team mailing list archive

[Merge] ~dmascialino/maas-ci/+git/system-tests:allow_list_of_ppa into maas-ci:maas-submodules-sync

 

Diego Mascialino has proposed merging ~dmascialino/maas-ci/+git/system-tests:allow_list_of_ppa into maas-ci:maas-submodules-sync.

Commit message:
Allow list of PPAs

Requested reviews:
  MAAS Committers (maas-committers)

For more details, see:
https://code.launchpad.net/~dmascialino/maas-ci/+git/system-tests/+merge/433752

Our utils/gen_config.py generates a list of PPAs.

This MP fixes the system-tests code to use it correctly.
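For context, the `deb` section of `config.yaml.sample` in this branch already expresses PPAs as a list:

```yaml
deb:
    ppa:
        - ppa:maas-committers/latest-deps
    git_repo: https://git.launchpad.net/maas
    git_branch: master
```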
-- 
Your team MAAS Committers is requested to review the proposed merge of ~dmascialino/maas-ci/+git/system-tests:allow_list_of_ppa into maas-ci:maas-submodules-sync.
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..a79f7fb
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,9 @@
+*.egg-info
+.idea
+.tox
+__pycache__
+config.yaml
+credentials.yaml
+junit*.xml
+sosreport
+systemtests*.log
diff --git a/.launchpad.yaml b/.launchpad.yaml
new file mode 100644
index 0000000..49238e5
--- /dev/null
+++ b/.launchpad.yaml
@@ -0,0 +1,10 @@
+---
+pipeline:
+  - lint
+
+jobs:
+  lint:
+    series: jammy
+    architectures: amd64
+    packages: [git, tox]
+    run: tox -e lint,mypy
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..2a998fc
--- /dev/null
+++ b/README.md
@@ -0,0 +1,115 @@
+# MAAS System Tests
+
+MAAS System Tests are run using `tox` which in turn uses `pytest` to execute the tests.
+
+# Setting up local environment
+
+MAAS System Tests use LXD, so if you don't have LXD installed, install it first:
+
+```bash
+snap install lxd
+# don't forget to init lxd (defaults should work)
+lxd init
+```
+
+Once LXD is installed, review the contents of [configure_lxd.sh](lxd_configs/configure_lxd.sh) and run it.
+You might want to disable VM creation (e.g. if you are using Apple M1 and there is no nested hypervisor support in your Linux VM).
+
+```bash
+cd lxd_configs
+./configure_lxd.sh
+```
+
+Copy [config.yaml.sample](config.yaml.sample) to `config.yaml` and review the contents, then regenerate the cog-maintained sections of this README:
+
+```bash
+tox -e cog
+```
+
+## Run tests
+
+```bash
+tox -e env_builder,general_tests
+```
+
+You can also pass flags such as `-vvv` for verbose log output or `--pdb` to drop into the debugger on failure.
+
+## Test suites
+
+<!-- [[[cog
+import cog
+from pathlib import Path
+import pydoc
+import textwrap
+inits = sorted(Path('.').glob('systemtests/*/__init__.py'))
+print("Generating doc for {}.".format(', '.join(init.parent.name for init in inits)))
+cog.outl(f"We have {len(inits)} test suites:")
+for init in inits:
+	package = init.parent.name
+	module = pydoc.importfile(str(init))
+	docstring = textwrap.fill(module.__doc__, 80)
+	cog.outl(f" - `{package}`: {docstring}\n")
+]]] -->
+We have 4 test suites:
+ - `collect_sos_report`: Collect an SOS report from the test run.
+
+ - `env_builder`:  Prepares a container with a running MAAS ready to run the tests, and writes a
+credentials.yaml to allow a MAAS client to use it.
+
+ - `general_tests`:  Uses credentials.yaml info to access a running MAAS deployment, asserts the
+state is useful to these tests and executes them.
+
+ - `tests_per_machine`: Contains tests that are per machine, run in parallel by tox for efficiency.
+
+<!-- [[[end]]] -->
+
+## Environment variables
+
+The behaviour of the tests can be configured using environment variables:
+
+| Name                                | Description                                                                   |
+| ----------------                    | -----------                                                                   |
+| `MAAS_SYSTEMTESTS_BUILD_CONTAINER`  | LXD container name to use for building MAAS, defaults to `maas-system-build`  |
+| `MAAS_SYSTEMTESTS_MAAS_CONTAINER`   | LXD container name to use for running MAAS, defaults to `maas-system-maas`    |
+| `MAAS_SYSTEMTESTS_CLIENT_CONTAINER` | LXD container name to use for running MAAS client, defaults to `maas-client`  |
+| `MAAS_SYSTEMTESTS_LXD_PROFILE`      | LXD profile for System Tests containers, defaults to `prof-maas-lab`          |
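The defaults in the table are resolved by a plain environment lookup. A minimal sketch of that resolution (the `container_name` helper is illustrative, not part of the test suite):

```python
import os

# Defaults documented in the table above.
_DEFAULTS = {
    "MAAS_SYSTEMTESTS_BUILD_CONTAINER": "maas-system-build",
    "MAAS_SYSTEMTESTS_MAAS_CONTAINER": "maas-system-maas",
    "MAAS_SYSTEMTESTS_CLIENT_CONTAINER": "maas-client",
    "MAAS_SYSTEMTESTS_LXD_PROFILE": "prof-maas-lab",
}


def container_name(var: str) -> str:
    """Return the configured value for var, falling back to the documented default."""
    return os.environ.get(var, _DEFAULTS[var])
```

Exporting any of these variables before running `tox` overrides the corresponding default.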
+
+## Containers
+
+The tests use three LXD containers:
+
+ * The build container
+ * The MAAS container
+ * The MAAS client
+
+As the names suggest, MAAS is built in the build container and installed in the MAAS container, then used by the MAAS client.
+
+The MAAS container is bridged onto the networks defined in `config.yaml`, which is needed so that the MAAS in the MAAS container can provide DHCP to those networks.
+
+If you want to do a clean run, delete existing containers first.
+
+TODO: Automate container removal, so we don't need to keep this up to date. Maybe a tox target?
+```bash
+for param in maas-system-build maas-system-maas maas-client; do lxc delete --force $param; done
+```
+
+## Known issues
+
+### LXD container is not getting IP address and has no internet connection
+
+[Bug 1594317](https://bugs.launchpad.net/maas/+bug/1594317)
+
+Check your LXD logs. If you see `dnsmasq` errors like:
+```
+2022-04-26T14:06:44Z lxd.daemon[1041889]: time="2022-04-26T14:06:44Z" level=error msg="The dnsmasq process exited prematurely" driver=bridge err="Process exited with non-zero value 2" network=net-lab project=default stderr="dnsmasq: failed to create listening socket for 10.0.1.1: Address already in use"
+```
+
+Check that you don't have a MAAS instance running on the same machine. If you do, stop it.
+You might also need to stop the `named` service if it is running.
+
+Ensure that IPv4 DHCP is enabled on the LXD network:
+```
+lxc network show lxdbr0
+```
+
+You should have `ipv4.dhcp: "true"` in the output. If you don't, add it via `lxc network edit lxdbr0`.
diff --git a/config.yaml.sample b/config.yaml.sample
new file mode 100644
index 0000000..e9648fe
--- /dev/null
+++ b/config.yaml.sample
@@ -0,0 +1,119 @@
+---
+# Proxy settings
+proxy:
+    use_internal: true
+# The list of subnets that will be part of the MAAS deployment. MAAS
+# will provide DHCP for all the listed networks.
+networks:
+    pxe:
+        cidr: 10.0.2.0/24
+        bridge: net-test
+        dynamic:
+            start: 10.0.2.20
+            end: 10.0.2.100
+        reserved:
+            - start: 10.0.2.1
+              end: 10.0.2.19
+              comment: Reserved
+            - start: 10.0.2.101
+              end: 10.0.2.200
+              comment: BMCs
+# The MAAS deployment. Currently only one container is created that has
+# both the region and rack in it. In the future we want to extend this
+# so that you can specify multiple containers, creating a HA setup, or
+# isolate the VLANs in use.
+maas:
+    networks:
+        pxe: 10.0.2.2
+    config:
+        # These values will be set with the `maas set-config` command
+        upstream_dns: 8.8.8.8
+        dnssec_validation: "no"
+    domain_name: systemtestsmaas
+
+
+machines:
+    hardware:
+        opelt:
+            mac_address: 18:66:da:6d:fb:3c
+            power_type: ipmi
+            power_parameters:
+                power_driver: LAN_2_0
+                power_address: 10.245.143.121
+                power_username: ...
+                power_password: ...
+        stunky:
+            osystem: windows  # optional, ubuntu is the default.
+            mac_address: ec:b1:d7:7f:ef:34
+            power_type: ipmi
+            power_parameters:
+                power_driver: LAN_2_0
+                power_address: 10.245.143.127
+                power_username: ...
+                power_password: ...
+    vms:
+        instances:
+            vm1:
+                mac_address: 00:16:3e:c6:fe:62
+                devices:
+                    disk1:
+                        type: disk
+                        source: /tmp/test-dir
+                        size: 1MB
+                        path: /mnt/ext2
+                    iface1:
+                        type: nic
+                        nictype: bridged
+                        parent: net-test
+                        name: eth1
+                        hwaddr: 00:16:3e:83:4e:4d
+                    usb1:
+                        type: usb
+                        productid: ff20
+                    pci1:
+                        type: pci
+                        address: '2d:00.3'
+        power_type: lxd
+        power_parameters:
+            power_address: https://10.0.1.1:8443
+            project: default
+            certificate: |
+                            -----BEGIN CERTIFICATE-----
+                            ...
+                            -----END CERTIFICATE-----
+            key: |
+                    -----BEGIN PRIVATE KEY-----
+                    ....
+                    -----END PRIVATE KEY-----
+
+deb:
+    ppa:
+        - ppa:maas-committers/latest-deps
+    git_repo: https://git.launchpad.net/maas
+    git_branch: master
+
+# If you want to test installing maas from a snap
+# Use this section.
+snap:
+    maas_channel: latest/edge
+    test_db_channel: latest/edge
+
+tls:
+    # MAAS installed via env_builder uses a self-signed ssl-cert.
+    # If you are testing a remote MAAS with TLS enabled, a cacert
+    # for that domain is required.
+    cacerts: ssl-cert-snakeoil.pem
+
+# To test MAAS vault integration
+vault:
+    snap-channel: 1.10/stable
+
+containers-image: ubuntu:20.04
+
+windows_image_file_path: >
+    /home/ubuntu/other-os-images/windows-win2012hvr2-amd64-root-dd
+
+o11y:
+    grafana_agent_file_path: >
+        /home/diego/canonical/agent-linux-amd64_0_24_2
+    o11y_ip: 10.245.136.5
diff --git a/credentials.yaml.sample b/credentials.yaml.sample
new file mode 100644
index 0000000..6f20c4a
--- /dev/null
+++ b/credentials.yaml.sample
@@ -0,0 +1,4 @@
+---
+region_url: http://10.245.136.7:5240/MAAS
+api_key: ...
+snap_channel: latest/edge
diff --git a/lxd_configs/configure_lxd.sh b/lxd_configs/configure_lxd.sh
new file mode 100644
index 0000000..27fe4e6
--- /dev/null
+++ b/lxd_configs/configure_lxd.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+set -e -x
+
+# This is a possible LXD configuration to use with system-tests.
+# It is based on the minimal MAAS setup:
+# https://discourse.maas.io/t/minimal-maas-setup/5543
+# Be careful when using it: it exposes the LXD configuration port.
+
+lxc config set core.https_address [::]:8443
+
+lxc config trust add example.certificate
+
+
+lxc network create net-lab
+lxc network edit net-lab < net_lab.network
+
+lxc network create net-test
+lxc network edit net-test < net_test.network
+
+
+lxc profile create prof-maas-lab
+lxc profile edit prof-maas-lab < maas_lab.profile
+
+lxc profile create prof-maas-test
+lxc profile edit prof-maas-test < maas_test.profile
+
+
+lxc init vm1 --vm --empty -p prof-maas-test
+VM1_MAC_ADDRESS=$(lxc config get vm1 volatile.eth0.hwaddr)
+echo "VM1 mac_address is $VM1_MAC_ADDRESS"
+
+TEST_BLOCK_DEVICE="$(mktemp -d)/test_block_device.img"
+dd if=/dev/zero of="$TEST_BLOCK_DEVICE" bs=1M count=10
+losetup -fP "$TEST_BLOCK_DEVICE"
+echo "Block device $(losetup -a | grep "$TEST_BLOCK_DEVICE") available for testing"
diff --git a/lxd_configs/example.certificate b/lxd_configs/example.certificate
new file mode 100644
index 0000000..fd78b98
--- /dev/null
+++ b/lxd_configs/example.certificate
@@ -0,0 +1,27 @@
+-----BEGIN CERTIFICATE-----
+MIIElzCCAn8CEQD+6NBHtM1z951J2HwbtZwnMA0GCSqGSIb3DQEBDQUAMAAwHhcN
+MjIwMjA4MTMwNDI4WhcNMzIwMjA2MTMwNDI4WjATMREwDwYDVQQDDAh0ZXN0aG9z
+dDCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANRvcoT8dofRU2A0ICm0
+JzZPJr9C7oQmJhxd/I8Fwzt1Jzom0mhg+8oSFXPI7jzx8kpFexJZHBF8WBUII+0v
+RnZx8oeIRtvrL22tKEqeH2dwZxHIpcmoPbj2jMCFR1IFhFGBJD1uDUvkBAYFnguM
+tJml5yh+BhhPGc8Ehds/kstq032rWlDjx0Cne+5mebSniz+TW5+uc/7Vy4mIDm3P
+IGgytRXDseryvJ0kuLkNZGT38tHTva22skz0K0mRImNgCdjbacsS85DQymANJtP/
+TbAZnZ2VGLGJ2jXVBEKcXDRdrT5jusQTdRseciXqbSBuvCcsd+4R85ZZNO4ARDqI
+XNWEEinspXfJtBuvx2dtZ1S8K233kz1ds5n3vLirw6LIeNW1b8a200JeDovBzUmt
+fu9qSgw+fS2H7k/TnQDzd1AqhcKQpZqZd2eY1d4AbM9L9oEPFPesISNko7U0NpPC
+5RfejxL+IMhshKDPWCo50I4Ty80gSPbzRBJUXE4Qq3ZiiqiK8EW0VRobz6ZfDF1A
+/2oR6C6QvNd30lzdP1laLLHKBh2A3TTWL4UE+N6eH38LdO2isbh/8ux068sclxzX
+Ba34D5ICzjHd4FpBHiNpyTnb8QAaKcOFLr2QGHogRCQI2QygPyB6jaVTTVwkJbKS
+kuJBcUjBNmkEnx1VFIqnzDr5AgMBAAEwDQYJKoZIhvcNAQENBQADggIBAL7aLr6Z
+Hfqlefng/JMr+ojDJWmdtHTqwa9MviEzHTlzhvTw3JRCMFlIlIrK7wpd7kr9Jmhz
+HI3atc1NtrEsMeUsvNmopnlqKWPwcyCYX0juKvoMeTqFDLN7yWPBHEc4NNhXJQte
+pee7P02e8+C0g4UXlQ8Hry1LfwhlwHok0LijEbBkX4x0m7RlqqprMfJJsU6wL613
+WOcfTEiHkZYB/a/yeIAhdqSi3jMH3m+jbG9Tjfn62pPheSzRpH28A5mvZVP39RUO
+JdpEHT7F5pMmauygWBxXT/VCU0TriCMYrexjA1cWOlrT0+VU5Im3+u0l3vUE+2Wc
+zKNpt3l3wG3KmLHeJr9/7QyFLMHhEYr4xpMoikIhq75iLz6GqGkgho7Xy7Iz2WZP
+YW39eNAYdW8UEnSpdlmsA0gIHeWDVWZFebMk8Jw5AtIr6qTUGRMp4jFzwaQX4Oc2
+kf2jqICMKz3nQjT5b5+/w6OKuz1POrzA9hRkp85fDdeD/zwO9v+65qwHk9T+FOM4
+xC/VQcoSp5lJmGN6NGCleLhpUbIwX381rCk7pvOPQ5PLWW2gPB61njsdbJ97fWpy
+5b7QD4Pcq+eQ2Xy1+F4ToCmW8E6XorUFPUKTInTl4McHbJiuGwmXAkGqLzgJd+Q1
+opB3iqj1W36wgZQK6lRoN7X/zugt9m+MZX/6
+-----END CERTIFICATE-----
diff --git a/lxd_configs/example.key b/lxd_configs/example.key
new file mode 100644
index 0000000..541e2de
--- /dev/null
+++ b/lxd_configs/example.key
@@ -0,0 +1,52 @@
+-----BEGIN PRIVATE KEY-----
+MIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQDUb3KE/HaH0VNg
+NCAptCc2Tya/Qu6EJiYcXfyPBcM7dSc6JtJoYPvKEhVzyO488fJKRXsSWRwRfFgV
+CCPtL0Z2cfKHiEbb6y9trShKnh9ncGcRyKXJqD249ozAhUdSBYRRgSQ9bg1L5AQG
+BZ4LjLSZpecofgYYTxnPBIXbP5LLatN9q1pQ48dAp3vuZnm0p4s/k1ufrnP+1cuJ
+iA5tzyBoMrUVw7Hq8rydJLi5DWRk9/LR072ttrJM9CtJkSJjYAnY22nLEvOQ0Mpg
+DSbT/02wGZ2dlRixido11QRCnFw0Xa0+Y7rEE3UbHnIl6m0gbrwnLHfuEfOWWTTu
+AEQ6iFzVhBIp7KV3ybQbr8dnbWdUvCtt95M9XbOZ97y4q8OiyHjVtW/GttNCXg6L
+wc1JrX7vakoMPn0th+5P050A83dQKoXCkKWamXdnmNXeAGzPS/aBDxT3rCEjZKO1
+NDaTwuUX3o8S/iDIbISgz1gqOdCOE8vNIEj280QSVFxOEKt2YoqoivBFtFUaG8+m
+XwxdQP9qEegukLzXd9Jc3T9ZWiyxygYdgN001i+FBPjenh9/C3TtorG4f/LsdOvL
+HJcc1wWt+A+SAs4x3eBaQR4jack52/EAGinDhS69kBh6IEQkCNkMoD8geo2lU01c
+JCWykpLiQXFIwTZpBJ8dVRSKp8w6+QIDAQABAoICAQCxuqQHGulX7AtjW3jlKzH7
+P/Fc5vSCXyBXb1KTnfCe1/7/qeczKKC/iK2l9x9KoelhtgunaCIRhwRyZCMalwjO
+o7qTJbKS34sIqWwiMXR4qBOzTzlVI4qwKqXLlDX9K1xujCrzshUxvwyWtTBq3Udj
+nOduezFCOTuQdWo/6ko4IaHba/bd4hObxgPripScTeg0QmbPi7bEJ75nzAq2WCn2
+wyW5lcZOmNKwbj6Vo9ywlLj0T8BLi6RUuZtVqzUoCvtyEO/L1IkuSWBnR9mKV/h5
+MpUpd8n3DywfCZ7M0+BYd18v6WQiE11QWQKLMjwmfD6yT4PvC9nNmcisrlBm4Bs5
+h9ESg7/JrfiXdJPa7JJ4JH01/gvQik6BAp+NhZEOg05mi81CE7ECchyDTzjbk5HS
+xUbdiTVSWO1zy4mADmNqMs6JgF4JNiSg652sl96nZsXuZ13iQW1I8Y0jR1HCC+Xt
++HgYuSpriL3THyaQX8vWumkhB4AL3jU1/Vh64OoQQfcDJi6IpSiSM+Gs7E8uBAWi
+aILf4dxB9liQCv/GYvhFQ7eOx+bcg6orp0hmqL809KAAoe6siT6CVTWa7n7nZWDS
+pH4sNXnwwC4cqEwFjmCNXzTv0+rWR0lKc02s5aZbCk7L34O10XdeqP077KXS87+v
+DJC1yI7JkOQUfB2ztzj7AQKCAQEA/q0jX7+wbFtn3bQKdlp+qHznaXgyorVFoR7p
+0rarF9oP0C2a3hdGoQMANosg/1fpJHmVHrLMVGtBuqhrw8RCjlqe9PxlKeoKnR6I
+AWcmt8D45UuucJ1fatsB9SfBasCaLXZZrmCF6bYzm3c4Y4J6uxircsHViMGJWsaA
+grw5kPtRbPSCWofYCM0G1x9J+P/ymgZOzGwPtpmzUHXHbN3WVNTl9K9nEwayveUj
+QbzSnw9+CyWPFUubhjnap9pboAM7jbRbP71I0u/EboHq/3BeTLuxRTEp0imtvxHo
+LTFtFu8Jo9tKg4XUJQHFzuRueG39dK2bl4HtnT7aIfXCX67ZEQKCAQEA1Yoa5Nmw
+VDs8CYcScjh4GsWzPZ0GA6NMA1G7I7k71ixWS/VNQ0GxYAZ3y6ItZz4pkKZ/LFPR
+WVAak+SR7+7ZxiYFPJ8WnqKJZxvzjr7+Bh3YaFSek0P+sr2OXlz7uO44Kzo0RvMG
+6STb+8vhZpDnpMk6TvUWf7FTnYLAoi8kHDhjsJTa3+E5wujr3mnps/vQQyBQrY7s
+2gU+5znvve77v5yl1zYHT0gWimdJxMZp4cd4Hiqr3kMME12R+V/XtWz+LgeBViXA
+Z0T6Tnc2+bsaestFSNe4dTuIAwoGM2pDmXc874/DwB4piO4QOM2p/ZOLdmYUmlPQ
+ogBYkGMIw+oDaQKCAQAwxOsPPOAGAAMF26JdQ7sZfMG72r6nldr9nbPdHAnriWCZ
+1wHfIcnur2ptB3uMKkOFLps1w7uJNvjhS7tHQ+AS7pueAm9E9YKOz/fvfNdXPObs
+0e9XtWs+RS48yh4p2TQtHIrT77v1I2UCknQD6kqiZXj/gsrnY1hwP68AWhcUAmx3
+VuNXfsgJ92kl7OH3gtvsTuTsFI11xD0oXUWRPXH70MEweB5e8FtuLeDwh741o3vZ
+mpmp1E62B4ItvozpOXVAD5ehvxeg/TU6jDp6LASC4TZzL5T4n+6btkwly18+kwvf
+ivDb+tbDN3GvyuK0wStWGqC/BKyB/jU7Z5qPRCZhAoIBABUVoNgt4mo+uwvZyWl7
+x+gk0zDnOzvKuOuu+0JovM7F6/NuEiXs652mpdd2ePMzwRjmR7JRyF8AOM+Xhw1g
+0SHuiR/WOX6KX/TNXrwegaiK895BVLMHyLNPYipRFg3Jf8RM5/KFdo44tHvlQqlE
+74pm0BoRuxn6oV3xFiItc2xR6Q37dK0caP6kzv1UCd5ao9Ks8ypf7WUNlYtxPgnL
++hGOXxWj4Q7j+E3MKw2B5dyEPIkF/5hfmGalG4+69eqVC3fyB8RA0AGiXvC2drgr
+0E6FmZ66phz1NtXN/JTBDlGt41doI5TppYI+t11UeU9vbRrQs4IVeok0bYo8LRZj
+GdkCggEAf5myYEBvr8tuhE100Tau5RoKLS0SoKv7qsDpmFyUiNBmQggecB+X+ck4
+3/IToBmZRnzu+Fubqt1yMEKv+SO4F3s1W5sLdQsSAzPcum0fSEf94wONZXGkhntB
+0rsHnPTd3APbi1c+ZFQ+DV/Krx5yYGW1J8BBQLvFFuvJ8rK4Q+WCejGI8QCfrVU0
+95ctKugep8MTzm2xIecqKQOqx7UY9zWil/o4p9HEyzqzkG05+RQPLoAW9kmtcZZt
+HXvIdwLsUASyKB/LlM6dESXr9K4r/im3sn+9oza9mbwsRnBIJ9YU9llTw9j5aUlw
+Xwjk99gnvag+HswBDB7A7DCTEqhkUQ==
+-----END PRIVATE KEY-----
diff --git a/lxd_configs/maas_lab.profile b/lxd_configs/maas_lab.profile
new file mode 100644
index 0000000..bb65471
--- /dev/null
+++ b/lxd_configs/maas_lab.profile
@@ -0,0 +1,15 @@
+name: prof-maas-lab
+description: MAAS lab env
+devices:
+    eth0:
+        type: nic
+        name: eth0
+        network: net-lab
+    eth1:
+        type: nic
+        name: eth1
+        network: net-test
+    root:
+        path: /
+        pool: default
+        type: disk
diff --git a/lxd_configs/maas_test.profile b/lxd_configs/maas_test.profile
new file mode 100644
index 0000000..d7554d7
--- /dev/null
+++ b/lxd_configs/maas_test.profile
@@ -0,0 +1,15 @@
+name: prof-maas-test
+config:
+  limits.memory: "2147483648"
+  security.secureboot: "false"
+description: MAAS lab vms
+devices:
+  eth0:
+    boot.priority: "1"
+    name: eth0
+    network: net-test
+    type: nic
+  root:
+    path: /
+    pool: default
+    type: disk
diff --git a/lxd_configs/net_lab.network b/lxd_configs/net_lab.network
new file mode 100644
index 0000000..5b3c6d5
--- /dev/null
+++ b/lxd_configs/net_lab.network
@@ -0,0 +1,10 @@
+name: net-lab
+description: ""
+type: bridge
+config:
+  dns.domain: net-lab
+  ipv4.address: 10.0.1.1/24
+  ipv4.dhcp: "true"
+  ipv4.dhcp.ranges: 10.0.1.16-10.0.1.31
+  ipv4.nat: "true"
+  ipv6.address: none
diff --git a/lxd_configs/net_test.network b/lxd_configs/net_test.network
new file mode 100644
index 0000000..70b7e5d
--- /dev/null
+++ b/lxd_configs/net_test.network
@@ -0,0 +1,9 @@
+name: net-test
+type: bridge
+description: ""
+config:
+  ipv4.address: 10.0.2.1/24
+  ipv4.dhcp: "false"
+  ipv4.nat: "true"
+  ipv6.address: none
+  ipv6.nat: "true"
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 0000000..251bf95
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,17 @@
+[tool.pytest.ini_options]
+addopts = "--strict-markers --durations=10"
+markers = [
+    "skip_if_installed_from_snap", # Skips tests if MAAS is installed as a snap.
+    "skip_if_installed_from_deb_package" # Skips tests if MAAS is installed from a deb package.
+]
+log_level = "INFO"
+log_file = "systemtests.log"
+log_format = "%(asctime)s %(levelname)s %(name)s: %(message)s"
+log_date_format = "%Y-%m-%d %H:%M:%S"
+log_file_format = "%(asctime)s %(levelname)s %(name)s: %(message)s"
+log_file_date_format = "%Y-%m-%d %H:%M:%S"
+
+[tool.mypy]
+install_types = true
+non_interactive = true
+strict = true
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..7554e43
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,8 @@
+black
+cogapp
+isort
+flake8
+flake8-black
+yamllint
+
+
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..21d75cd
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,21 @@
+from setuptools import find_packages, setup
+
+install_requires = (
+    'paramiko',
+    'pytest',
+    'pytest-dependency',
+    'pytest-rerunfailures',
+    'pytest-steps',
+    'pyyaml',
+    'retry',
+    'ruamel.yaml',
+)
+
+
+setup(
+    name='maas-system-tests',
+    version='0.1',
+    description='System tests for MAAS',
+    packages=find_packages(include=['systemtests']),
+    install_requires=install_requires,
+)
diff --git a/stubs/pytest_steps/__init__.pyi b/stubs/pytest_steps/__init__.pyi
new file mode 100644
index 0000000..3357267
--- /dev/null
+++ b/stubs/pytest_steps/__init__.pyi
@@ -0,0 +1,2 @@
+from pytest_steps.steps import test_steps as test_steps
+from pytest_steps.steps_generator import one_fixture_per_step as one_fixture_per_step
diff --git a/stubs/pytest_steps/steps.pyi b/stubs/pytest_steps/steps.pyi
new file mode 100644
index 0000000..11e88be
--- /dev/null
+++ b/stubs/pytest_steps/steps.pyi
@@ -0,0 +1,7 @@
+from __future__ import annotations
+
+from typing import Any, Callable, TypeVar
+
+F = TypeVar("F", bound=Callable[..., Any])
+
+def test_steps(*steps: str, **kwargs: dict[str, str]) -> Callable[[F], F]: ...
diff --git a/stubs/pytest_steps/steps_generator.pyi b/stubs/pytest_steps/steps_generator.pyi
new file mode 100644
index 0000000..58a1a36
--- /dev/null
+++ b/stubs/pytest_steps/steps_generator.pyi
@@ -0,0 +1,8 @@
+from contextlib import contextmanager
+from typing import Any, Callable, Iterator, TypeVar
+
+F = TypeVar("F", bound=Callable[..., Any])
+
+def one_fixture_per_step(func: F) -> F: ...
+@contextmanager
+def optional_step(step: str, depends_on: Any = None) -> Iterator[Any]: ...
diff --git a/stubs/retry/__init__.pyi b/stubs/retry/__init__.pyi
new file mode 100644
index 0000000..0cf7651
--- /dev/null
+++ b/stubs/retry/__init__.pyi
@@ -0,0 +1 @@
+from .api import retry as retry
diff --git a/stubs/retry/api.pyi b/stubs/retry/api.pyi
new file mode 100644
index 0000000..14cba1a
--- /dev/null
+++ b/stubs/retry/api.pyi
@@ -0,0 +1,32 @@
+from __future__ import annotations
+
+from logging import Logger
+from typing import Any, Callable, Optional, TypeVar, Union
+
+logging_logger: Logger
+
+Number = Union[int, float]
+
+F = TypeVar("F", bound=Callable[..., Any])
+
+def retry(
+    exceptions: type = ...,
+    tries: int = ...,
+    delay: Number = ...,
+    max_delay: Number | None = ...,
+    backoff: Number = ...,
+    jitter: Number = ...,
+    logger: Optional[Logger] = ...,
+) -> Callable[[F], F]: ...
+def retry_call(
+    f: F,
+    fargs: Any | None = ...,
+    fkwargs: Any | None = ...,
+    exceptions: type = ...,
+    tries: int = ...,
+    delay: Number = ...,
+    max_delay: Number | None = ...,
+    backoff: Number = ...,
+    jitter: Number = ...,
+    logger: Optional[Logger] = ...,
+) -> F: ...
diff --git a/stubs/ruamel/__init__.pyi b/stubs/ruamel/__init__.pyi
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/stubs/ruamel/__init__.pyi
diff --git a/stubs/ruamel/yaml/__init__.pyi b/stubs/ruamel/yaml/__init__.pyi
new file mode 100644
index 0000000..4212709
--- /dev/null
+++ b/stubs/ruamel/yaml/__init__.pyi
@@ -0,0 +1,58 @@
+from pathlib import Path
+from typing import Any, Optional, Text, Union
+
+StreamType = Any
+StreamTextType = StreamType
+
+class YAML:
+    typ: Any
+    pure: Any
+    plug_ins: Any
+    Resolver: Any
+    allow_unicode: bool
+    Reader: Any
+    Representer: Any
+    Constructor: Any
+    Scanner: Any
+    Serializer: Any
+    default_flow_style: Any
+    comment_handling: Any
+    Emitter: Any
+    Parser: Any
+    Composer: Any
+    stream: Any
+    canonical: Any
+    old_indent: Any
+    width: Any
+    line_break: Any
+    map_indent: Any
+    sequence_indent: Any
+    sequence_dash_offset: int
+    compact_seq_seq: Any
+    compact_seq_map: Any
+    sort_base_mapping_type_on_output: Any
+    top_level_colon_align: Any
+    prefix_colon: Any
+    version: Any
+    preserve_quotes: Any
+    allow_duplicate_keys: bool
+    encoding: str
+    explicit_start: Any
+    explicit_end: Any
+    tags: Any
+    default_style: Any
+    top_level_block_style_scalar_no_indent_error_1_1: bool
+    scalar_after_indicator: Any
+    brace_single_entry_mapping_in_flow_sequence: bool
+    def __init__(
+        self,
+        *,
+        typ: Optional[Text] = ...,
+        pure: Any = ...,
+        output: Any = ...,
+        plug_ins: Any = ...
+    ) -> None: ...
+    def load(self, stream: Union[Path, StreamTextType]) -> Any: ...
+    def dump(
+        self, data: Union[Path, StreamType], stream: Any = ..., *, transform: Any = ...
+    ) -> Any: ...
diff --git a/systemtests/__init__.py b/systemtests/__init__.py
new file mode 100644
index 0000000..77623bb
--- /dev/null
+++ b/systemtests/__init__.py
@@ -0,0 +1,9 @@
+import pytest
+
+pytest.register_assert_rewrite(
+    "systemtests.api",
+    "systemtests.region",
+    "systemtests.state",
+    "systemtests.subprocess",
+    "systemtests.utils",
+)
diff --git a/systemtests/api.py b/systemtests/api.py
new file mode 100644
index 0000000..5e164f0
--- /dev/null
+++ b/systemtests/api.py
@@ -0,0 +1,703 @@
+from __future__ import annotations
+
+import json
+from functools import partial
+from logging import getLogger
+from subprocess import CalledProcessError
+from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, TypedDict, Union
+
+from .utils import wait_for_machine
+
+if TYPE_CHECKING:
+    from logging import Logger
+
+    from . import lxd
+
+LOG = getLogger("systemtests.api")
+
+# Certs must be accessible for MAAS installed from the snap, but
+# this location is also used when MAAS is installed via the deb package.
+MAAS_CONTAINER_CERTS_PATH = "/var/snap/maas/common/certs/"
+
+
+class CannotDeleteError(Exception):
+    pass
+
+
+class Image(TypedDict):
+    name: str
+    architecture: str
+    subarches: list[str]
+
+
+class BootImages(TypedDict):
+    images: list[Image]
+    connected: bool
+    status: str
+
+
+class Version(TypedDict):
+    capabilities: list[str]
+    version: str
+    subversion: str
+
+
+class _NamedEntity(TypedDict):
+    name: str
+    description: str
+    id: int
+    resource_uri: str
+
+
+class Zone(_NamedEntity):
+    pass
+
+
+class ResourcePool(_NamedEntity):
+    pass
+
+
+class Tag(TypedDict):
+    name: str
+    definition: str
+    comment: str
+    kernel_opts: str
+    resource_uri: str
+
+
+class PowerResponse(TypedDict):
+    state: str
+
+
+# TODO: Expand these to TypedDict matching API response structure
+
+Subnet = Dict[str, Any]
+RackController = Dict[str, Any]
+IPRange = Dict[str, Any]
+VLAN = Dict[str, Any]
+Machine = Dict[str, Any]
+SSHKey = Dict[str, Any]
+Space = Dict[str, Any]
+Fabric = Dict[str, Any]
+
+# End TODO
+
+
+class UnauthenticatedMAASAPIClient:
+
+    MAAS_CMD = ["sudo", "-u", "ubuntu", "maas"]
+
+    def __init__(self, url: str, maas_container: str, lxd: lxd.CLILXD):
+        self.url = url
+        self.maas_container = maas_container
+        self.lxd = lxd
+        self.pull_file = partial(lxd.pull_file, maas_container)
+        self.push_file = partial(lxd.push_file, maas_container)
+
+    def __repr__(self) -> str:
+        return f"<UnauthenticatedMAASAPIClient for {self.url!r}>"
+
+    @property
+    def logger(self) -> Logger:
+        return self.lxd.logger
+
+    @logger.setter
+    def logger(self, logger: Logger) -> None:
+        self.lxd.logger = logger
+
+    def execute(self, cmd: list[str], base_cmd: Optional[list[str]] = None) -> str:
+        __tracebackhide__ = True
+        if base_cmd is None:
+            base_cmd = self.MAAS_CMD
+        result = self.lxd.execute(self.maas_container, base_cmd + cmd)
+        return result.stdout
+
+    def quietly_execute(self, cmd: list[str]) -> str:
+        __tracebackhide__ = True
+        result = self.lxd.quietly_execute(self.maas_container, self.MAAS_CMD + cmd)
+        return result.stdout
+
+    def log_in(self, session: str, token: str) -> tuple[str, AuthenticatedAPIClient]:
+        cmd = ["login", session, self.url, token]
+        if self.url.startswith("https://";):
+            cmd += ["--cacerts", f"{MAAS_CONTAINER_CERTS_PATH}cacerts.pem"]
+        output = self.execute(
+            cmd, base_cmd=["sudo", "-u", "ubuntu", "timeout", "5m", "maas"]
+        )
+        return output, AuthenticatedAPIClient(self, session)
+
+
+class AuthenticatedAPIClient:
+    def __init__(self, api_client: UnauthenticatedMAASAPIClient, session: str):
+        self.api_client = api_client
+        self.session = session
+
+    def __repr__(self) -> str:
+        return f"<AuthenticatedAPIClient for {self.api_client.url!r}>"
+
+    @property
+    def lxd(self) -> lxd.CLILXD:
+        return self.api_client.lxd
+
+    @property
+    def logger(self) -> Logger:
+        return self.api_client.logger
+
+    @logger.setter
+    def logger(self, logger: Logger) -> None:
+        self.api_client.logger = logger
+
+    def _execute(self, cmd: list[str]) -> str:
+        __tracebackhide__ = True
+        return self.api_client.execute([self.session] + cmd)
+
+    def execute(
+        self,
+        cmd: list[str],
+        extra_params: Optional[dict[str, str]] = None,
+        json_output: bool = True,
+    ) -> Any:
+        __tracebackhide__ = True
+        if extra_params:
+            cmd.extend([f"{k}={v}" for (k, v) in extra_params.items()])
+        output = self._execute(cmd)
+        if json_output:
+            try:
+                output = json.loads(output)
+            except json.JSONDecodeError as err:
+                self.logger.error(f"{output=} {err}")
+                raise
+        return output
+
+    def list_subnets(self) -> list[Subnet]:
+        subnets: list[Subnet] = self.execute(["subnets", "read"])
+        return subnets
+
+    def create_subnet(self, name: str, cidr: str) -> Subnet:
+        subnet: Subnet = self.execute(
+            ["subnets", "create", "name=" + name, "cidr=" + cidr]
+        )
+        return subnet
+
+    def delete_subnet(self, name: str) -> str:
+        result: str = self.execute(["subnet", "delete", name], json_output=False)
+        return result
+
+    def list_rack_controllers(self) -> list[RackController]:
+        rack_controllers: list[RackController] = self.execute(
+            ["rack-controllers", "read"]
+        )
+        return rack_controllers
+
+    def list_ip_ranges(self) -> list[IPRange]:
+        ip_ranges: list[IPRange] = self.execute(["ipranges", "read"])
+        return ip_ranges
+
+    def delete_ip_range(self, range_id: Union[str, int]) -> str:
+        result: str = self.execute(
+            ["iprange", "delete", str(range_id)], json_output=False
+        )
+        return result
+
+    def create_ip_range(
+        self, start: str, end: str, range_type: str, comment: Optional[str] = None
+    ) -> IPRange:
+        comment_arg = [f"comment={comment}"] if comment else []
+        ip_range: IPRange = self.execute(
+            [
+                "ipranges",
+                "create",
+                f"type={range_type}",
+                f"start_ip={start}",
+                f"end_ip={end}",
+            ]
+            + comment_arg
+        )
+        return ip_range
+
+    def enable_dhcp(self, fabric: str, vlan: str, primary_rack: RackController) -> VLAN:
+        vlan_obj: VLAN = self.execute(
+            [
+                "vlan",
+                "update",
+                fabric,
+                vlan,
+                "dhcp_on=True",
+                f"primary_rack={primary_rack['system_id']}",
+            ]
+        )
+        return vlan_obj
+
+    def disable_dhcp(self, fabric: str, vlan: str) -> VLAN:
+        vlan_obj: VLAN = self.execute(["vlan", "update", fabric, vlan, "dhcp_on=False"])
+        return vlan_obj
+
+    def is_dhcp_enabled(self) -> bool:
+        for controller in self.list_rack_controllers():
+            for interface in controller["interface_set"]:
+                for link in interface["links"]:
+                    subnet = link.get("subnet")
+                    if subnet and subnet["vlan"]["dhcp_on"]:
+                        return True
+        return False
+
+    def create_boot_resource(
+        self,
+        name: str,
+        title: str,
+        architecture: str,
+        filetype: str,
+        image_file_path: str,
+    ) -> None:
+        cmd = [
+            "boot-resources",
+            "create",
+            f"name={name}",
+            f"title={title}",
+            f"architecture={architecture}",
+            f"filetype={filetype}",
+            f"content@={image_file_path}",
+        ]
+        self.execute(cmd, json_output=False)
+
+    def import_boot_resources(self) -> str:
+        result: str = self.execute(["boot-resources", "import"], json_output=False)
+        return result
+
+    def stop_importing_boot_resources(self) -> str:
+        result: str = self.execute(["boot-resources", "stop-import"], json_output=False)
+        return result
+
+    def is_importing_boot_resources(self) -> bool:
+        result: bool = self.execute(["boot-resources", "is-importing"])
+        return result
+
+    def list_machines(self, **kwargs: str) -> list[Machine]:
+        """
+        List machines. See `machines read -h` for the parameters accepted
+        as kwargs.
+        """
+        machines: list[Machine] = self.execute(
+            ["machines", "read"], extra_params=kwargs
+        )
+        return machines
+
+    def read_version_information(self) -> dict[str, Any]:
+        version: dict[str, Any] = self.execute(["version", "read"])
+        return version
+
+    #  FIXME: Only system_id is needed; we should make this parameter uniform
+    #  with the other API methods.
+    def read_machine(self, machine: Machine) -> Machine:
+        current_machine: Machine = self.execute(
+            ["machine", "read", machine["system_id"]]
+        )
+        return current_machine
+
+    def update_machine(self, machine: Machine, **kwargs: str) -> Machine:
+        current_machine: Machine = self.execute(
+            ["machine", "update", machine["system_id"]], extra_params=kwargs
+        )
+        return current_machine
+
+    def list_boot_images(self, rack_controller: RackController) -> BootImages:
+        boot_images: BootImages = self.execute(
+            ["rack-controller", "list-boot-images", rack_controller["system_id"]]
+        )
+        return boot_images
+
+    def import_boot_resources_in_rack(self, rack_controller: RackController) -> str:
+        result: str = self.execute(
+            ["rack-controller", "import-boot-images", rack_controller["system_id"]],
+            json_output=False,
+        )
+        return result
+
+    def commission_machine(self, machine: Machine) -> Machine:
+        result: Machine = self.execute(["machine", "commission", machine["system_id"]])
+        assert result["status_name"] == "Commissioning"
+        return result
+
+    def deploy_machine(self, machine: Machine, **kwargs: str) -> Machine:
+        result: Machine = self.execute(
+            ["machine", "deploy", machine["system_id"]], extra_params=kwargs
+        )
+        assert result["status_name"] == "Deploying"
+        if expected_osystem := kwargs.get("osystem"):
+            assert result["osystem"] == expected_osystem
+        return result
+
+    def create_ssh_key(self, public_key: str) -> SSHKey:
+        result: SSHKey = self.execute(["sshkeys", "create", f"key={public_key}"])
+        assert result["key"] == public_key
+        return result
+
+    def delete_ssh_key(self, ssh_key: SSHKey) -> str:
+        result: str = self.execute(
+            ["sshkey", "delete", str(ssh_key["id"])], json_output=False
+        )
+        return result
+
+    def release_machine(self, machine: Machine) -> Machine:
+        system_id: str = machine["system_id"]
+        result: Machine = self.execute(["machine", "release", system_id])
+        assert result["status_name"] in ("Releasing", "Ready")
+        return result
+
+    def delete_machine(self, machine: Machine) -> str:
+        result: str = self.execute(
+            ["machine", "delete", machine["system_id"]], json_output=False
+        )
+        return result
+
+    def update_architectures_in_boot_source_selections(
+        self, architectures: Iterable[str]
+    ) -> None:
+        arches = [f"arches={arch}" for arch in architectures]
+        boot_sources = self.execute(["boot-sources", "read"])
+        for boot_source in boot_sources:
+            for selection in self.execute(
+                ["boot-source-selections", "read", str(boot_source["id"])]
+            ):
+                self.execute(
+                    [
+                        "boot-source-selection",
+                        "update",
+                        str(boot_source["id"]),
+                        str(selection["id"]),
+                        *arches,
+                    ]
+                )
+
+    def rescue_machine(self, machine: Machine) -> None:
+        result = self.execute(["machine", "rescue-mode", machine["system_id"]])
+        assert result["status_name"] == "Entering rescue mode"
+        wait_for_machine(
+            self,
+            machine,
+            status="Rescue mode",
+            abort_status="Failed to enter rescue mode",
+            timeout=20 * 60,
+            delay=30,
+        )
+
+    def exit_rescue_machine(self, machine: Machine, next_status: str) -> None:
+        result = self.execute(["machine", "exit-rescue-mode", machine["system_id"]])
+        assert result["status_name"] == "Exiting rescue mode"
+        wait_for_machine(
+            self,
+            machine,
+            status=next_status,
+            abort_status="Failed to exit rescue mode",
+            timeout=20 * 60,
+            delay=30,
+        )
+
+    def get_or_create_zone(self, name: str, description: str) -> Zone:
+        try:
+            zone: Zone = self.execute(
+                ["zones", "create", f"name={name}", f"description={description}"]
+            )
+        except CalledProcessError as err:
+            if "Name already exists" in err.stdout:
+                zone = next(zone for zone in self.list_zones() if zone["name"] == name)
+            else:
+                raise
+
+        return zone
+
+    def list_zones(self) -> list[Zone]:
+        zones: list[Zone] = self.execute(["zones", "read"])
+        return zones
+
+    def read_zone(self, zone_name: str) -> Zone:
+        zone: Zone = self.execute(["zone", "read", zone_name])
+        return zone
+
+    def update_zone(
+        self,
+        zone_name: str,
+        new_name: Optional[str] = None,
+        new_description: Optional[str] = None,
+    ) -> Zone:
+        cmd = ["zone", "update", zone_name]
+        if new_name is not None:
+            cmd.append(f"name={new_name}")
+        if new_description is not None:
+            cmd.append(f"description={new_description}")
+
+        zone: Zone = self.execute(cmd)
+        return zone
+
+    def delete_zone(self, zone_name: str) -> str:
+        try:
+            result: str = self.execute(["zone", "delete", zone_name], json_output=False)
+        except CalledProcessError as err:
+            if "cannot be deleted" in err.stdout:
+                raise CannotDeleteError(err.stdout)
+            else:
+                raise
+        else:
+            return result
+
+    def get_or_create_pool(self, name: str, description: str) -> ResourcePool:
+        try:
+            pool: ResourcePool = self.execute(
+                [
+                    "resource-pools",
+                    "create",
+                    f"name={name}",
+                    f"description={description}",
+                ]
+            )
+        except CalledProcessError as err:
+            if "Name already exists" in err.stdout:
+                pool = next(pool for pool in self.list_pools() if pool["name"] == name)
+            else:
+                raise
+        return pool
+
+    def list_pools(self) -> list[ResourcePool]:
+        resource_pools: list[ResourcePool] = self.execute(["resource-pools", "read"])
+        return resource_pools
+
+    def read_pool(self, pool: ResourcePool) -> ResourcePool:
+        resource_pool: ResourcePool = self.execute(
+            ["resource-pool", "read", str(pool["id"])]
+        )
+        return resource_pool
+
+    def update_pool(
+        self,
+        pool: ResourcePool,
+        new_name: Optional[str] = None,
+        new_description: Optional[str] = None,
+    ) -> ResourcePool:
+        cmd = ["resource-pool", "update", str(pool["id"])]
+        if new_name is not None:
+            cmd.append(f"name={new_name}")
+        if new_description is not None:
+            cmd.append(f"description={new_description}")
+
+        pool_obj: ResourcePool = self.execute(cmd)
+        return pool_obj
+
+    def delete_pool(self, pool: ResourcePool) -> str:
+        try:
+            result: str = self.execute(
+                ["resource-pool", "delete", str(pool["id"])], json_output=False
+            )
+            return result
+        except CalledProcessError as err:
+            if "cannot be deleted" in err.stdout:
+                raise CannotDeleteError(err.stdout)
+            else:
+                raise
+
+    def get_or_create_space(self, name: str, description: str) -> Space:
+        try:
+            space: Space = self.execute(
+                ["spaces", "create", f"name={name}", f"description={description}"]
+            )
+        except CalledProcessError as err:
+            if "Name already exists" in err.stdout:
+                space = next(
+                    space for space in self.list_spaces() if space["name"] == name
+                )
+            else:
+                raise
+        return space
+
+    def list_spaces(self) -> list[Space]:
+        spaces: list[Space] = self.execute(["spaces", "read"])
+        return spaces
+
+    def read_space(self, space: Space) -> Space:
+        space_obj: Space = self.execute(["space", "read", str(space["id"])])
+        return space_obj
+
+    def update_space(self, space: Space, new_name: Optional[str] = None) -> Space:
+        cmd = ["space", "update", str(space["id"])]
+        if new_name is not None:
+            cmd.append(f"name={new_name}")
+
+        space_obj: Space = self.execute(cmd)
+        return space_obj
+
+    def delete_space(self, space: Space) -> str:
+        try:
+            result: str = self.execute(
+                ["space", "delete", str(space["id"])], json_output=False
+            )
+        except CalledProcessError as err:
+            if "cannot be deleted" in err.stdout:
+                raise CannotDeleteError(err.stdout)
+            else:
+                raise
+        else:
+            return result
+
+    def get_or_create_fabric(self, name: str, description: str) -> Fabric:
+        try:
+            fabric: Fabric = self.execute(
+                ["fabrics", "create", f"name={name}", f"description={description}"]
+            )
+        except CalledProcessError as err:
+            if "Name already exists" in err.stdout:
+                fabric = next(
+                    fabric for fabric in self.list_fabrics() if fabric["name"] == name
+                )
+            else:
+                raise
+        return fabric
+
+    def list_fabrics(self) -> list[Fabric]:
+        fabrics: list[Fabric] = self.execute(["fabrics", "read"])
+        return fabrics
+
+    def read_fabric(self, fabric: Fabric) -> Fabric:
+        fabric_obj: Fabric = self.execute(["fabric", "read", str(fabric["id"])])
+        return fabric_obj
+
+    def update_fabric(self, fabric: Fabric, new_name: Optional[str] = None) -> Fabric:
+        cmd = ["fabric", "update", str(fabric["id"])]
+        if new_name is not None:
+            cmd.append(f"name={new_name}")
+
+        fabric_obj: Fabric = self.execute(cmd)
+        return fabric_obj
+
+    def delete_fabric(self, fabric: Fabric) -> str:
+        try:
+            result: str = self.execute(
+                ["fabric", "delete", str(fabric["id"])], json_output=False
+            )
+        except CalledProcessError as err:
+            if "cannot be deleted" in err.stdout:
+                raise CannotDeleteError(err.stdout)
+            else:
+                raise
+        else:
+            return result
+
+    def create_vlan(
+        self, fabric_id: int, name: str, vid: int, description: str
+    ) -> VLAN:
+        vlan: VLAN = self.execute(
+            [
+                "vlans",
+                "create",
+                str(fabric_id),
+                f"name={name}",
+                f"vid={vid}",
+                f"description={description}",
+            ]
+        )
+        return vlan
+
+    def list_vlans(self, fabric_id: int) -> list[VLAN]:
+        vlans: list[VLAN] = self.execute(["vlans", "read", str(fabric_id)])
+        return vlans
+
+    def read_vlan(self, vlan: VLAN) -> VLAN:
+        vlan_obj: VLAN = self.execute(
+            ["vlan", "read", str(vlan["fabric_id"]), str(vlan["vid"])]
+        )
+        return vlan_obj
+
+    def update_vlan(self, vlan: VLAN, new_name: Optional[str] = None) -> VLAN:
+        cmd = ["vlan", "update", str(vlan["fabric_id"]), str(vlan["vid"])]
+        if new_name is not None:
+            cmd.append(f"name={new_name}")
+
+        vlan_obj: VLAN = self.execute(cmd)
+        return vlan_obj
+
+    def delete_vlan(self, vlan: VLAN) -> str:
+        try:
+            result: str = self.execute(
+                ["vlan", "delete", str(vlan["fabric_id"]), str(vlan["vid"])],
+                json_output=False,
+            )
+        except CalledProcessError as err:
+            if "cannot be deleted" in err.stdout:
+                raise CannotDeleteError(err.stdout)
+            else:
+                raise
+        else:
+            return result
+
+    def get_or_create_tag(self, name: str, description: str, definition: str) -> Tag:
+        try:
+            tag: Tag = self.execute(
+                [
+                    "tags",
+                    "create",
+                    f"name={name}",
+                    f"description={description}",
+                    f"definition={definition}",
+                ]
+            )
+        except CalledProcessError as err:
+            if "Name already exists" in err.stdout:
+                tag = next(tag for tag in self.list_tags() if tag["name"] == name)
+            else:
+                raise
+        return tag
+
+    def list_tags(self) -> list[Tag]:
+        tags: list[Tag] = self.execute(["tags", "read"])
+        return tags
+
+    def list_by_tag(self, name: str, resource: str) -> list[Machine]:
+        resources_list: list[Machine] = self.execute(["tag", resource, name])
+        return resources_list
+
+    def delete_tag(self, name: str) -> str:
+        result: str = self.execute(["tag", "delete", name], json_output=False)
+        return result
+
+    def get_config(self, key: str) -> str:
+        result: str = self.execute(
+            ["maas", "get-config", f"name={key}"], json_output=False
+        )
+        return result
+
+    def set_config(self, key: str, val: str) -> None:
+        self.execute(
+            ["maas", "set-config", f"name={key}", f"value={val}"], json_output=False
+        )
+
+    def read_machine_devices(self, machine: Machine) -> list[dict[str, Any]]:
+        result: list[dict[str, Any]] = self.execute(
+            ["node-devices", "read", machine["system_id"]]
+        )
+        return result
+
+    def read_power_parameters(self, machine: Machine) -> dict[str, str]:
+        result: dict[str, str] = self.execute(
+            ["machine", "power-parameters", machine["system_id"]]
+        )
+        return result
+
+    def query_power_state(self, machine: Machine) -> str:
+        result: PowerResponse = self.execute(
+            ["machine", "query-power-state", machine["system_id"]]
+        )
+        return result["state"]
+
+
+class QuietAuthenticatedAPIClient(AuthenticatedAPIClient):
+    """An Authenticated API Client that is quiet."""
+
+    @classmethod
+    def from_api_client(
+        cls, client: AuthenticatedAPIClient
+    ) -> QuietAuthenticatedAPIClient:
+        return cls(client.api_client, client.session)
+
+    def __repr__(self) -> str:
+        return f"<QuietAuthenticatedAPIClient for {self.api_client.url!r}>"
+
+    def _execute(self, cmd: list[str]) -> str:
+        __tracebackhide__ = True
+        return self.api_client.quietly_execute([self.session] + cmd)
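The `get_or_create_zone`/`pool`/`space`/`fabric`/`tag` methods above all share one idiom: attempt the create call and, if MAAS reports "Name already exists", fall back to looking the object up. A minimal standalone sketch of that idiom (not part of this MP; `FakeClient` and its data are hypothetical stand-ins for the real API client):

```python
from subprocess import CalledProcessError


class FakeClient:
    """Hypothetical stand-in that fails creation on duplicate names."""

    def __init__(self) -> None:
        self.zones = [{"name": "default", "description": ""}]

    def create_zone(self, name: str, description: str) -> dict[str, str]:
        if any(z["name"] == name for z in self.zones):
            # CalledProcessError.stdout is an alias for its `output` argument.
            raise CalledProcessError(2, "maas", output="Name already exists")
        zone = {"name": name, "description": description}
        self.zones.append(zone)
        return zone

    def get_or_create_zone(self, name: str, description: str) -> dict[str, str]:
        # Same shape as the methods in the MP: try to create, and on a
        # duplicate-name error fall back to reading the existing object.
        try:
            return self.create_zone(name, description)
        except CalledProcessError as err:
            if "Name already exists" in err.stdout:
                return next(z for z in self.zones if z["name"] == name)
            raise
```

Reraising any other `CalledProcessError` keeps genuine failures visible instead of silently swallowing them.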
diff --git a/systemtests/collect_sos_report/__init__.py b/systemtests/collect_sos_report/__init__.py
new file mode 100644
index 0000000..323d14b
--- /dev/null
+++ b/systemtests/collect_sos_report/__init__.py
@@ -0,0 +1 @@
+"""Collect an SOS report from the test run."""
diff --git a/systemtests/collect_sos_report/test_collect.py b/systemtests/collect_sos_report/test_collect.py
new file mode 100644
index 0000000..ff625da
--- /dev/null
+++ b/systemtests/collect_sos_report/test_collect.py
@@ -0,0 +1,12 @@
+import contextlib
+import subprocess
+from logging import getLogger
+
+from ..lxd import get_lxd
+
+
+def test_collect_sos_report(maas_container: str) -> None:
+    lxd = get_lxd(getLogger("collect_sos_report"))
+    assert lxd.container_exists(maas_container)
+    with contextlib.suppress(subprocess.CalledProcessError):
+        lxd.collect_sos_report(maas_container, ".")
diff --git a/systemtests/config.py b/systemtests/config.py
new file mode 100644
index 0000000..532e69e
--- /dev/null
+++ b/systemtests/config.py
@@ -0,0 +1,4 @@
+# MAAS user account
+ADMIN_USER = "admin"
+ADMIN_PASSWORD = "test"
+ADMIN_EMAIL = "maasadmin@xxxxxxxxxxx"
diff --git a/systemtests/conftest.py b/systemtests/conftest.py
new file mode 100644
index 0000000..6f1048f
--- /dev/null
+++ b/systemtests/conftest.py
@@ -0,0 +1,245 @@
+from __future__ import annotations
+
+from logging import getLogger
+from typing import TYPE_CHECKING, Any, Iterator
+
+import pytest
+from ruamel.yaml import YAML
+
+if TYPE_CHECKING:
+    from .api import AuthenticatedAPIClient
+    from typing import Set
+
+from .device_config import HardwareSyncMachine
+from .fixtures import (
+    build_container,
+    logstream,
+    maas_api_client,
+    maas_client_container,
+    maas_container,
+    maas_credentials,
+    maas_deb_repo,
+    maas_region,
+    pool,
+    skip_if_installed_from_deb_package,
+    skip_if_installed_from_snap,
+    ssh_key,
+    tag_all,
+    testlog,
+    unauthenticated_maas_api_client,
+    vault,
+    zone,
+)
+from .lxd import get_lxd
+from .machine_config import MachineConfig
+from .state import (
+    authenticated_admin,
+    configured_maas,
+    import_images_and_wait_until_synced,
+    maas_without_machine,
+    ready_maas,
+    ready_remote_maas,
+)
+from .utils import wait_for_machine
+
+if TYPE_CHECKING:
+    from _pytest.config.argparsing import Parser
+    from _pytest.python import Metafunc
+    from pluggy.callers import _Result  # type: ignore
+
+__all__ = [
+    "authenticated_admin",
+    "build_container",
+    "configured_maas",
+    "import_images_and_wait_until_synced",
+    "logstream",
+    "maas_api_client",
+    "maas_deb_repo",
+    "maas_client_container",
+    "maas_container",
+    "maas_credentials",
+    "maas_region",
+    "maas_without_machine",
+    "pool",
+    "ready_maas",
+    "ready_remote_maas",
+    "ssh_key",
+    "skip_if_installed_from_deb_package",
+    "skip_if_installed_from_snap",
+    "tag_all",
+    "testlog",
+    "unauthenticated_maas_api_client",
+    "vault",
+    "zone",
+]
+
+STATUS_READY = 4
+
+LOG = getLogger("systemtests.conftest")
+
+yaml = YAML()
+
+
+def pytest_addoption(parser: Parser) -> None:
+    def config_type(filename: str) -> dict[str, Any]:
+        with open(filename, "r") as fh:
+            config: dict[str, Any] = yaml.load(fh)
+        return config
+
+    parser.addoption(
+        "--ss-config", default=config_type("config.yaml"), type=config_type
+    )
+
+
+@pytest.fixture(scope="session")
+def config(request: pytest.FixtureRequest) -> dict[str, Any]:
+    config: dict[str, Any] = request.config.getoption("--ss-config")
+    return config
+
+
+def pytest_report_header(config: pytest.Config) -> list[str]:
+    headers = []
+    systemtests_config = config.getoption("--ss-config")
+    package_type = "snap" if "snap" in systemtests_config else "deb"
+    headers.append(f"packagetype: {package_type}")
+    machines = ", ".join(
+        machine_config.name
+        for machine_config in generate_machines_config(systemtests_config)
+    )
+    headers.append(f"machines: {machines}")
+    tls = "tls" in systemtests_config
+    if tls:
+        headers.append("tlsenabled: true")
+    return headers
+
+
+@pytest.hookimpl(tryfirst=True, hookwrapper=True)  # type: ignore
+def pytest_runtest_makereport(item: Any, call: Any) -> Iterator[Any]:
+    # execute all other hooks to obtain the report object
+    del call
+    outcome: _Result
+    outcome = yield
+    rep = outcome.get_result()
+
+    # we only look at actual failing test calls, not setup/teardown
+    if rep.when == "call" and rep.failed:
+        testlog = item.funcargs["logstream"]
+        rep.sections.append(("Test logging", testlog.getvalue()))
+
+
+@pytest.fixture(scope="module")
+def machine_config(
+    authenticated_admin: AuthenticatedAPIClient, request: Any
+) -> Iterator[MachineConfig]:
+    machine_config = request.param
+    yield machine_config
+
+
+# TODO: Figure out how to cache the results of this
+def generate_machines_config(config: dict[str, Any]) -> list[MachineConfig]:
+    machines_config: list[MachineConfig] = []
+
+    machines = config.get("machines", {})
+    hardware = machines.get("hardware", {})
+    for name, config_details in hardware.items():
+        machines_config.append(MachineConfig.from_config(name, config_details))
+
+    vms = machines.get("vms", {})
+    if vms:
+        if vms["power_type"] != "lxd":
+            raise ValueError(
+                f'Unsupported power_type {vms["power_type"]!r}: LXD is the only VM manager allowed.'
+            )
+        vms_config = [
+            MachineConfig.from_config(
+                name,
+                dict(
+                    power_type=vms["power_type"],
+                    power_parameters=vms["power_parameters"],
+                    **config_details,
+                ),
+                extras=[("instance_name", name)],
+            )
+            for name, config_details in vms["instances"].items()
+        ]
+        machines_config.extend(vms_config)
+    return machines_config
+
+
+@pytest.fixture(scope="module")
+def hardware_sync_machine(
+    authenticated_admin: AuthenticatedAPIClient, request: Any
+) -> Iterator[HardwareSyncMachine]:
+    machine_config = request.param
+    machine = authenticated_admin.list_machines(mac_address=machine_config.mac_address)[
+        0
+    ]
+    assert machine["status"] == STATUS_READY
+    yield HardwareSyncMachine(
+        name=machine_config.name,
+        machine=machine,
+        devices_config=machine_config.devices_config,
+    )
+    if machine_config.power_type == "lxd":
+        lxd = get_lxd(LOG)
+        current_devices = lxd.list_instance_devices(machine_config.name)
+        for additional_device in machine_config.devices_config:
+            if additional_device["device_name"] in current_devices:
+                lxd.remove_instance_device(
+                    machine_config.name, additional_device["device_name"]
+                )
+    authenticated_admin.release_machine(machine)
+    wait_for_machine(
+        authenticated_admin,
+        machine,
+        status="Ready",
+        abort_status="Releasing failed",
+        machine_id=machine_config.name,
+        timeout=40 * 60,
+        delay=10,
+    )
+
+
+@pytest.fixture(scope="module")
+def instance_config(
+    authenticated_admin: AuthenticatedAPIClient, request: Any
+) -> Iterator[MachineConfig]:
+    instance_config = request.param
+    yield instance_config
+
+
+@pytest.fixture(scope="session")
+def needed_architectures(config: dict[str, Any]) -> Set[str]:
+    machines_config = generate_machines_config(config)
+    return {machine_config.architecture for machine_config in machines_config}
+
+
+def pytest_generate_tests(metafunc: Metafunc) -> None:
+    cfg = metafunc.config.getoption("--ss-config")
+
+    if "machine_config" in metafunc.fixturenames:
+        machines_config = [
+            machine_config
+            for machine_config in generate_machines_config(cfg)
+            if not machine_config.devices_config
+        ]
+        metafunc.parametrize("machine_config", machines_config, ids=str, indirect=True)
+
+    if "hardware_sync_machine" in metafunc.fixturenames:
+        machines_config = [
+            machine_config
+            for machine_config in generate_machines_config(cfg)
+            if machine_config.devices_config
+        ]
+        metafunc.parametrize(
+            "hardware_sync_machine", machines_config, ids=str, indirect=True
+        )
+
+    # machine_config, but specifically instances for hardware sync tests
+    if "instance_config" in metafunc.fixturenames:
+        instance_config = [
+            instance
+            for instance in generate_machines_config(cfg)
+            if len(instance.devices_config) > 0
+        ]
+        metafunc.parametrize("instance_config", instance_config, ids=str, indirect=True)
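For reference, `generate_machines_config` above expects a config shaped roughly like the following. The top-level keys (`machines`, `hardware`, `vms`, `power_type`, `power_parameters`, `instances`) come straight from the code; the per-machine fields and all values are hypothetical, since the accepted fields are ultimately defined by `MachineConfig.from_config`:

```yaml
machines:
  hardware:
    my-server:                  # hypothetical machine name
      mac_address: 00:16:3e:00:00:01
      architecture: amd64/generic
      power_type: ipmi          # example values only
      power_parameters:
        power_address: 10.0.0.10
  vms:
    power_type: lxd             # LXD is the only VM manager allowed
    power_parameters:
      power_address: https://10.0.0.1:8443
    instances:
      my-vm-1:                  # each instance also gets instance_name=<name>
        mac_address: 00:16:3e:00:00:02
        architecture: amd64/generic
```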
diff --git a/systemtests/device_config.py b/systemtests/device_config.py
new file mode 100644
index 0000000..550c7ad
--- /dev/null
+++ b/systemtests/device_config.py
@@ -0,0 +1,26 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+from typing import Dict
+
+from .api import Machine
+
+OPTIONS_EXCLUDE = ["device_name", "type"]
+
+
+def fmt_lxd_options(cfg: DeviceConfig) -> list[str]:
+    return [
+        f"{k}={v}"
+        for k, v in cfg.items()
+        if v is not None and v != "" and k not in OPTIONS_EXCLUDE
+    ]
+
+
+DeviceConfig = Dict[str, str]
+
+
+@dataclass
+class HardwareSyncMachine:
+    name: str
+    machine: Machine
+    devices_config: tuple[DeviceConfig, ...]
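A quick illustration of `fmt_lxd_options` above (not part of the MP): `device_name` and `type` are metadata for the test harness, so they are excluded from the `key=value` pairs handed to LXD, as are `None` and empty values.

```python
OPTIONS_EXCLUDE = ["device_name", "type"]


def fmt_lxd_options(cfg: dict[str, str]) -> list[str]:
    # Keep only real LXD device options, dropping harness metadata
    # and unset values.
    return [
        f"{k}={v}"
        for k, v in cfg.items()
        if v is not None and v != "" and k not in OPTIONS_EXCLUDE
    ]


fmt_lxd_options({"device_name": "eth1", "type": "nic", "network": "maas-net"})
# → ['network=maas-net']
```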
diff --git a/systemtests/env_builder/__init__.py b/systemtests/env_builder/__init__.py
new file mode 100644
index 0000000..dfd11c8
--- /dev/null
+++ b/systemtests/env_builder/__init__.py
@@ -0,0 +1,4 @@
+"""
+Prepares a container with a running MAAS ready to run the tests,
+and writes a credentials.yaml to allow a MAAS client to use it.
+"""
diff --git a/systemtests/env_builder/test_admin.py b/systemtests/env_builder/test_admin.py
new file mode 100644
index 0000000..66f1e76
--- /dev/null
+++ b/systemtests/env_builder/test_admin.py
@@ -0,0 +1,76 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Any
+
+import pytest
+
+if TYPE_CHECKING:
+    from systemtests.api import AuthenticatedAPIClient
+    from systemtests.region import MAASRegion
+
+
+@pytest.mark.usefixtures("configured_maas")
+class TestAdmin:
+    def test_set_main_archive(self, maas_api_client: AuthenticatedAPIClient) -> None:
+        old_archive = maas_api_client.execute(
+            ["maas", "get-config", "name=main_archive"]
+        )
+        new_archive = "http://ubuntu.example.com"
+        result = maas_api_client.execute(
+            ["maas", "set-config", "name=main_archive", "value=" + new_archive],
+            json_output=False,
+        )
+        assert "OK" in result
+        result = maas_api_client.execute(["maas", "get-config", "name=main_archive"])
+        assert result == new_archive
+
+        maas_api_client.execute(
+            ["maas", "set-config", "name=main_archive", "value=" + old_archive],
+            json_output=False,
+        )
+
+
+@pytest.mark.usefixtures("configured_maas")
+class TestBaseConfiguration:
+    def test_set_http_proxy_and_use_peer_proxy(
+        self,
+        maas_api_client: AuthenticatedAPIClient,
+        maas_region: MAASRegion,
+        config: dict[str, Any],
+    ) -> None:
+        proxy_settings = maas_region.get_proxy_settings(config)
+        for key, value in proxy_settings.items():
+            result = maas_api_client.execute(["maas", "get-config", "name=" + key])
+            assert result == value
+
+    # XXX This replaces test_create_subnet, test_list_subnets and
+    # test_delete_subnets
+    def test_subnets(self, maas_api_client: AuthenticatedAPIClient) -> None:
+        maas_api_client.create_subnet("test-subnet", "1.2.3.0/24")
+        subnets = maas_api_client.list_subnets()
+        assert "test-subnet" in [subnet["name"] for subnet in subnets]
+        [created_subnet] = [
+            subnet for subnet in subnets if subnet["name"] == "test-subnet"
+        ]
+        assert created_subnet["cidr"] == "1.2.3.0/24"
+        # XXX Insert tests for
+        # test_use_internal_dns_for_proxy_on_managed_dhcp
+        maas_api_client.delete_subnet("test-subnet")
+        subnets = maas_api_client.list_subnets()
+        assert "test-subnet" not in [subnet["name"] for subnet in subnets]
+
+    def test_set_up_dhcp_vlan(
+        self,
+        configured_maas: MAASRegion,
+        maas_api_client: AuthenticatedAPIClient,
+        config: dict[str, Any],
+    ) -> None:
+        configured_maas.disable_dhcp(config, maas_api_client)
+        assert not configured_maas.is_dhcp_enabled()
+
+        configured_maas.enable_dhcp(config, maas_api_client)
+        assert configured_maas.is_dhcp_enabled()
+
+
+def test_wait_until_images_are_ready(import_images_and_wait_until_synced: None) -> None:
+    assert True
diff --git a/systemtests/env_builder/test_basic.py b/systemtests/env_builder/test_basic.py
new file mode 100644
index 0000000..f346095
--- /dev/null
+++ b/systemtests/env_builder/test_basic.py
@@ -0,0 +1,180 @@
+from __future__ import annotations
+
+from contextlib import closing
+from subprocess import CalledProcessError
+from typing import TYPE_CHECKING
+from urllib.error import HTTPError, URLError
+from urllib.request import urlopen
+
+import pytest
+from retry import retry
+
+from systemtests.lxd import get_lxd
+from systemtests.utils import (
+    randomstring,
+    retries,
+    wait_for_machine,
+    wait_for_new_machine,
+)
+
+if TYPE_CHECKING:
+    from logging import Logger
+
+    from systemtests.api import AuthenticatedAPIClient, UnauthenticatedMAASAPIClient
+    from systemtests.machine_config import MachineConfig
+    from systemtests.region import MAASRegion
+
+
+class TestSetup:
+    @pytest.mark.skip_if_installed_from_snap("Prometheus is installed in the snap")
+    def test_setup_prometheus(
+        self, maas_region: MAASRegion, config: dict[str, str]
+    ) -> None:
+        # Any of these could raise CalledProcessError if they fail
+        maas_region.execute(["apt", "install", "python3-prometheus-client", "-y"])
+        # restart MAAS so that the library is loaded
+        if "snap" not in config:
+            maas_region.execute(["systemctl", "restart", "maas-rackd"])
+        maas_region.restart()
+
+    def test_create_admin(
+        self,
+        maas_region: MAASRegion,
+        unauthenticated_maas_api_client: UnauthenticatedMAASAPIClient,
+    ) -> None:
+        """Run maas createsuperuser."""
+        name = randomstring()
+        password = randomstring()
+        email = f"{randomstring()}@example.com"
+
+        assert not maas_region.user_exists(name)
+        maas_region.create_user(name, password, email)
+        assert maas_region.user_exists(name)
+
+        token = maas_region.get_api_token(name)
+        output, _ = unauthenticated_maas_api_client.log_in(name, token)
+
+        assert "You are now logged in to the MAAS server" in output
+
+    def test_update_maas_url(self, maas_region: MAASRegion) -> None:
+        maas_region.set_config("maas-url", maas_region.http_url)
+        # XXX: Make turning on DEBUG optional
+        maas_region.enable_debug()
+
+        maas_region.restart()
+
+    @pytest.mark.skip_if_installed_from_snap("snap is configured with mode: region+rack")
+    def test_update_maas_url_rack(self, maas_region: MAASRegion) -> None:
+        maas_region.execute(["maas-rack", "config", "--region-url", maas_region.url])
+        # XXX: Make turning on DEBUG optional
+        maas_region.execute(["maas-rack", "config", "--debug", "True"])
+
+        # Restart rackd.
+        maas_region.execute(["systemctl", "restart", "maas-rackd"])
+
+    def test_check_rpc_info(self, maas_region: MAASRegion, testlog: Logger) -> None:
+        """
+        Ensure that the region is publishing RPC info to the clusters. This
+        is a reasonable indication that the region is running correctly.
+        """
+        url = "%s/rpc/" % maas_region.http_url.rstrip("/")
+        testlog.info("RPC URL: " + url)
+        details = ""
+        for retry_info in retries(timeout=300, delay=10):
+            try:
+                with closing(urlopen(url)) as response:
+                    details = response.read().decode("utf-8")
+            except HTTPError as error:
+                # The only errors allowed here are 502 and 503, which
+                # occur while the HTTP service is still starting the
+                # WSGI threads. Any other error is a bug and will be
+                # raised.
+                if error.code in {502, 503}:
+                    testlog.info(f"RPC call returned {error.code}")
+                else:
+                    testlog.info("RPC HTTPError: " + str(error))
+                    raise
+            except URLError as error:
+                # urllib wraps connection errors in URLError. If we see
+                # one, we assume MAAS hasn't started up yet and continue
+                # to wait.
+                testlog.info("RPC URLError: " + str(error))
+            else:
+                testlog.info("RPC response: " + details)
+                break
+        else:
+            raise AssertionError(
+                f"RPC info not published after {retry_info.attempt} attempts"
+                f" over {retry_info.elapsed} seconds."
+            )
+        assert response.status == 200
+
+    def test_ensure_ready_vm_for_hardware_sync(
+        self,
+        instance_config: MachineConfig,
+        maas_api_client: AuthenticatedAPIClient,
+        testlog: Logger,
+    ) -> None:
+        """Ensure that we have a Ready VM at the end."""
+        lxd = get_lxd(logger=testlog)
+        instances = lxd.list_instances()
+        vm_name = instance_config.name
+        # Force delete the VM so we know we're starting clean
+        if vm_name in instances:
+            lxd.delete(vm_name)
+
+        # Need to create a network device with a hwaddr
+        config: dict[str, str] = {"security.secureboot": "false"}
+        if instance_config.lxd_profile:
+            config["profile"] = instance_config.lxd_profile
+        if instance_config.mac_address:
+            config["volatile.eth0.hwaddr"] = instance_config.mac_address
+
+        lxd.create_vm(vm_name, config)
+
+        mac_address = instance_config.mac_address
+
+        # Find the VM in MAAS by MAC
+        maybe_machine = maas_api_client.list_machines(mac_address=mac_address)
+        if maybe_machine:
+            # Yay, it exists
+            machine = maybe_machine[0]
+        else:
+            # Machine not registered, let's boot it up
+            @retry(tries=5, delay=5, backoff=1.2, logger=testlog)
+            def _boot_vm(vm_name: str) -> None:
+                vm_details = lxd.list_instances()[vm_name]
+                if vm_details["status"] == "Running":
+                    lxd.restart(vm_name)
+                elif vm_details["status"] == "Stopped":
+                    try:
+                        lxd.start(vm_name)
+                    except CalledProcessError:
+                        lxd._run(["lxc", "info", "--show-log", vm_name])
+                        raise
+                else:
+                    assert (
+                        False
+                    ), f"Don't know how to handle lxd_vm status: {vm_details['status']}"
+
+            _boot_vm(vm_name)
+            machine = wait_for_new_machine(maas_api_client, mac_address, vm_name)
+
+        # Make sure we have power parameters set
+        if not machine["power_type"]:
+            machine = maas_api_client.update_machine(
+                machine, **instance_config.machine_update_power_parameters
+            )
+
+        if machine["status_name"] == "New":
+            maas_api_client.commission_machine(machine)
+
+            machine = wait_for_machine(
+                maas_api_client,
+                machine,
+                status="Ready",
+                abort_status="Failed commissioning",
+                machine_id=vm_name,
+                timeout=20 * 60,
+            )
+        assert machine["status_name"] == "Ready"
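The readiness loop in `test_check_rpc_info` leans on a `retries` helper plus Python's `for`/`else` (the `else` branch runs only when the loop exhausts without a `break`). A minimal self-contained sketch of that pattern, where the `retries` signature below is a simplified assumption rather than the real `systemtests.utils` helper:

```python
import time
from typing import Callable, Iterator


def retries(timeout: float, delay: float) -> Iterator[int]:
    # Simplified assumption of the helper: yield attempt numbers
    # until the timeout elapses, sleeping `delay` between attempts.
    deadline = time.monotonic() + timeout
    attempt = 0
    while time.monotonic() < deadline:
        attempt += 1
        yield attempt
        time.sleep(delay)


def wait_until(check: Callable[[], bool], timeout: float, delay: float) -> int:
    """Poll `check` until it returns True; raise once retries run out."""
    for attempt in retries(timeout, delay):
        if check():
            break  # skips the for/else below
    else:
        raise AssertionError(f"condition not met after {attempt} attempts")
    return attempt
```

The real test additionally distinguishes retriable responses (502/503, `URLError`) from genuine failures, which this sketch omits.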
diff --git a/systemtests/fixtures.py b/systemtests/fixtures.py
new file mode 100644
index 0000000..63e9baf
--- /dev/null
+++ b/systemtests/fixtures.py
@@ -0,0 +1,737 @@
+from __future__ import annotations
+
+import io
+import os
+from logging import StreamHandler, getLogger
+from textwrap import dedent
+from typing import TYPE_CHECKING, Any, Iterator, Optional, TextIO
+
+import paramiko
+import pytest
+import yaml
+from pytest_steps import one_fixture_per_step
+
+from .api import MAAS_CONTAINER_CERTS_PATH, UnauthenticatedMAASAPIClient
+from .config import ADMIN_EMAIL, ADMIN_PASSWORD, ADMIN_USER
+from .lxd import CLILXD, get_lxd
+from .region import MAASRegion
+from .vault import Vault
+
+if TYPE_CHECKING:
+    from logging import Logger
+
+    from .api import AuthenticatedAPIClient
+LOG_NAME = "systemtests.fixtures"
+
+LXD_PROFILE = os.environ.get("MAAS_SYSTEMTESTS_LXD_PROFILE", "prof-maas-lab")
+
+
+def _add_maas_ppa(lxd: CLILXD, container: str, config: dict[str, Any]) -> None:
+    """Add MAAS PPA to the given container."""
+    maas_ppas = config.get("deb", {}).get("ppa", ["ppa:maas-committers/latest-deps"])
+    for ppa in maas_ppas:
+        lxd.execute(
+            container,
+            ["add-apt-repository", "-y", ppa],
+            environment={"DEBIAN_FRONTEND": "noninteractive"},
+        )
+
+
+@pytest.fixture(scope="session")
+def build_container(config: dict[str, Any]) -> Iterator[str]:
+    """Set up a new LXD container for building MAAS packages."""
+    log = getLogger(f"{LOG_NAME}.build_container")
+    lxd = get_lxd(log)
+    container_name = os.environ.get(
+        "MAAS_SYSTEMTESTS_BUILD_CONTAINER", "maas-system-build"
+    )
+    if lxd.container_exists(container_name):
+        container = container_name
+    else:
+        cloud_config = {}
+
+        http_proxy = config.get("proxy", {}).get("http", "")
+        if http_proxy:
+            cloud_config["apt"] = {
+                "proxy": http_proxy,
+                "http_proxy": http_proxy,
+                "https_proxy": http_proxy,
+            }
+
+        user_data = "#cloud-config\n" + yaml.dump(cloud_config, default_style="|")
+        container = lxd.create_container(
+            container_name,
+            config["containers-image"],
+            user_data=user_data,
+            profile=LXD_PROFILE,
+        )
+
+    yield container
+
+
+@pytest.fixture(scope="session")
+def maas_deb_repo(
+    build_container: str, config: dict[str, Any]
+) -> Iterator[Optional[str]]:
+    """Build the MAAS deb and set up an APT repo."""
+    if "snap" in config:
+        yield None
+    else:
+        lxd = get_lxd(getLogger(f"{LOG_NAME}.maas_deb_repo"))
+        build_ip = lxd.get_ip_address(build_container)
+        http_proxy = config.get("proxy", {}).get("http", "")
+        proxy_env: Optional[dict[str, str]]
+        if http_proxy:
+            proxy_env = {
+                "http_proxy": http_proxy,
+                "https_proxy": http_proxy,
+                "no_proxy": build_ip,
+            }
+        else:
+            proxy_env = None
+
+        if not lxd.file_exists(build_container, "/var/www/html/repo/Packages.gz"):
+            _add_maas_ppa(lxd, build_container, config)
+            lxd.execute(
+                build_container,
+                [
+                    "apt",
+                    "install",
+                    "--yes",
+                    "git",
+                    "make",
+                    "devscripts",
+                    "equivs",
+                    "nginx",
+                ],
+                environment={"DEBIAN_FRONTEND": "noninteractive"},
+            )
+            maas_git_repo = config.get("deb", {}).get(
+                "git_repo", "https://git.launchpad.net/maas"
+            )
+            maas_git_branch = str(config.get("deb", {}).get("git_branch", "master"))
+            lxd.execute(
+                build_container,
+                [
+                    "git",
+                    "clone",
+                    "--single-branch",
+                    "--branch",
+                    maas_git_branch,
+                    "--depth",
+                    "100",
+                    "--recurse-submodules",
+                    maas_git_repo,
+                    "maas",
+                ],
+                environment=proxy_env,
+            )
+            lxd.execute(
+                build_container, ["git", "-C", "maas", "checkout", maas_git_branch]
+            )
+            lxd.execute(
+                build_container,
+                [
+                    "mk-build-deps",
+                    "--install",
+                    "--remove",
+                    "--tool",
+                    "apt-get --no-install-recommends -y",
+                    "maas/debian/control",
+                ],
+                environment={"DEBIAN_FRONTEND": "noninteractive"},
+            )
+
+            lxd.execute(
+                build_container,
+                ["make", "-C", "maas", "package"],
+                environment=proxy_env,
+            )
+            lxd.execute(
+                build_container,
+                [
+                    "sh",
+                    "-c",
+                    "cd build-area && (dpkg-scanpackages . | gzip -c > Packages.gz)",
+                ],
+            )
+            lxd.execute(build_container, ["mv", "build-area", "/var/www/html/repo"])
+        yield f"http://{build_ip}/repo"
+
+
+def get_user_data(
+    devices: dict[str, dict[str, Any]], cloud_config: Optional[dict[str, Any]] = None
+) -> str:
+    """Get cloud-config user data to set up MAAS tests."""
+    ethernets = {}
+    for name, device in sorted(devices.items()):
+        network = device["network"]
+        cidr_suffix = network["cidr"].split("/")[-1]
+        ethernet = {"addresses": [device["ip"] + "/" + cidr_suffix]}
+        ethernets[name] = ethernet
+    netplan = {"network": {"version": 2, "ethernets": ethernets}}
+    if cloud_config is None:
+        cloud_config = {}
+
+    cloud_config.setdefault("runcmd", []).append(["netplan", "apply"])
+    cloud_config.setdefault("write_files", []).append(
+        {
+            "path": "/etc/netplan/99-maas-systemtests.yaml",
+            "content": yaml.dump(netplan),
+        },
+    )
+
+    user_data: str = "#cloud-config\n" + yaml.dump(cloud_config, default_style="|")
+    return user_data
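The netplan construction in `get_user_data` is a pure dict transformation before the YAML serialization; a sketch of just that step (the `build_netplan` name is hypothetical):

```python
from typing import Any


def build_netplan(devices: dict[str, dict[str, Any]]) -> dict[str, Any]:
    # Mirror of the loop above: one static address per device,
    # reusing the prefix length from the network's CIDR.
    ethernets = {}
    for name, device in sorted(devices.items()):
        cidr_suffix = device["network"]["cidr"].split("/")[-1]
        ethernets[name] = {"addresses": [f"{device['ip']}/{cidr_suffix}"]}
    return {"network": {"version": 2, "ethernets": ethernets}}
```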
+
+
+@pytest.fixture(scope="session")
+def maas_container(config: dict[str, Any], build_container: str) -> str:
+    """Build a container for running MAAS in."""
+    lxd = get_lxd(getLogger(f"{LOG_NAME}.maas_container"))
+    container_name = os.environ.get(
+        "MAAS_SYSTEMTESTS_MAAS_CONTAINER", "maas-system-maas"
+    )
+    if lxd.container_exists(container_name):
+        container = container_name
+    else:
+        if not lxd.profile_exists(container_name):
+            lxd.copy_profile(LXD_PROFILE, container_name)
+        existing_maas_nics = [
+            device_name
+            for device_name in lxd.list_profile_devices(container_name)
+            if device_name.startswith("maas-ss-")
+        ]
+        for device_name in existing_maas_nics:
+            lxd.remove_profile_device(container_name, device_name)
+        maas_networks = config["maas"]["networks"]
+        devices = {}
+        for network_name, ip in maas_networks.items():
+            network = config["networks"][network_name]
+            device_name = "maas-ss-" + network_name
+            lxd.add_profile_device(
+                container_name,
+                device_name,
+                "nic",
+                "name=" + device_name,
+                "parent=" + network["bridge"],
+                "nictype=bridged",
+            )
+            devices[device_name] = {"ip": ip, "network": network}
+        cloud_config: dict[str, Any] = {
+            "package_upgrade": True,
+            "packages": ["dhcp-probe"],
+            "write_files": [
+                {"content": "response_wait_time 10", "path": "/etc/dhcp_probe.cf"},
+            ],
+        }
+        if "snap" in config:
+            maas_channel = config["snap"]["maas_channel"]
+            test_db_channel = config["snap"].get("test_db_channel")
+            if test_db_channel is None:
+                test_db_channel = maas_channel
+
+            cloud_config["snap"] = {
+                "commands": [
+                    "snap refresh snapd",
+                    f"snap install maas-test-db --channel={test_db_channel}",
+                    f"snap install maas --channel={maas_channel}",
+                ]
+            }
+            http_proxy = config.get("proxy", {}).get("http", "")
+            if http_proxy:
+                snap_proxy_cmd = (
+                    f'snap set system proxy.http="{http_proxy}" '
+                    f'proxy.https="{http_proxy}"'
+                )
+                cloud_config["snap"]["commands"].insert(0, snap_proxy_cmd)
+        user_data = get_user_data(devices, cloud_config=cloud_config)
+        container = lxd.create_container(
+            container_name,
+            config["containers-image"],
+            user_data=user_data,
+            profile=container_name,
+        )
+
+        if "snap" not in config:
+            build_container_ip = lxd.get_ip_address(build_container)
+            contents = dedent(
+                f"""\
+                Package: *
+                Pin: origin "{build_container_ip}"
+                Pin-Priority: 999
+                """
+            )
+            lxd.push_text_file(
+                container_name, contents, "/etc/apt/preferences.d/maas-build-pin-999"
+            )
+
+            http_proxy = config.get("proxy", {}).get("http", "")
+            if http_proxy:
+                contents = dedent(
+                    f"""\
+                    Acquire::http::Proxy "{http_proxy}";
+                    Acquire::https::Proxy "{http_proxy}";
+                    Acquire::http::Proxy::{build_container_ip} "DIRECT";
+                    Acquire::https::Proxy::{build_container_ip} "DIRECT";
+                    """
+                )
+                lxd.push_text_file(
+                    container_name, contents, "/etc/apt/apt.conf.d/80maas-system-test"
+                )
+
+    return container
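The apt pin pushed into the MAAS container above can be viewed as a tiny renderer; a sketch (the `apt_pin_for` name is hypothetical) that keeps the exact apt preferences syntax in one place:

```python
from textwrap import dedent


def apt_pin_for(origin_ip: str, priority: int = 999) -> str:
    # Pin every package from the given origin above default priority,
    # so the locally built debs win over the archive versions.
    return dedent(
        f"""\
        Package: *
        Pin: origin "{origin_ip}"
        Pin-Priority: {priority}
        """
    )
```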
+
+
+@pytest.fixture(scope="session")
+def vault(maas_container: str, config: dict[str, Any]) -> Optional[Vault]:
+    snap_channel = config.get("vault", {}).get("snap-channel")
+    if not snap_channel:
+        return None
+
+    lxd = get_lxd(getLogger(f"{LOG_NAME}.vault"))
+    lxd.execute(maas_container, ["apt", "install", "--yes", "ssl-cert"])
+    lxd.execute(
+        maas_container, ["snap", "install", "vault", f"--channel={snap_channel}"]
+    )
+    lxd.execute(
+        maas_container,
+        [
+            "cp",
+            "/etc/ssl/certs/ssl-cert-snakeoil.pem",
+            "/etc/ssl/private/ssl-cert-snakeoil.key",
+            "/var/snap/vault/common",
+        ],
+    )
+    vault_config = dedent(
+        """\
+        disable_mlock = true
+        ui = true
+
+        storage "file" {
+            path = "/var/snap/vault/common/data"
+        }
+
+        listener "tcp" {
+            address = "[::]:8200"
+            tls_key_file = "/var/snap/vault/common/ssl-cert-snakeoil.key"
+            tls_cert_file = "/var/snap/vault/common/ssl-cert-snakeoil.pem"
+        }
+        """
+    )
+    vault_config_file = "/var/snap/vault/common/vault.hcl"
+    lxd.push_text_file(maas_container, vault_config, vault_config_file)
+
+    systemd_unit = dedent(
+        f"""\
+        [Unit]
+        Description=Vault service for MAAS testing
+        Wants=network.target
+
+        [Service]
+        ExecStart=/snap/bin/vault server -config {vault_config_file}
+        Restart=on-failure
+        Type=simple
+
+        [Install]
+        WantedBy=multi-user.target
+        """
+    )
+    lxd.push_text_file(
+        maas_container, systemd_unit, "/etc/systemd/system/vault.service"
+    )
+    lxd.execute(maas_container, ["systemctl", "enable", "--now", "vault"])
+
+    vault = Vault(
+        container=maas_container,
+        secrets_mount="maas-secrets",
+        secrets_path="maas",
+        lxd=lxd,
+    )
+    vault.ensure_initialized()
+    vault.ensure_setup()
+    return vault
+
+
+@pytest.fixture(scope="session")
+def maas_region(
+    maas_container: str,
+    maas_deb_repo: Optional[str],
+    vault: Optional[Vault],
+    config: dict[str, Any],
+) -> Iterator[MAASRegion]:
+    """Install MAAS region controller in the container."""
+    lxd = get_lxd(getLogger(f"{LOG_NAME}.maas_region"))
+
+    region_ip = lxd.get_ip_address(maas_container)
+    installed_from_snap = "snap" in config
+
+    # setup self-signed certs, and add them to the trusted list
+    lxd.execute(maas_container, ["apt", "install", "--yes", "ssl-cert"])
+    lxd.execute(
+        maas_container,
+        [
+            "cp",
+            "-n",
+            "/etc/ssl/certs/ssl-cert-snakeoil.pem",
+            "/usr/share/ca-certificates/ssl-cert-snakeoil.crt",
+        ],
+    )
+    certs_list = lxd.get_file_contents(maas_container, "/etc/ca-certificates.conf")
+    if "ssl-cert-snakeoil.crt" not in certs_list:
+        certs_list += "ssl-cert-snakeoil.crt\n"
+        lxd.push_text_file(maas_container, certs_list, "/etc/ca-certificates.conf")
+        lxd.execute(maas_container, ["update-ca-certificates"])
+
+    if installed_from_snap:
+        maas_already_initialized = lxd.file_exists(
+            maas_container, "/var/snap/maas/common/snap_mode"
+        )
+        snap_list = lxd.execute(
+            maas_container, ["snap", "list", "maas"]
+        )  # just to record which version is running.
+        try:
+            version = snap_list.stdout.split("\n")[1].split()[1]
+        except IndexError:
+            version = ""
+        if not maas_already_initialized:
+            lxd.execute(
+                maas_container,
+                [
+                    "maas",
+                    "init",
+                    "region+rack",
+                    "--database-uri",
+                    "maas-test-db:///",
+                    "--maas-url",
+                    f"http://{region_ip}:5240/MAAS",
+                ],
+            )
+    else:
+        lxd.push_text_file(
+            maas_container,
+            f"deb [trusted=yes] {maas_deb_repo} ./\n",
+            "/etc/apt/sources.list.d/maas.list",
+        )
+        _add_maas_ppa(lxd, maas_container, config)
+        lxd.execute(
+            maas_container,
+            ["apt", "update"],
+            environment={"DEBIAN_FRONTEND": "noninteractive"},
+        )
+        lxd.execute(
+            maas_container,
+            ["apt", "install", "--yes", "maas"],
+            environment={"DEBIAN_FRONTEND": "noninteractive"},
+        )
+        policy = lxd.execute(
+            maas_container,
+            ["apt-cache", "policy", "maas"],
+            environment={"DEBIAN_FRONTEND": "noninteractive"},
+        )  # just to record which version is running.
+        try:
+            version = policy.stdout.split("\n")[1].strip().split(" ")[1][2:]
+        except IndexError:
+            version = ""
+
+    if vault:
+        maas_vault_status = yaml.safe_load(
+            lxd.quietly_execute(
+                maas_container, ["maas", "config-vault", "status"]
+            ).stdout.strip()
+        )
+        if maas_vault_status["status"] == "disabled":
+            role_id, wrapped_token = vault.create_approle(maas_container)
+            lxd.execute(
+                maas_container,
+                [
+                    "maas",
+                    "config-vault",
+                    "configure",
+                    vault.addr,
+                    role_id,
+                    wrapped_token,
+                    vault.secrets_path,
+                    "--mount",
+                    vault.secrets_mount,
+                ],
+            )
+            lxd.execute(
+                maas_container,
+                [
+                    "maas",
+                    "config-vault",
+                    "migrate",
+                ],
+            )
+
+    if "tls" in config:
+        lxd.execute(
+            maas_container, ["sh", "-c", f"mkdir -p {MAAS_CONTAINER_CERTS_PATH}"]
+        )
+        lxd.execute(
+            maas_container,
+            [
+                "cp",
+                "-n",
+                "/etc/ssl/certs/ssl-cert-snakeoil.pem",
+                "/etc/ssl/private/ssl-cert-snakeoil.key",
+                MAAS_CONTAINER_CERTS_PATH,
+            ],
+        )
+        # We need the cert so it can be added as a CA in the client container.
+        lxd.pull_file(
+            maas_container,
+            "/etc/ssl/certs/ssl-cert-snakeoil.pem",
+            "ssl-cert-snakeoil.pem",
+        )
+        lxd.execute(
+            maas_container,
+            [
+                "maas",
+                "config-tls",
+                "enable",
+                f"{MAAS_CONTAINER_CERTS_PATH}ssl-cert-snakeoil.key",
+                f"{MAAS_CONTAINER_CERTS_PATH}ssl-cert-snakeoil.pem",
+                "--port",
+                "5443",
+                "--yes",
+            ],
+        )
+
+    # We never want to access the region via the system proxy
+    if "no_proxy" not in os.environ:
+        os.environ["no_proxy"] = region_ip
+    elif region_ip not in os.environ["no_proxy"]:
+        os.environ["no_proxy"] = f"{os.environ['no_proxy']},{region_ip}"
+
+    url = http_url = f"http://{region_ip}:5240/MAAS/"
+    region_host = region_ip
+    if "tls" in config:
+        region_host = lxd.quietly_execute(
+            maas_container, ["hostname", "-f"]
+        ).stdout.strip()
+        url = f"https://{region_host}:5443/MAAS/"
+
+    region = MAASRegion(
+        url=url,
+        http_url=http_url,
+        host=region_host,
+        maas_container=maas_container,
+        installed_from_snap=installed_from_snap,
+    )
+    if "tls" in config:
+        region.restart()
+
+    if not region.user_exists(ADMIN_USER):
+        region.create_user(ADMIN_USER, ADMIN_PASSWORD, ADMIN_EMAIL)
+    token = region.get_api_token(ADMIN_USER)
+
+    with open("credentials.yaml", "w") as fh:
+        credentials = {
+            "region_host": region.host,
+            "region_ip": region_ip,
+            "region_url": region.url,
+            "api_key": token,
+        }
+        if installed_from_snap:
+            credentials["snap_channel"] = config["snap"]["maas_channel"]
+        yaml.dump(credentials, fh)
+
+    with open("version_under_test", "w") as fh:
+        fh.write(f"{version}\n")
+
+    if o11y := config.get("o11y"):
+        AGENT_PATH = "/opt/agent/agent-linux-amd64"
+        if not lxd.file_exists(maas_container, AGENT_PATH):
+            lxd.execute(maas_container, ["sh", "-c", "mkdir -p /opt/agent/"])
+            lxd.push_file(
+                maas_container, o11y["grafana_agent_file_path"].strip(), AGENT_PATH
+            )
+            lxd.execute(maas_container, ["sh", "-c", f"chmod a+x {AGENT_PATH}"])
+            AGENT_MAAS_SAMPLE = "/usr/share/maas/grafana_agent/agent.yaml.example"
+            if installed_from_snap:
+                AGENT_MAAS_SAMPLE = f"/snap/maas/current{AGENT_MAAS_SAMPLE}"
+            lxd.execute(
+                maas_container,
+                ["sh", "-c", f"cp {AGENT_MAAS_SAMPLE} /opt/agent/agent.yml"],
+            )
+            o11y_ip = o11y["o11y_ip"]
+            # FIXME: Could we have a unique identifier for each system-tests
+            #        execution? The hostname could be the jenkins job name?
+            hostname = "maas-system-maas"
+            telemetry_run_cmd = dedent(
+                f"""
+            systemd-run -u telemetry \
+    -E HOSTNAME="{hostname}" \
+    -E AGENT_WAL_DIR="/var/lib/grafana-agent/wal" \
+    -E AGENT_POS_DIR="/var/lib/grafana-agent/positions" \
+    -E PROMETHEUS_REMOTE_WRITE_URL="http://{o11y_ip}:9090/api/v1/write" \
+    -E LOKI_API_URL="http://{o11y_ip}:3100/loki/api/v1/push" \
+    -E MAAS_LOGS="/var/snap/maas/common/log/" \
+    -E MAAS_IS_REGION="true" \
+    -E MAAS_IS_RACK="true" \
+    -E MAAS_AZ="default" \
+    {AGENT_PATH} \
+        -config.expand-env \
+        -config.file=/opt/agent/agent.yml
+            """
+            )
+            lxd.execute(maas_container, ["sh", "-c", telemetry_run_cmd])
+    yield region
+
+
+@pytest.fixture(scope="session")
+def unauthenticated_maas_api_client(
+    maas_credentials: dict[str, str],
+    maas_client_container: str,
+) -> UnauthenticatedMAASAPIClient:
+    """Get an UnauthenticatedMAASAPIClient for interacting with MAAS."""
+    return UnauthenticatedMAASAPIClient(
+        maas_credentials["region_url"],
+        maas_client_container,
+        get_lxd(getLogger()),
+    )
+
+
+@pytest.fixture(scope="session")
+def maas_credentials() -> dict[str, str]:
+    """Load credentials from credentials.yaml."""
+    with open("credentials.yaml", "r") as fh:
+        credentials: dict[str, str] = yaml.safe_load(fh)
+    return credentials
+
+
+@pytest.fixture()
+def maas_api_client(
+    authenticated_admin: AuthenticatedAPIClient, testlog: Logger
+) -> Iterator[AuthenticatedAPIClient]:
+    """Configure logging for the AuthenticatedAPIClient."""
+    authenticated_admin.logger = testlog
+    yield authenticated_admin
+    authenticated_admin.logger = getLogger()
+
+
+@pytest.fixture
+def logstream() -> TextIO:
+    """Somewhere to stuff the logs."""
+    return io.StringIO()
+
+
+@pytest.fixture(autouse=True)
+@one_fixture_per_step
+def testlog(request: Any, logstream: TextIO) -> Iterator[Logger]:
+    """Collect the logs for a given test."""
+    logger = getLogger(request.node.name)
+    handler = StreamHandler(logstream)
+    logger.addHandler(handler)
+    yield logger
+    logger.removeHandler(handler)
+    # For one_fixture_per_step
+    yield logger
+
+
+@pytest.fixture(autouse=True)
+def skip_if_installed_from_snap(request: Any, config: dict[str, Any]) -> None:
+    """Skip tests that are deb specific."""
+    marker = request.node.get_closest_marker("skip_if_installed_from_snap")
+    if marker:
+        reason = marker.args[0]
+        if "snap" in config:
+            pytest.skip(reason)
+
+
+@pytest.fixture(autouse=True)
+def skip_if_installed_from_deb_package(request: Any, config: dict[str, Any]) -> None:
+    """Skip tests that are snap specific."""
+    marker = request.node.get_closest_marker("skip_if_installed_from_deb_package")
+    if marker:
+        reason = marker.args[0]
+        if "snap" not in config:
+            pytest.skip(reason)
+
+
+@pytest.fixture(scope="session")
+def ssh_key(authenticated_admin: AuthenticatedAPIClient) -> Iterator[paramiko.PKey]:
+    """Generate an SSH key to access deployed machines."""
+    key = paramiko.RSAKey.generate(1024)
+    public_key = f"{key.get_name()} {key.get_base64()}"
+    maas_key = authenticated_admin.create_ssh_key(public_key)
+    yield key
+    authenticated_admin.delete_ssh_key(maas_key)
+
+
+@pytest.fixture(scope="session")
+def zone(authenticated_admin: AuthenticatedAPIClient) -> Iterator[str]:
+    """Get or create a testing Zone."""
+    NAME = "a_zone_for_tests"
+
+    authenticated_admin.get_or_create_zone(
+        name=NAME, description="A zone available during tests."
+    )
+    yield NAME
+
+
+@pytest.fixture(scope="session")
+def pool(authenticated_admin: AuthenticatedAPIClient) -> Iterator[str]:
+    """Get or create a testing Pool."""
+    NAME = "a_pool_for_tests"
+
+    authenticated_admin.get_or_create_pool(
+        name=NAME, description="A pool available during tests."
+    )
+    yield NAME
+
+
+@pytest.fixture(scope="session")
+def tag_all(authenticated_admin: AuthenticatedAPIClient) -> Iterator[str]:
+    """Get or create a testing Tag."""
+    NAME = "all"
+
+    authenticated_admin.get_or_create_tag(
+        name=NAME, description="A tag present on all nodes", definition="true()"
+    )
+
+    yield NAME
+
+
+@pytest.fixture(scope="session")
+def maas_client_container(
+    maas_credentials: dict[str, str], config: dict[str, Any]
+) -> Iterator[str]:
+    """Set up a new LXD container with maas installed (in order to use maas CLI)."""
+
+    log = getLogger(f"{LOG_NAME}.client_container")
+    lxd = get_lxd(log)
+    container = os.environ.get("MAAS_SYSTEMTESTS_CLIENT_CONTAINER", "maas-client")
+
+    lxd.get_or_create(container, config["containers-image"], profile=LXD_PROFILE)
+    snap_channel = maas_credentials.get("snap_channel", "latest/edge")
+    lxd.execute(container, ["snap", "refresh", "snapd"])
+    lxd.execute(container, ["snap", "install", "maas", f"--channel={snap_channel}"])
+    lxd.execute(container, ["snap", "list", "maas"])
+    ensure_host_ip_mapping(
+        lxd, container, maas_credentials["region_host"], maas_credentials["region_ip"]
+    )
+    if "tls" in config:
+        lxd.execute(container, ["sh", "-c", f"mkdir -p {MAAS_CONTAINER_CERTS_PATH}"])
+        lxd.push_file(
+            container,
+            config["tls"]["cacerts"],
+            f"{MAAS_CONTAINER_CERTS_PATH}cacerts.pem",
+        )
+
+    yield container
+
+
+def ensure_host_ip_mapping(lxd: CLILXD, container: str, hostname: str, ip: str) -> None:
+    """Ensure the /etc/hosts file contains the specified host/ip mapping."""
+    if hostname == ip:
+        # no need to add the alias
+        return
+    line = f"{ip} {hostname}\n"
+    content = lxd.get_file_contents(container, "/etc/hosts")
+    if line in content:
+        return
+    content += line
+    lxd.push_text_file(container, content, "/etc/hosts")
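`ensure_host_ip_mapping` reduces to an idempotent append on the hosts-file content; factored as a pure function for illustration (the `ensure_hosts_line` name is hypothetical):

```python
def ensure_hosts_line(content: str, hostname: str, ip: str) -> str:
    # Return hosts-file content guaranteed to contain "ip hostname";
    # appends the mapping only when missing, so repeated calls are no-ops.
    if hostname == ip:
        return content  # no alias needed
    line = f"{ip} {hostname}\n"
    if line in content:
        return content
    return content + line
```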
diff --git a/systemtests/general_tests/__init__.py b/systemtests/general_tests/__init__.py
new file mode 100644
index 0000000..7a045a3
--- /dev/null
+++ b/systemtests/general_tests/__init__.py
@@ -0,0 +1,4 @@
+"""
+Use credentials.yaml to access a running MAAS deployment, assert
+its state is suitable for these tests, and execute them.
+"""
diff --git a/systemtests/general_tests/test_crud.py b/systemtests/general_tests/test_crud.py
new file mode 100644
index 0000000..47ab2ab
--- /dev/null
+++ b/systemtests/general_tests/test_crud.py
@@ -0,0 +1,183 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Iterator
+
+import pytest
+from pytest_steps import test_steps
+
+from systemtests.api import CannotDeleteError
+
+if TYPE_CHECKING:
+    from ..api import AuthenticatedAPIClient
+
+
+@test_steps("create", "update", "delete")
+def test_zone(maas_api_client: AuthenticatedAPIClient) -> Iterator[None]:
+    zone = maas_api_client.get_or_create_zone(
+        name="test-zone", description="A zone created by system-tests"
+    )
+    assert zone["name"] == "test-zone"
+    assert zone["description"] == "A zone created by system-tests"
+
+    zones = maas_api_client.list_zones()
+    assert zone in zones
+    yield
+
+    updated = maas_api_client.update_zone(
+        zone_name="test-zone", new_name="updated", new_description=""
+    )
+
+    assert updated["name"] == "updated"
+    assert updated["description"] == ""
+
+    zones = maas_api_client.list_zones()
+
+    assert updated in zones
+    yield
+
+    maas_api_client.delete_zone("updated")
+    zones = maas_api_client.list_zones()
+    assert updated not in zones
+
+    with pytest.raises(CannotDeleteError):
+        maas_api_client.delete_zone("default")
+    yield
+
+
+@test_steps("create", "update", "delete")
+def test_resource_pool(maas_api_client: AuthenticatedAPIClient) -> Iterator[None]:
+    pool = maas_api_client.get_or_create_pool(
+        name="test-pool", description="A resource pool created by system-tests"
+    )
+    assert pool["name"] == "test-pool"
+    assert pool["description"] == "A resource pool created by system-tests"
+
+    pools = maas_api_client.list_pools()
+    assert pool in pools
+    yield
+
+    updated = maas_api_client.update_pool(
+        pool=pool, new_name="updated", new_description=""
+    )
+
+    assert updated["name"] == "updated"
+    assert updated["description"] == ""
+
+    pools = maas_api_client.list_pools()
+
+    assert updated in pools
+    yield
+
+    maas_api_client.delete_pool(pool)
+    pools = maas_api_client.list_pools()
+    assert updated not in pools
+
+    default_pool = next(pool for pool in pools if pool["name"] == "default")
+    with pytest.raises(CannotDeleteError):
+        maas_api_client.delete_pool(default_pool)
+    yield
+
+
+@test_steps("create", "update", "delete")
+def test_spaces(maas_api_client: AuthenticatedAPIClient) -> Iterator[None]:
+    space = maas_api_client.get_or_create_space(
+        name="test-space", description="A space created by system-tests"
+    )
+    assert space["name"] == "test-space"
+    # This fails because description is not returned by MAAS's space read :(
+    # assert space["description"] == "A space created by system-tests"
+
+    spaces = maas_api_client.list_spaces()
+    assert space in spaces
+    yield
+
+    updated = maas_api_client.update_space(space=space, new_name="updated")
+
+    assert updated["name"] == "updated"
+
+    spaces = maas_api_client.list_spaces()
+    assert updated in spaces
+    yield
+
+    maas_api_client.delete_space(space)
+
+    spaces = maas_api_client.list_spaces()
+    assert updated not in spaces
+
+    default_space = next(space for space in spaces if space["name"] == "undefined")
+    with pytest.raises(CannotDeleteError):
+        maas_api_client.delete_space(default_space)
+    yield
+
+
+@test_steps("create", "update", "delete")
+def test_fabrics(maas_api_client: AuthenticatedAPIClient) -> Iterator[None]:
+    NAME = "test-fabric"
+    DEFAULT = "fabric-0"
+    UPDATED = "updated"
+    DESCRIPTION = "A fabric created by system-tests"
+    fabric = maas_api_client.get_or_create_fabric(name=NAME, description=DESCRIPTION)
+    assert fabric["name"] == NAME
+    # This fails because description is not returned by MAAS's fabric read :(
+    # assert fabric["description"] == DESCRIPTION
+
+    fabrics = maas_api_client.list_fabrics()
+    assert fabric in fabrics
+    yield
+
+    updated = maas_api_client.update_fabric(fabric=fabric, new_name=UPDATED)
+
+    assert updated["name"] == UPDATED
+
+    fabrics = maas_api_client.list_fabrics()
+
+    assert updated in fabrics
+    yield
+
+    maas_api_client.delete_fabric(fabric)
+
+    fabrics = maas_api_client.list_fabrics()
+    assert updated not in fabrics
+
+    default_fabric = next(fabric for fabric in fabrics if fabric["name"] == DEFAULT)
+    with pytest.raises(CannotDeleteError):
+        maas_api_client.delete_fabric(default_fabric)
+    yield
+
+
+@test_steps("create", "update", "delete")
+def test_vlans(maas_api_client: AuthenticatedAPIClient) -> Iterator[None]:
+    NAME = "test-vlan"
+    DEFAULT = "untagged"
+    UPDATED = "updated"
+    FABRIC_ID = 0
+    VID = 1234
+    DESCRIPTION = "A vlan created by system-tests"
+    vlan = maas_api_client.create_vlan(
+        name=NAME, fabric_id=FABRIC_ID, vid=VID, description=DESCRIPTION
+    )
+    assert vlan["name"] == NAME
+    # This fails because description is not returned by MAAS's vlan read :(
+    # assert vlan["description"] == DESCRIPTION
+
+    vlans = maas_api_client.list_vlans(fabric_id=FABRIC_ID)
+    assert vlan in vlans
+    yield
+
+    updated = maas_api_client.update_vlan(vlan=vlan, new_name=UPDATED)
+
+    assert updated["name"] == UPDATED
+
+    vlans = maas_api_client.list_vlans(fabric_id=FABRIC_ID)
+    assert updated in vlans
+    yield
+
+    maas_api_client.delete_vlan(vlan)
+
+    vlans = maas_api_client.list_vlans(fabric_id=FABRIC_ID)
+    assert updated not in vlans
+
+    default_vlan = next(vlan for vlan in vlans if vlan["name"] == DEFAULT)
+    with pytest.raises(CannotDeleteError):
+        maas_api_client.delete_vlan(default_vlan)
+    yield
diff --git a/systemtests/general_tests/test_machine_config.py b/systemtests/general_tests/test_machine_config.py
new file mode 100644
index 0000000..254f3dd
--- /dev/null
+++ b/systemtests/general_tests/test_machine_config.py
@@ -0,0 +1,76 @@
+from ..machine_config import MachineConfig
+
+
+def test_for_maas_power() -> None:
+    mc = MachineConfig.from_config(
+        "test",
+        {
+            "power_type": "ipmi",
+            "power_parameters": {
+                "power_driver": "LAN_2_0",
+                "power_address": "10.245.143.120",
+                "power_username": "admin",
+                "power_password": "calvin",
+            },
+            "mac_address": "1c:1b:0d:0d:52:7c",
+        },
+        [("instance_name", "foo")],
+    )
+    assert mc.maas_power_cmd_power_parameters == [
+        "ipmi",
+        "--instance-name",
+        "foo",
+        "--power-address",
+        "10.245.143.120",
+        "--power-driver",
+        "LAN_2_0",
+        "--power-pass",
+        "calvin",
+        "--power-user",
+        "admin",
+        "--privilege-level",
+        "ADMIN",
+    ]
+
+
+def test_for_api() -> None:
+    mc = MachineConfig.from_config(
+        "test",
+        {
+            "power_type": "ipmi",
+            "power_parameters": {
+                "power_driver": "LAN_2_0",
+                "power_address": "10.245.143.120",
+                "power_username": "admin",
+                "power_password": "calvin",
+            },
+            "mac_address": "1c:1b:0d:0d:52:7c",
+        },
+        [("instance_name", "foo")],
+    )
+    assert mc.machine_update_power_parameters == {
+        "power_type": "ipmi",
+        "power_parameters_power_address": "10.245.143.120",
+        "power_parameters_power_driver": "LAN_2_0",
+        "power_parameters_power_password": "calvin",
+        "power_parameters_power_username": "admin",
+        "power_parameters_instance_name": "foo",
+    }
+
+
+def test_hashable() -> None:
+    mc = MachineConfig.from_config(
+        "test",
+        {
+            "power_type": "ipmi",
+            "power_parameters": {
+                "power_driver": "LAN_2_0",
+                "power_address": "10.245.143.120",
+                "power_username": "admin",
+                "power_password": "calvin",
+            },
+            "mac_address": "1c:1b:0d:0d:52:7c",
+        },
+        [("instance_name", "foo")],
+    )
+    assert hash(mc) is not None
diff --git a/systemtests/lxd.py b/systemtests/lxd.py
new file mode 100644
index 0000000..27d2df2
--- /dev/null
+++ b/systemtests/lxd.py
@@ -0,0 +1,424 @@
+from __future__ import annotations
+
+import json
+import os
+import subprocess
+import tempfile
+import textwrap
+from functools import partial
+from itertools import chain
+from pathlib import Path
+from typing import TYPE_CHECKING, Optional
+
+from retry import retry
+
+from .device_config import DeviceConfig, fmt_lxd_options
+from .subprocess import run_with_logging
+
+if TYPE_CHECKING:
+    import logging
+    from typing import Any, Callable
+
+CONTAINER_USERDATA = textwrap.dedent(
+    """
+    #cloud-config
+    package_upgrade: true
+    """
+)
+
+
+class CloudInitDisabled(Exception):
+    pass
+
+
+class BadWebSocketHandshakeError(Exception):
+    """Raised when lxc execute gives a bad websocket handshake error."""
+
+
+class CLILXD:
+    """Backend that uses the CLI to talk to LXD."""
+
+    def __init__(self, logger: logging.Logger):
+        self.logger = logger
+
+    def container_exists(self, name: str) -> bool:
+        try:
+            self._run(["lxc", "info", name])
+        except subprocess.CalledProcessError:
+            return False
+        else:
+            return True
+
+    def _run(
+        self,
+        cmd: list[str],
+        prefix: Optional[list[str]] = None,
+        logger: Optional[logging.Logger] = None,
+    ) -> subprocess.CompletedProcess[str]:
+        __tracebackhide__ = True
+        if logger is None:
+            logger = self.logger
+        return run_with_logging(cmd, logger, prefix=prefix)
+
+    def create_container(
+        self,
+        name: str,
+        image: str,
+        user_data: Optional[str] = None,
+        profile: Optional[str] = None,
+    ) -> str:
+        if not self.container_exists(name):
+            self.logger.info(f"Creating container {name} (from {image})...")
+            cmd = [
+                "lxc",
+                "launch",
+                image,
+                "-e",
+            ]
+            if user_data is not None:
+                cmd.extend(["-c", f"user.user-data={user_data}"])
+            if profile is not None:
+                cmd.extend(["-p", profile])
+            cmd.append(name)
+            self._run(cmd)
+        self.logger.info(f"Container {name} created.")
+        self.logger.info("Waiting for boot to finish...")
+
+        @retry(exceptions=CloudInitDisabled, tries=120, delay=1, logger=self.logger)
+        def _cloud_init_wait() -> None:
+            process = self.execute(
+                name, ["timeout", "2000", "cloud-init", "status", "--wait", "--long"]
+            )
+            if "Cloud-init disabled by cloud-init-generator" in process.stdout:
+                raise CloudInitDisabled("Cloud-init is disabled.")
+            process = self.execute(
+                name, ["timeout", "2000", "snap", "wait", "system", "seed.loaded"]
+            )
+
+        _cloud_init_wait()
+        self.logger.info("Boot finished.")
+        return name
+
+    def get_or_create(
+        self,
+        name: str,
+        image: str,
+        user_data: Optional[str] = None,
+        profile: Optional[str] = None,
+    ) -> str:
+        if not self.container_exists(name):
+            self.create_container(name, image, user_data=user_data, profile=profile)
+        return name
+
+    def push_file(
+        self,
+        container: str,
+        source_file: str,
+        target_file: str,
+        uid: int = 0,
+        gid: int = 0,
+    ) -> None:
+        self._run(
+            [
+                "lxc",
+                "file",
+                "--quiet",
+                "push",
+                "--uid",
+                str(uid),
+                "--gid",
+                str(gid),
+                source_file,
+                f"{container}{target_file}",
+            ],
+        )
+
+    def push_text_file(
+        self,
+        container: str,
+        content: str,
+        target_file: str,
+        uid: int = 0,
+        gid: int = 0,
+    ) -> None:
+        with tempfile.NamedTemporaryFile() as source_file:
+            source_file.write(content.encode())
+            source_file.seek(0)
+            self.push_file(
+                container,
+                source_file.name,
+                target_file,
+                uid=uid,
+                gid=gid,
+            )
+
+    def file_exists(self, container: str, file_path: str) -> bool:
+        try:
+            self.quietly_execute(container, ["stat", file_path])
+        except subprocess.CalledProcessError:
+            return False
+        else:
+            return True
+
+    def pull_file(
+        self, container: str, file_path: str, local_path: str
+    ) -> subprocess.CompletedProcess[str]:
+        return self._run(
+            [
+                "lxc",
+                "file",
+                "--quiet",
+                "pull",
+                "-r",
+                f"{container}/{file_path}",
+                local_path,
+            ],
+        )
+
+    def get_file_contents(self, container: str, file_path: str) -> str:
+        filename = os.path.basename(file_path)
+        with tempfile.TemporaryDirectory() as tempdir:
+            self._run(
+                [
+                    "lxc",
+                    "file",
+                    "--quiet",
+                    "pull",
+                    f"{container}{file_path}",
+                    f"{tempdir}/",
+                ],
+            )
+            with open(os.path.join(tempdir, filename), "r") as f:
+                return f.read()
+
+    def _run_with_logger(
+        self,
+        executor: Callable[[], subprocess.CompletedProcess[str]],
+        logger: Optional[logging.Logger],
+    ) -> subprocess.CompletedProcess[str]:
+        __tracebackhide__ = True
+
+        # Retry the run, excepting websocket: bad handshake errors
+        @retry(exceptions=BadWebSocketHandshakeError, tries=3, logger=logger)
+        def _retry_bad_handshake() -> subprocess.CompletedProcess[str]:
+            __tracebackhide__ = True
+            try:
+                result = executor()
+            except subprocess.CalledProcessError as e:
+                if e.stderr.strip().endswith("websocket: bad handshake"):
+                    raise BadWebSocketHandshakeError()
+                else:
+                    raise
+            else:
+                return result
+
+        return _retry_bad_handshake()
+
+    def _get_lxc_command(
+        self, container: str, environment: Optional[dict[str, str]]
+    ) -> list[str]:
+        lxc_command = ["lxc", "exec", "--force-noninteractive", container]
+        if environment is not None:
+            for key, value in environment.items():
+                lxc_command.extend(["--env", f"{key}={value}"])
+        lxc_command.append("--")
+        return lxc_command
+
+    def execute(
+        self,
+        container: str,
+        command: list[str],
+        environment: Optional[dict[str, str]] = None,
+    ) -> subprocess.CompletedProcess[str]:
+        __tracebackhide__ = True
+        logger = self.logger.getChild(container)
+        lxc_command = self._get_lxc_command(container, environment)
+
+        # Suppress logging of the lxc wrapper for clearer logs
+        executor = partial(self._run, command, prefix=lxc_command, logger=logger)
+        return self._run_with_logger(executor, logger)
+
+    def quietly_execute(
+        self,
+        container: str,
+        command: list[str],
+        environment: Optional[dict[str, str]] = None,
+    ) -> subprocess.CompletedProcess[str]:
+        """Execute a command without logging it."""
+        __tracebackhide__ = True
+        lxc_command = self._get_lxc_command(container, environment)
+
+        executor = partial(
+            subprocess.run,
+            lxc_command + command,
+            capture_output=True,
+            check=True,
+            encoding="utf-8",
+            errors="backslashreplace",
+        )
+        return self._run_with_logger(executor, None)
+
+    def delete(self, instance: str) -> None:
+        self._run(["lxc", "delete", "--force", instance])
+
+    def get_ip_address(self, container: str) -> str:
+        @retry(
+            exceptions=RuntimeError, tries=30, delay=2, backoff=1.1, logger=self.logger
+        )
+        def _get_ip_address() -> str:
+            result = self._run(["lxc", "list", "--format", "json", container])
+            # lxc list does partial match, so we still need to find the entry
+            for entry in json.loads(result.stdout):
+                if entry["name"] != container:
+                    continue
+                for address in entry["state"]["network"]["eth0"]["addresses"]:
+                    self.logger.info(f"Considering address: {address}")
+                    if address["family"] == "inet":
+                        ip: str = address["address"]
+                        return ip
+            # Loop fell through without finding an IPv4 address.
+            raise RuntimeError("Couldn't find an IP address")
+
+        return _get_ip_address()
+
+    def profile_exists(self, profile_name: str) -> bool:
+        try:
+            self._run(
+                ["lxc", "profile", "show", profile_name],
+            )
+        except subprocess.CalledProcessError:
+            return False
+        else:
+            return True
+
+    def copy_profile(self, base_name: str, target_name: str) -> None:
+        self._run(["lxc", "profile", "copy", base_name, target_name])
+
+    def list_profile_devices(self, profile_name: str) -> list[str]:
+        result = self._run(["lxc", "profile", "device", "list", profile_name])
+        return result.stdout.splitlines()
+
+    def add_profile_device(
+        self, profile_name: str, name: str, device_type: str, *device_params: str
+    ) -> None:
+        self._run(
+            ["lxc", "profile", "device", "add", profile_name, name, device_type]
+            + list(device_params),
+        )
+
+    def remove_profile_device(self, profile_name: str, device_name: str) -> None:
+        self._run(
+            ["lxc", "profile", "device", "remove", profile_name, device_name],
+        )
+
+    def collect_sos_report(self, container: str, output: str) -> None:
+        container_tmp = "/tmp/sosreport"
+        output = f"{output}/sosreport"
+        self.execute(container, ["apt", "install", "--yes", "sosreport"])
+        self.execute(container, ["rm", "-rf", container_tmp])
+        self.execute(container, ["mkdir", "-p", container_tmp])
+        self.execute(
+            container,
+            ["sos", "report", "--batch", "-o", "maas", "--tmp-dir", container_tmp],
+        )
+        Path(output).mkdir(parents=True, exist_ok=True)
+        with tempfile.TemporaryDirectory(prefix="sosreport") as tempdir:
+            self.pull_file(container, container_tmp, f"{tempdir}/")
+            for f in os.listdir(f"{tempdir}/sosreport"):
+                os.rename(os.path.join(f"{tempdir}/sosreport", f), f"{output}/{f}")
+
+    def list_instance_devices(self, instance_name: str) -> list[str]:
+        result = self._run(["lxc", "config", "device", "list", instance_name])
+        return result.stdout.splitlines()
+
+    def add_instance_device(
+        self, instance_name: str, name: str, device_config: DeviceConfig
+    ) -> None:
+        self._run(
+            [
+                "lxc",
+                "config",
+                "device",
+                "add",
+                instance_name,
+                name,
+                device_config["type"],
+            ]
+            + fmt_lxd_options(device_config)
+        )
+
+    def remove_instance_device(self, instance_name: str, device_name: str) -> None:
+        """Remove a device from an instance."""
+
+        @retry(
+            exceptions=subprocess.CalledProcessError,
+            tries=5,
+            delay=0.5,
+            logger=self.logger,
+        )
+        def _remove_device() -> subprocess.CompletedProcess[str]:
+            return self._run(
+                ["lxc", "config", "device", "remove", instance_name, device_name]
+            )
+
+        _remove_device()
+
+    def list_instances(self) -> dict[str, dict[str, Any]]:
+        result = self._run(["lxc", "list", "-f", "json"])
+        lxc_list: list[dict[str, Any]] = json.loads(result.stdout)
+        instances = {instance["name"]: instance for instance in lxc_list}
+        return instances
+
+    def create_vm(self, instance_name: str, config: dict[str, str]) -> None:
+        args: list[str] = []
+        profile: Optional[str] = config.pop("profile", None)
+        if profile:
+            args += ["-p", profile]
+        args += list(chain.from_iterable(("-c", f"{k}={v}") for k, v in config.items()))
+        self._run(["lxc", "init", "--empty", "--vm", instance_name] + args)
+        self._run(
+            [
+                "lxc",
+                "config",
+                "device",
+                "override",
+                instance_name,
+                "eth0",
+                "boot.priority=10",
+            ]
+        )
+
+    def start(self, instance_name: str) -> subprocess.CompletedProcess[str]:
+        return self._run(["lxc", "start", instance_name])
+
+    def stop(
+        self, instance_name: str, force: bool = False
+    ) -> subprocess.CompletedProcess[str]:
+        argv = ["lxc", "stop", instance_name]
+        if force:
+            argv.append("--force")
+        return self._run(argv)
+
+    def is_running(self, instance_name: str) -> bool:
+        result = self._run(["lxc", "info", instance_name])
+        for line in result.stdout.splitlines():
+            key, _, value = line.partition(": ")
+            if key == "Status":
+                return value == "RUNNING"
+        return False
+
+    def restart(
+        self, instance_name: str, force: bool = False
+    ) -> subprocess.CompletedProcess[str]:
+        argv = ["lxc", "restart", instance_name]
+        if force:
+            argv.append("--force")
+        return self._run(argv)
+
+
+def get_lxd(logger: logging.Logger) -> CLILXD:
+    """Get the LXD backend to use.
+
+    By default, the CLI backend is returned.
+    """
+    return CLILXD(logger)
diff --git a/systemtests/machine_config.py b/systemtests/machine_config.py
new file mode 100644
index 0000000..50e0d90
--- /dev/null
+++ b/systemtests/machine_config.py
@@ -0,0 +1,97 @@
+from __future__ import annotations
+
+from dataclasses import dataclass, field
+from itertools import chain
+from typing import Dict, Mapping, Optional, Sequence, Union, cast
+
+from .device_config import DeviceConfig
+
+
+def frozen_power_parameters(
+    power_parameters: Mapping[str, str], extras: Sequence[tuple[str, str]]
+) -> Sequence[PowerParameter]:
+    return tuple(
+        PowerParameter(key, value)
+        for key, value in chain(power_parameters.items(), extras)
+    )
+
+
+@dataclass(order=True, frozen=True)
+class PowerParameter:
+    key: str
+    value: str
+
+    @property
+    def for_api(self) -> tuple[str, str]:
+        return (f"power_parameters_{self.key}", self.value)
+
+    @property
+    def for_maas_power(self) -> tuple[str, str]:
+        name_map = {"power_username": "power-user", "power_password": "power-pass"}
+        flag = name_map.get(self.key, self.key.replace("_", "-"))
+        return (f"--{flag}", self.value)
+
+
+def maas_power_cmd_power_parameters(
+    power_type: str, power_parameters: tuple[PowerParameter, ...]
+) -> list[str]:
+    ret = [power_type] + list(
+        chain.from_iterable(param.for_maas_power for param in power_parameters)
+    )
+    if power_type == "ipmi":
+        ret += ["--privilege-level", "ADMIN"]
+    return ret
+
+
+# MachineConfig must be immutable (hashable) to be used with pytest-steps,
+# so power_parameters must be a tuple.
+@dataclass(frozen=True)
+class MachineConfig:
+    name: str
+    power_type: str
+    power_parameters: tuple[PowerParameter, ...]
+    mac_address: str
+    devices_config: tuple[DeviceConfig, ...] = field(compare=False)
+    architecture: str = "amd64"
+    osystem: str = "ubuntu"
+    lxd_profile: Optional[str] = None
+
+    def __str__(self) -> str:
+        return f"{self.name}.{self.architecture}"
+
+    @classmethod
+    def from_config(
+        cls,
+        name: str,
+        config: dict[str, Union[str, dict[str, str]]],
+        extras: Sequence[tuple[str, str]] = (),
+    ) -> MachineConfig:
+        details = cast(Dict[str, str], config.copy())
+        frozen_params = frozen_power_parameters(
+            cast(Mapping[str, str], details.pop("power_parameters")), extras
+        )
+
+        devices_config: tuple[DeviceConfig, ...] = ()
+        if "devices" in details:
+            devices = cast(Dict[str, Dict[str, str]], details.pop("devices"))
+            for device_name, device_config in devices.items():
+                device_config["device_name"] = device_name
+                devices_config += (device_config,)
+
+        return cls(
+            name=name,
+            power_parameters=tuple(sorted(frozen_params)),
+            devices_config=devices_config,
+            **details,
+        )
+
+    @property
+    def maas_power_cmd_power_parameters(self) -> list[str]:
+        return maas_power_cmd_power_parameters(self.power_type, self.power_parameters)
+
+    @property
+    def machine_update_power_parameters(self) -> dict[str, str]:
+        return dict(
+            [param.for_api for param in self.power_parameters],
+            power_type=self.power_type,
+        )
diff --git a/systemtests/region.py b/systemtests/region.py
new file mode 100644
index 0000000..a6ef1bf
--- /dev/null
+++ b/systemtests/region.py
@@ -0,0 +1,207 @@
+from __future__ import annotations
+
+import subprocess
+from logging import getLogger
+from typing import TYPE_CHECKING, Any, Union
+
+from .lxd import get_lxd
+from .utils import retries
+
+if TYPE_CHECKING:
+    from .api import AuthenticatedAPIClient
+
+LOG = getLogger("systemtests.region")
+
+
+class MAASRegion:
+    def __init__(
+        self,
+        url: str,
+        http_url: str,
+        host: str,
+        maas_container: str,
+        installed_from_snap: bool,
+    ):
+        self.url = url
+        self.http_url = http_url
+        self.host = host
+        self.maas_container = maas_container
+        self.installed_from_snap = installed_from_snap
+
+    def __repr__(self) -> str:
+        package = "snap" if self.installed_from_snap else "deb"
+        return (
+            f"<MAASRegion {package} at {self.url!r} in container {self.maas_container}>"
+        )
+
+    def execute(self, command: list[str]) -> subprocess.CompletedProcess[str]:
+        lxd = get_lxd(LOG)
+        return lxd.execute(self.maas_container, command)
+
+    def get_api_token(self, user: str) -> str:
+        result = self.execute(["maas", "apikey", "--username", user])
+        return result.stdout.rstrip("\n")
+
+    def user_exists(self, user: str) -> bool:
+        try:
+            self.execute(["maas", "apikey", "--username", user])
+        except subprocess.CalledProcessError:
+            return False
+        else:
+            return True
+
+    def create_user(self, user: str, password: str, email: str) -> None:
+        self.execute(
+            [
+                "maas",
+                "createadmin",
+                "--username",
+                user,
+                "--password",
+                password,
+                "--email",
+                email,
+            ]
+        )
+
+    def get_proxy_settings(self, config: dict[str, Any]) -> dict[str, Union[bool, str]]:
+        proxy_config = config.get("proxy", {})
+        http_proxy = proxy_config.get("http", "")
+        use_internal = proxy_config.get("use_internal", True)
+        use_peer_proxy = bool(http_proxy) and use_internal
+        enable_http_proxy = use_internal
+        return {
+            "http_proxy": http_proxy,
+            "use_peer_proxy": use_peer_proxy,
+            "enable_http_proxy": enable_http_proxy,
+        }
+
+    def _no_dhcp_server_running_on_network(self, network: str) -> bool:
+        device_name = "maas-ss-" + network
+        stderr = ""
+        try:
+            self.execute(
+                ["timeout", "-s", "SIGKILL", "5", "dhcp_probe", "-f", device_name]
+            )
+        except subprocess.CalledProcessError as err:
+            stderr = err.stderr
+        for line in stderr.split("\n"):
+            if line.startswith("warn:   received unexpected response"):
+                LOG.critical("DHCP currently running: %s", line)
+                return False
+        return True
+
+    def enable_dhcp(
+        self, config: dict[str, Any], client: AuthenticatedAPIClient
+    ) -> None:
+        rack_controllers = get_rack_controllers(client)
+        for network_name, network in config["networks"].items():
+            assert self._no_dhcp_server_running_on_network(
+                network_name
+            ), "There is another DHCP server running. Check logs"
+            primary_controller, link = get_dhcp_controller(
+                rack_controllers, network["cidr"]
+            )
+            dhcp_fabric = link["subnet"]["vlan"]["fabric"]
+            dhcp_vlan = link["subnet"]["vlan"]["name"]
+            client.create_ip_range(
+                network["dynamic"]["start"], network["dynamic"]["end"], "dynamic"
+            )
+            for reserved in network.get("reserved", []):
+                client.create_ip_range(
+                    reserved["start"],
+                    reserved["end"],
+                    "reserved",
+                    reserved.get("comment"),
+                )
+            client.enable_dhcp(dhcp_fabric, dhcp_vlan, primary_controller)
+
+        # Wait for the task to complete and create the dhcpd.conf file.
+        for retry_info in retries(timeout=600, delay=3):
+            if self.is_dhcp_enabled():
+                break
+        else:
+            raise AssertionError(
+                f"DHCP couldn't be enabled after {retry_info.attempt} attempts"
+                f" over {retry_info.elapsed} seconds."
+            )
+
+    def disable_dhcp(
+        self, config: dict[str, Any], client: AuthenticatedAPIClient
+    ) -> None:
+        rack_controllers = get_rack_controllers(client)
+        ip_ranges = client.list_ip_ranges()
+        for network in config["networks"].values():
+            primary_controller, link = get_dhcp_controller(
+                rack_controllers, network["cidr"]
+            )
+            dhcp_fabric = link["subnet"]["vlan"]["fabric"]
+            dhcp_vlan = link["subnet"]["vlan"]["name"]
+            client.disable_dhcp(dhcp_fabric, dhcp_vlan)
+            for ip_range in ip_ranges:
+                if ip_range["start_ip"] == network["dynamic"]["start"]:
+                    client.delete_ip_range(ip_range["id"])
+                for reserved in network.get("reserved", []):
+                    if ip_range["start_ip"] == reserved["start"]:
+                        client.delete_ip_range(ip_range["id"])
+        # Wait for the task to complete and remove the dhcpd.conf file.
+        for retry_info in retries(timeout=600, delay=3):
+            if not self.is_dhcp_enabled():
+                break
+        else:
+            raise AssertionError(
+                f"DHCP couldn't be disabled after {retry_info.attempt} attempts"
+                f" over {retry_info.elapsed} seconds."
+            )
+
+    def is_dhcp_enabled(self) -> bool:
+        dhcpd_conf_path = "/var/lib/maas/dhcpd.conf"
+        if self.installed_from_snap:
+            dhcpd_conf_path = "/var/snap/maas/common/maas/dhcpd.conf"
+        lxd = get_lxd(LOG)
+        return lxd.file_exists(self.maas_container, dhcpd_conf_path)
+
+    def set_config(self, key: str, value: str = "") -> None:
+        if self.installed_from_snap:
+            prefix_cmd = ["maas", "config"]
+        else:
+            prefix_cmd = ["maas-region", "local_config_set"]
+        cmd = prefix_cmd + [f"--{key}", value]
+        self.execute(cmd)
+
+    def restart(self) -> None:
+        if self.installed_from_snap:
+            cmd = ["snap", "restart", "maas"]
+        else:
+            cmd = ["systemctl", "restart", "maas-regiond"]
+        self.execute(cmd)
+
+    def enable_debug(self) -> None:
+        if self.installed_from_snap:
+            self.execute(["maas", "config", "--enable-debug"])
+        else:
+            self.set_config("debug", "True")
+
+
+def get_dhcp_controller(
+    rack_controllers: list[dict[str, Any]], cidr: str
+) -> tuple[dict[str, Any], dict[str, Any]]:
+    for rack_controller in rack_controllers:
+        for interface in rack_controller["interface_set"]:
+            for link in interface["links"]:
+                if link["subnet"]["cidr"] == cidr:
+                    return rack_controller, link
+    raise AssertionError(f"Couldn't find rack controller managing DHCP for {cidr}")
+
+
+def get_rack_controllers(client: AuthenticatedAPIClient) -> list[dict[str, Any]]:
+    """Repeatedly attempt to get rack controllers"""
+    for retry_info in retries(timeout=300, delay=10):
+        rack_controllers = client.list_rack_controllers()
+        if rack_controllers:
+            return rack_controllers
+    # retries() exhausted without any rack controllers appearing.
+    raise AssertionError(
+        f"No rack controllers found after {retry_info.attempt} attempts"
+        f" over {retry_info.elapsed} seconds"
+    )
diff --git a/systemtests/state.py b/systemtests/state.py
new file mode 100644
index 0000000..5c55070
--- /dev/null
+++ b/systemtests/state.py
@@ -0,0 +1,238 @@
+from __future__ import annotations
+
+import json
+import time
+from logging import getLogger
+from typing import TYPE_CHECKING, Any, Iterator, Set, cast
+
+import pytest
+from retry import retry
+
+from .region import get_rack_controllers
+from .utils import waits_for_event_after
+
+if TYPE_CHECKING:
+    from logging import Logger
+
+    from .api import AuthenticatedAPIClient, UnauthenticatedMAASAPIClient
+    from .machine_config import MachineConfig
+    from .region import MAASRegion
+
+LOG = getLogger("systemtests.state")
+
+
+@pytest.fixture(scope="session")
+def authenticated_admin(
+    maas_credentials: dict[str, str],
+    unauthenticated_maas_api_client: UnauthenticatedMAASAPIClient,
+) -> AuthenticatedAPIClient:
+    token = maas_credentials["api_key"]
+
+    @retry(tries=5, delay=2, logger=LOG)
+    def _retry_log_in(session_name: str, token: str) -> AuthenticatedAPIClient:
+        _, api_client = unauthenticated_maas_api_client.log_in(session_name, token)
+        return api_client
+
+    api_client = _retry_log_in("maas_under_test", token)
+    version_information = api_client.read_version_information()
+    LOG.info(
+        "MAAS Version: {version}, subversion: {subversion}".format(
+            **version_information
+        )
+    )
+    return api_client
+
+
+@pytest.fixture(scope="session")
+def import_images_and_wait_until_synced(
+    authenticated_admin: AuthenticatedAPIClient,
+    config: dict[str, Any],
+) -> None:
+    architectures = set()
+    osystems = set()
+    for machine, power_config in config.get("machines", {}).get("hardware", {}).items():
+        architectures.add(power_config.get("architecture", "amd64"))
+        osystems.add(power_config.get("osystem", "ubuntu"))
+
+    started_importing_regex = "^Started importing of boot images from"
+    # Sometimes the region has already started importing images; stop it
+    # before configuring the architectures so that we get all the images
+    # we need as soon as possible.
+
+    # TODO Arguably MAAS should do this for us!
+    if authenticated_admin.is_importing_boot_resources():
+        authenticated_admin.stop_importing_boot_resources()
+    with waits_for_event_after(
+        authenticated_admin,
+        event_type="Region import info",
+        description_regex=started_importing_regex,
+    ):
+        authenticated_admin.update_architectures_in_boot_source_selections(
+            architectures
+        )
+        authenticated_admin.import_boot_resources()
+    windows_path = None
+    if "windows" in osystems:
+        windows_path = "/home/ubuntu/windows-win2012hvr2-amd64-root-dd"
+        # 1000/1000 is default uid/gid of ubuntu user
+        authenticated_admin.api_client.push_file(
+            source_file=config["windows_image_file_path"],
+            target_file=windows_path,
+            uid=1000,
+            gid=1000,
+        )
+
+    region_start_point = time.time()
+    while authenticated_admin.is_importing_boot_resources():
+        LOG.debug("Sleeping for 10s for region image import")
+        time.sleep(10)
+
+    region_time_taken = time.time() - region_start_point
+    LOG.info(
+        f"Took {region_time_taken:0.1f}s for region to complete importing "
+        f"{len(osystems)} OSs across {len(architectures)} architectures"
+    )
+    if "windows" in osystems:
+        windows_start_point = time.time()
+        windows_path = cast(str, windows_path)
+        authenticated_admin.create_boot_resource(
+            name="windows/win2012hvr2",
+            title="Windows2012HVR2",
+            architecture="amd64/generic",
+            filetype="ddtgz",
+            image_file_path=windows_path,
+        )
+        windows_time_taken = time.time() - windows_start_point
+        LOG.info(f"Took {windows_time_taken:0.1f}s to upload Windows")
+    rack_start_point = time.time()
+    for rack_controller in get_rack_controllers(authenticated_admin):
+        boot_images = authenticated_admin.list_boot_images(rack_controller)
+        while boot_images["status"] != "synced":
+            LOG.debug("Sleeping for 30s to wait for rack to finish importing images")
+            time.sleep(30)
+            boot_images = authenticated_admin.list_boot_images(rack_controller)
+            if boot_images["status"] == "out-of-sync":
+                authenticated_admin.import_boot_resources_in_rack(rack_controller)
+    rack_time_taken = time.time() - rack_start_point
+    LOG.info(
+        f"Took {rack_time_taken:0.1f}s for rack(s) to complete importing "
+        f"{len(osystems)} OSs across {len(architectures)} architectures"
+    )
+
+
+@pytest.fixture(scope="session")
+def configured_maas(
+    maas_region: MAASRegion,
+    authenticated_admin: AuthenticatedAPIClient,
+    config: dict[str, Any],
+) -> Iterator[MAASRegion]:
+    settings = maas_region.get_proxy_settings(config)
+    settings.update(config["maas"].get("config", {}))
+    for key, value in settings.items():
+        if not isinstance(value, str):
+            value = json.dumps(value)
+        authenticated_admin.execute(
+            ["maas", "set-config", "name=" + key, "value=" + value], json_output=False
+        )
+    if config.get("o11y"):
+        authenticated_admin.execute(
+            ["maas", "set-config", "name=promtail_port", "value=5238"],
+            json_output=False,
+        )
+        authenticated_admin.execute(
+            ["maas", "set-config", "name=promtail_enabled", "value=true"],
+            json_output=False,
+        )
+
+    authenticated_admin.execute(
+        [
+            "domain",
+            "update",
+            "0",
+            f"name={config['maas'].get('domain_name', 'systemtests')}",
+        ]
+    )
+    subnets = authenticated_admin.list_subnets()
+    if "test-subnet" in [subnet["name"] for subnet in subnets]:
+        authenticated_admin.delete_subnet("test-subnet")
+    if maas_region.is_dhcp_enabled():
+        maas_region.disable_dhcp(config, authenticated_admin)
+    yield maas_region
+
+
+def all_rack_controllers_commissioned(
+    logger: Logger, admin: AuthenticatedAPIClient
+) -> bool:
+    for rack in get_rack_controllers(admin):
+        status = rack["commissioning_status"]
+        status_name = rack["commissioning_status_name"]
+        logger.debug(
+            f"Rack controller {rack['hostname']} is {status_name} commissioning"
+        )
+        # FAILED = 3, TIMEDOUT = 4, FAILED_INSTALLING = 8, FAILED_APPLYING_NETCONF = 11
+        assert status not in {3, 4, 8, 11}, "Failed to commission rack"
+        # PENDING = 0, RUNNING = 1, PASSED = 2
+        if status < 2:
+            return False
+    return True
+
+
+@pytest.fixture(scope="session")
+def ready_maas(
+    authenticated_admin: AuthenticatedAPIClient,
+    configured_maas: MAASRegion,
+    config: dict[str, Any],
+) -> Iterator[MAASRegion]:
+    while not all_rack_controllers_commissioned(LOG, authenticated_admin):
+        LOG.debug("Sleeping for 5s to wait for rack(s) to finish commissioning")
+        time.sleep(5)
+    if not configured_maas.is_dhcp_enabled():
+        configured_maas.enable_dhcp(config, authenticated_admin)
+    yield configured_maas
+
+
+@pytest.fixture(scope="module")
+def ready_remote_maas(
+    authenticated_admin: AuthenticatedAPIClient,
+    needed_architectures: Set[str],
+) -> None:
+    while not all_rack_controllers_commissioned(LOG, authenticated_admin):
+        LOG.debug("Sleeping for 5s to wait for rack(s) to finish commissioning")
+        time.sleep(5)
+    assert authenticated_admin.is_dhcp_enabled()
+
+    for rack_controller in authenticated_admin.list_rack_controllers():
+        boot_images = authenticated_admin.list_boot_images(rack_controller)
+
+        bootloaders_arches = {
+            image["architecture"]
+            for image in boot_images["images"]
+            if image["name"].startswith("bootloader/")
+        }
+        image_arches = {
+            image["architecture"]
+            for image in boot_images["images"]
+            if not image["name"].startswith("bootloader/")
+        }
+        assert needed_architectures <= bootloaders_arches
+        assert needed_architectures <= image_arches
+        assert boot_images["status"] == "synced"
+
+
+# FIXME: scope should be function, using @pytest_steps.cross_steps_fixture,
+# but that didn't work with the tear-down; pytest_steps should support it:
+# https://github.com/smarie/python-pytest-steps/blob/5cfd9c45d2383bce9276a2818da1ebad1ae39db8/pytest_steps/steps.py#L221
+# This could be an issue when testing several images on the same machine
+@pytest.fixture(scope="module")
+def maas_without_machine(
+    authenticated_admin: AuthenticatedAPIClient,
+    machine_config: MachineConfig,
+) -> Iterator[None]:
+    assert (
+        authenticated_admin.list_machines(mac_address=machine_config.mac_address) == []
+    )
+    yield
+    if machines := authenticated_admin.list_machines(
+        mac_address=machine_config.mac_address
+    ):
+        authenticated_admin.delete_machine(machine=machines[0])
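The value coercion in `configured_maas` is subtle: non-string settings are JSON-encoded before being handed to `maas set-config`, so Python `False` becomes the string `"false"`. A sketch of that serialisation step in isolation:

```python
import json


def to_set_config_cmds(settings: dict) -> list:
    """Sketch of how configured_maas serialises values for `maas set-config`."""
    cmds = []
    for key, value in settings.items():
        if not isinstance(value, str):
            # booleans/numbers/lists become their JSON form, e.g. False -> "false"
            value = json.dumps(value)
        cmds.append(["maas", "set-config", "name=" + key, "value=" + value])
    return cmds
```

This keeps the CLI happy with lowercase booleans and quoted structures without special-casing each setting.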
diff --git a/systemtests/subprocess.py b/systemtests/subprocess.py
new file mode 100644
index 0000000..51ef9a5
--- /dev/null
+++ b/systemtests/subprocess.py
@@ -0,0 +1,70 @@
+from __future__ import annotations
+
+import io
+import selectors
+import subprocess
+from select import PIPE_BUF
+from typing import TYPE_CHECKING, Optional
+
+if TYPE_CHECKING:
+    import logging
+
+
+def run_with_logging(
+    cmd: list[str],
+    logger: logging.Logger,
+    env: Optional[dict[str, str]] = None,
+    prefix: Optional[list[str]] = None,
+) -> subprocess.CompletedProcess[str]:
+    __tracebackhide__ = True
+    if prefix is None:
+        prefix = []
+    logger.info("┌ " + " ".join(repr(arg) if "\n" in arg else arg for arg in cmd))
+    process = subprocess.Popen(
+        prefix + cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
+        encoding="utf-8",
+        errors="backslashreplace",
+        env=env,
+    )
+    stdout = io.StringIO()
+    stderr = io.StringIO()
+    # So mypy knows we've got real things
+    assert process.stdout is not None
+    assert process.stderr is not None
+    sel = selectors.DefaultSelector()
+    sel.register(process.stdout, selectors.EVENT_READ, ("|", stdout.write))
+    sel.register(process.stderr, selectors.EVENT_READ, ("|E", stderr.write))
+    with sel:
+        while sel.get_map():
+            for select, mask in sel.select():
+                log_prefix, accumulator = select.data
+                fileobj = select.fileobj
+                # so mypy knows it's a TextIOWrapper
+                assert isinstance(fileobj, io.TextIOWrapper)
+                # read as much as we can from the pipe without blocking
+                line = fileobj.readline(PIPE_BUF)
+                if not line:
+                    sel.unregister(fileobj)
+                    continue
+                if line[-1] == "\n":
+                    # A whole line, strip the newline
+                    log_line = log_prefix + line[:-1]
+                else:
+                    # A partial line, should be PIPE_BUF long -
+                    # indicate to log reader that it continues on the next line
+                    log_line = log_prefix + line + "…"
+                logger.info(log_line)
+                accumulator(line)
+
+    returncode = process.wait()
+    if returncode != 0:
+        logger.warning(f"└ ❌ Return code: {returncode}")
+        raise subprocess.CalledProcessError(
+            returncode, prefix + cmd, stdout.getvalue(), stderr.getvalue()
+        )
+    logger.info("└ ✔")
+    return subprocess.CompletedProcess(
+        process.args, returncode, stdout.getvalue(), stderr.getvalue()
+    )
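The selector loop in `run_with_logging` exists to drain both pipes as output arrives, so neither stdout nor stderr can fill up and block the child. A stripped-down version of the same technique, without the logging and partial-line handling:

```python
import io
import selectors
import subprocess
import sys


def run_streaming(cmd):
    """Minimal sketch of run_with_logging: drain stdout and stderr as
    they arrive, instead of blocking on one pipe while the other fills."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8"
    )
    assert proc.stdout is not None and proc.stderr is not None
    buffers = {proc.stdout: io.StringIO(), proc.stderr: io.StringIO()}
    sel = selectors.DefaultSelector()
    for pipe in buffers:
        sel.register(pipe, selectors.EVENT_READ)
    with sel:
        while sel.get_map():
            for key, _mask in sel.select():
                line = key.fileobj.readline()
                if not line:  # EOF on this pipe
                    sel.unregister(key.fileobj)
                else:
                    buffers[key.fileobj].write(line)
    returncode = proc.wait()
    return returncode, buffers[proc.stdout].getvalue(), buffers[proc.stderr].getvalue()
```

Registering both pipes with one selector avoids the classic deadlock of reading stdout to EOF while the child blocks writing to a full stderr pipe.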
diff --git a/systemtests/tests_per_machine/__init__.py b/systemtests/tests_per_machine/__init__.py
new file mode 100644
index 0000000..b847735
--- /dev/null
+++ b/systemtests/tests_per_machine/__init__.py
@@ -0,0 +1 @@
+"""Contains tests that are per machine, run in parallel by tox for efficiency."""
diff --git a/systemtests/tests_per_machine/test_hardware_sync.py b/systemtests/tests_per_machine/test_hardware_sync.py
new file mode 100644
index 0000000..05533c3
--- /dev/null
+++ b/systemtests/tests_per_machine/test_hardware_sync.py
@@ -0,0 +1,252 @@
+from __future__ import annotations
+
+from contextlib import contextmanager, suppress
+from functools import partial
+from itertools import chain, repeat
+from subprocess import CalledProcessError, CompletedProcess
+from typing import TYPE_CHECKING, Callable, Iterator
+
+from pytest_steps import test_steps
+from pytest_steps.steps_generator import optional_step
+from retry.api import retry, retry_call
+
+from ..device_config import DeviceConfig, HardwareSyncMachine
+from ..lxd import get_lxd
+from ..utils import (
+    retries,
+    ssh_execute_command,
+    wait_for_machine,
+    wait_for_machine_to_power_off,
+)
+
+if TYPE_CHECKING:
+    from logging import Logger
+
+    from paramiko import PKey
+
+    from ..api import AuthenticatedAPIClient, Machine
+    from ..lxd import CLILXD
+
+
+def assert_device_not_added_to_lxd_instance(
+    lxd: CLILXD,
+    machine_name: str,
+    device_config: DeviceConfig,
+) -> None:
+    assert device_config["device_name"] not in lxd.list_instance_devices(machine_name)
+
+
+def _check_machine_for_device(
+    maas_api_client: AuthenticatedAPIClient,
+    machine: Machine,
+    device_config: DeviceConfig,
+) -> bool:
+    machine = maas_api_client.read_machine(machine)
+    device_type = device_config["type"]
+    if device_type == "disk":
+        disks = [blockdevice["serial"] for blockdevice in machine["blockdevice_set"]]
+        return (
+            device_config["device_name"] in disks
+            or "lxd_" + device_config["device_name"] in disks
+        )
+    elif device_type == "nic":
+        return device_config["hwaddr"] in [
+            iface["mac_address"] for iface in machine["interface_set"]
+        ]
+    elif device_type == "usb":
+        devices = maas_api_client.read_machine_devices(machine)
+        return device_config["productid"] in [dev.get("product_id") for dev in devices]
+    elif device_type == "pci":
+        devices = maas_api_client.read_machine_devices(machine)
+        return device_config["address"] in [dev.get("pci_address") for dev in devices]
+    raise ValueError(f"unknown device type: {device_type}")
+
+
+def check_machine_for_device(
+    maas_api_client: AuthenticatedAPIClient,
+    machine: Machine,
+    device_config: DeviceConfig,
+) -> None:
+    assert _check_machine_for_device(maas_api_client, machine, device_config)
+
+
+def check_machine_does_not_have_device(
+    maas_api_client: AuthenticatedAPIClient,
+    machine: Machine,
+    device_config: DeviceConfig,
+) -> None:
+    assert not _check_machine_for_device(maas_api_client, machine, device_config)
+
+
+@contextmanager
+def powered_off_vm(lxd: CLILXD, instance_name: str) -> Iterator[None]:
+    """Context-manager to do something with LXD instance off."""
+    strategy_iter: Iterator[Callable[[str], CompletedProcess[str]]] = chain(
+        repeat(lxd.stop, 3), repeat(partial(lxd.stop, force=True))
+    )
+
+    @retry(tries=10, delay=5, logger=lxd.logger)
+    def power_off(instance_name: str) -> None:
+        if lxd.is_running(instance_name):
+            stop_attempt = next(strategy_iter)
+            stop_attempt(instance_name)
+            if lxd.is_running(instance_name):
+                raise Exception(f"LXD {instance_name} still running.")
+
+    @retry(tries=10, delay=5, backoff=1.2, logger=lxd.logger)
+    def power_on(instance_name: str) -> None:
+        lxd.start(instance_name)
+
+    power_off(instance_name)
+    yield
+    power_on(instance_name)
+
+
+@test_steps(
+    "deploy",
+    "add_device",
+    "release",
+    "redeploy",
+    "remove_device",
+    "cleanup",
+)
+def test_hardware_sync(
+    maas_api_client: AuthenticatedAPIClient,
+    hardware_sync_machine: HardwareSyncMachine,
+    ready_remote_maas: None,
+    ssh_key: PKey,
+    testlog: Logger,
+) -> Iterator[None]:
+    lxd = get_lxd(logger=testlog)
+
+    maas_api_client.set_config("hardware_sync_interval", "5s")
+
+    maas_api_client.deploy_machine(
+        hardware_sync_machine.machine,
+        osystem="ubuntu",
+        distro_series="focal",
+        enable_hw_sync="true",
+    )
+
+    hardware_sync_machine.machine = wait_for_machine(
+        maas_api_client,
+        hardware_sync_machine.machine,
+        status="Deployed",
+        abort_status="Failed deployment",
+        machine_id=hardware_sync_machine.name,
+        # Bump timeout to 40m since VMs are slower
+        timeout=40 * 60,
+    )
+
+    stdout = ssh_execute_command(
+        hardware_sync_machine.machine, "ubuntu", ssh_key, "cat /etc/cloud/build.info"
+    )
+    testlog.info(f"{hardware_sync_machine.name}: {stdout}")
+
+    yield
+
+    with optional_step("add_device") as add_device:
+        # We don't have a way of remotely adding physical hardware,
+        # so this test only covers LXD instances.
+        assert hardware_sync_machine.machine["power_type"] == "lxd"
+
+        for device_config in hardware_sync_machine.devices_config:
+            assert_device_not_added_to_lxd_instance(
+                lxd, hardware_sync_machine.name, device_config
+            )
+
+        with powered_off_vm(lxd, hardware_sync_machine.name):
+            for device_config in hardware_sync_machine.devices_config:
+                lxd.add_instance_device(
+                    hardware_sync_machine.name,
+                    device_config["device_name"],
+                    device_config,
+                )
+
+        for device_config in hardware_sync_machine.devices_config:
+            retry_call(
+                check_machine_for_device,
+                fargs=[maas_api_client, hardware_sync_machine.machine, device_config],
+                tries=10,
+                delay=10,
+                backoff=1.2,
+                logger=testlog,
+            )
+
+    yield add_device
+
+    with optional_step("release", depends_on=add_device) as release:
+        if release.should_run():
+            maas_api_client.release_machine(hardware_sync_machine.machine)
+            wait_for_machine(
+                maas_api_client,
+                hardware_sync_machine.machine,
+                status="Ready",
+                abort_status="Releasing failed",
+                machine_id=hardware_sync_machine.name,
+                timeout=40 * 60,
+            )
+            for device_config in hardware_sync_machine.devices_config:
+                check_machine_for_device(
+                    maas_api_client, hardware_sync_machine.machine, device_config
+                )
+
+            wait_for_machine_to_power_off(
+                maas_api_client,
+                hardware_sync_machine.machine,
+                hardware_sync_machine.name,
+            )
+    yield release
+
+    with optional_step("redeploy", depends_on=release) as redeploy:
+        if redeploy.should_run():
+            maas_api_client.deploy_machine(
+                hardware_sync_machine.machine,
+                osystem="ubuntu",
+                distro_series="focal",
+                enable_hw_sync="true",
+            )
+
+            wait_for_machine(
+                maas_api_client,
+                hardware_sync_machine.machine,
+                status="Deployed",
+                abort_status="Failed deployment",
+                machine_id=hardware_sync_machine.name,
+                timeout=40 * 60,
+            )
+
+    yield redeploy
+
+    with optional_step("remove_device", depends_on=add_device) as remove_device:
+        if remove_device.should_run():
+            with powered_off_vm(lxd, hardware_sync_machine.name):
+                for device_config in hardware_sync_machine.devices_config:
+                    lxd.remove_instance_device(
+                        hardware_sync_machine.name, device_config["device_name"]
+                    )
+
+            for device_config in hardware_sync_machine.devices_config:
+                retry_call(
+                    check_machine_does_not_have_device,
+                    fargs=[
+                        maas_api_client,
+                        hardware_sync_machine.machine,
+                        device_config,
+                    ],
+                    tries=10,
+                    delay=5,
+                    backoff=1.2,
+                    logger=testlog,
+                )
+
+    yield remove_device
+
+    for retry_info in retries(60, 5):
+        power_state = maas_api_client.query_power_state(hardware_sync_machine.machine)
+        if power_state == "off":
+            break
+        elif power_state == "on":
+            with suppress(CalledProcessError):
+                lxd.stop(hardware_sync_machine.name, force=True)
+    yield
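The stop-escalation that `powered_off_vm` builds with `chain`/`repeat`/`partial` is compact but worth unpacking: three graceful stop attempts, then force-stop forever after. With a hypothetical stand-in for `lxd.stop` that just reports the command it would run:

```python
from functools import partial
from itertools import chain, repeat


def stop(name: str, force: bool = False) -> str:
    """Hypothetical stand-in for CLILXD.stop, returning the command it would run."""
    return f"lxc stop {name}" + (" --force" if force else "")


# Three graceful stop attempts, then force-stop forever after -- the same
# escalation strategy powered_off_vm builds with chain/repeat/partial.
strategy = chain(repeat(stop, 3), repeat(partial(stop, force=True)))
attempts = [next(strategy)("vm1") for _ in range(5)]
```

Each call to `next(strategy)` hands back the next stop callable, so the `@retry`-wrapped `power_off` naturally escalates across retries.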
diff --git a/systemtests/tests_per_machine/test_machine.py b/systemtests/tests_per_machine/test_machine.py
new file mode 100644
index 0000000..f8b42ca
--- /dev/null
+++ b/systemtests/tests_per_machine/test_machine.py
@@ -0,0 +1,173 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Iterator
+
+import pytest
+from pytest_steps import test_steps
+from retry.api import retry_call
+
+from ..utils import (
+    IPRange,
+    assert_machine_in_machines,
+    assert_machine_not_in_machines,
+    ssh_execute_command,
+    wait_for_machine,
+    wait_for_machine_to_power_off,
+    wait_for_new_machine,
+)
+
+if TYPE_CHECKING:
+    from logging import Logger
+
+    from paramiko import PKey
+
+    from ..api import AuthenticatedAPIClient
+    from ..machine_config import MachineConfig
+
+
+@test_steps("enlist", "metadata", "commission", "deploy", "rescue")
+def test_full_circle(
+    maas_api_client: AuthenticatedAPIClient,
+    machine_config: MachineConfig,
+    ready_remote_maas: None,
+    maas_without_machine: None,
+    ssh_key: PKey,
+    testlog: Logger,
+    zone: str,
+    pool: str,
+    tag_all: str,
+) -> Iterator[None]:
+
+    maas_ip_ranges = maas_api_client.list_ip_ranges()
+    reserved_ranges = []
+    dynamic_range = None
+    for ip_range in maas_ip_ranges:
+        if ip_range["type"] == "reserved":
+            reserved_ranges.append(
+                IPRange.from_strs(ip_range["start_ip"], ip_range["end_ip"])
+            )
+        elif ip_range["type"] == "dynamic":
+            dynamic_range = IPRange.from_strs(ip_range["start_ip"], ip_range["end_ip"])
+    assert dynamic_range is not None, "Dynamic range not found."
+
+    power_cycle_argv = [
+        "maas.power",
+        "cycle",
+    ] + machine_config.maas_power_cmd_power_parameters
+
+    retry_call(
+        maas_api_client.api_client.execute,
+        fargs=[power_cycle_argv],
+        fkwargs={"base_cmd": []},
+        tries=10,
+        delay=5,
+        backoff=1.2,
+        logger=testlog,
+    )
+
+    mac_address = machine_config.mac_address
+
+    machine = wait_for_new_machine(maas_api_client, mac_address, machine_config.name)
+
+    assert machine["ip_addresses"][0] in dynamic_range
+
+    if not machine["power_type"]:
+        machine = maas_api_client.update_machine(
+            machine, **machine_config.machine_update_power_parameters
+        )
+
+    machine = wait_for_machine_to_power_off(
+        maas_api_client, machine, machine_config.name
+    )
+    yield
+
+    assert_machine_in_machines(
+        machine, maas_api_client.list_by_tag(tag_all, "machines")
+    )
+
+    assert_machine_not_in_machines(machine, maas_api_client.list_machines(zone=zone))
+    assert_machine_not_in_machines(machine, maas_api_client.list_machines(pool=pool))
+
+    machine = maas_api_client.update_machine(machine, zone=zone, pool=pool)
+
+    assert_machine_in_machines(machine, maas_api_client.list_machines(zone=zone))
+    assert_machine_in_machines(machine, maas_api_client.list_machines(pool=pool))
+
+    yield
+
+    timeout = 40 * 60 if machine_config.power_type == "lxd" else 20 * 60
+    maas_api_client.commission_machine(machine)
+
+    wait_for_machine(
+        maas_api_client,
+        machine,
+        status="Ready",
+        abort_status="Failed commissioning",
+        machine_id=machine_config.name,
+        timeout=timeout,
+    )
+
+    assert machine["ip_addresses"][0] in dynamic_range
+    yield
+
+    maas_api_client.deploy_machine(
+        machine, osystem=machine_config.osystem, distro_series=machine_config.osystem
+    )
+
+    machine = wait_for_machine(
+        maas_api_client,
+        machine,
+        status="Deployed",
+        abort_status="Failed deployment",
+        machine_id=machine_config.name,
+        timeout=timeout,
+    )
+
+    assert machine["ip_addresses"][0] not in dynamic_range
+    for reserved_range in reserved_ranges:
+        assert machine["ip_addresses"][0] not in reserved_range
+
+    if machine_config.osystem == "ubuntu":
+        stdout = ssh_execute_command(
+            machine, "ubuntu", ssh_key, "cat /etc/cloud/build.info"
+        )
+        testlog.info(f"{machine_config.name}: {stdout}")
+    yield
+
+    if machine_config.osystem == "windows":
+        # We need ssh access in order to test rescue mode.
+        pytest.skip("rescue mode not tested in windows.")
+
+    ssh_execute_command(machine, "ubuntu", ssh_key, "echo test > ./test")
+    ssh_execute_command(machine, "ubuntu", ssh_key, "sudo sync")
+    maas_api_client.rescue_machine(machine)
+
+    machine = maas_api_client.read_machine(machine)  # IP could change after deploy
+
+    # Given that the machine is now in rescue mode, $HOME should be empty
+    stdout = ssh_execute_command(machine, "ubuntu", ssh_key, "ls")
+    assert stdout == ""
+
+    # / should be an OverlayFS mount in the ephemeral environment.
+    stdout = ssh_execute_command(machine, "ubuntu", ssh_key, "mount")
+    assert "overlayroot on / type overlay" in stdout
+
+    maas_api_client.exit_rescue_machine(machine, next_status="Deployed")
+
+    machine = maas_api_client.read_machine(machine)  # IP could change after deploy
+    stdout = ssh_execute_command(machine, "ubuntu", ssh_key, "ls")
+    assert stdout.strip() == "test"
+
+    stdout = ssh_execute_command(machine, "ubuntu", ssh_key, "mount")
+    assert "overlayroot on / " not in stdout
+
+    maas_api_client.release_machine(machine)
+    wait_for_machine(
+        maas_api_client,
+        machine,
+        status="Ready",
+        abort_status="Releasing failed",
+        machine_id=machine_config.name,
+        timeout=timeout,
+    )
+    yield
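`test_full_circle` checks machine addresses against `dynamic_range` and `reserved_ranges` with the `in` operator. The `IPRange` helper it imports from `..utils` is not shown in this hunk; a plausible sketch of the shape it needs (`from_strs` constructor plus `__contains__` on address strings) is:

```python
import ipaddress
from dataclasses import dataclass


@dataclass(frozen=True)
class IPRange:
    """Plausible shape of the IPRange helper imported from ..utils
    (the real definition is not shown in this diff)."""

    start: ipaddress.IPv4Address
    end: ipaddress.IPv4Address

    @classmethod
    def from_strs(cls, start: str, end: str) -> "IPRange":
        return cls(ipaddress.ip_address(start), ipaddress.ip_address(end))

    def __contains__(self, ip: str) -> bool:
        # ip_address objects compare by numeric value, so a simple
        # bounds check implements range membership
        return self.start <= ipaddress.ip_address(ip) <= self.end
```

With that, `machine["ip_addresses"][0] in dynamic_range` reads exactly as the test intends: "the enlisted machine got a DHCP lease from the dynamic pool".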
diff --git a/systemtests/utils.py b/systemtests/utils.py
new file mode 100644
index 0000000..c120dbe
--- /dev/null
+++ b/systemtests/utils.py
@@ -0,0 +1,292 @@
+from __future__ import annotations
+
+import contextlib
+import ipaddress
+import random
+import re
+import string
+import time
+from dataclasses import dataclass
+from typing import Iterator, Optional, TypedDict, Union
+
+import paramiko
+from retry.api import retry_call
+
+from . import api
+
+
+class UnexpectedMachineStatus(Exception):
+    """Raised when we run out of time waiting for machines to be in a given state.
+
+    Will report the desired state, how long we waited, and debug output.
+    """
+
+    def __init__(
+        self,
+        identifier: str,
+        expected_status: str,
+        elapsed_time: float,
+        debug: list[str],
+    ):
+        self._identifier = identifier
+        self._status = expected_status
+        self._elapsed = format(elapsed_time, ".01f")
+        self._debug = "\n".join(debug)
+
+    def __str__(self) -> str:
+        return f"""\
+Machine {self._identifier} didn't get to {self._status} after {self._elapsed} seconds.
+
+Debug information:
+{self._debug}
+"""
+
+
+def randomstring(length: int = 10) -> str:
+    """Return a random string."""
+    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))
+
+
+@dataclass
+class RetryInfo:
+    attempt: int
+    elapsed: float
+    remaining: float
+
+
+def retries(
+    timeout: Union[int, float] = 30, delay: Union[int, float] = 1
+) -> Iterator[RetryInfo]:
+    """Helper for retrying something, sleeping between attempts.
+
+    Yields RetryInfo(attempt, elapsed, remaining), giving times in seconds.
+
+    @param timeout: From now, how long to keep iterating, in seconds.
+    @param delay: The sleep between each iteration, in seconds.
+    """
+    start = time.time()
+    end = start + timeout
+    for attempt, now in enumerate(iter(time.time, None), start=1):
+        if now < end:
+            yield RetryInfo(attempt, now - start, end - now)
+            time.sleep(min(delay, end - now))
+        else:
+            break
+
+
+class Event(TypedDict):
+
+    username: str
+    node: str
+    hostname: str
+    id: int
+    level: str
+    created: str
+    type: str
+    description: str
+
+
+class EventPage(TypedDict):
+
+    count: int
+    events: list[Event]
+    next_uri: str
+    prev_uri: str
+
+
+@contextlib.contextmanager
+def waits_for_event_after(
+    api_client: api.AuthenticatedAPIClient, event_type: str, description_regex: str
+) -> Iterator[None]:
+    """
+    Wait for an event that is emitted after an action initiated by
+    the with-block body.
+    """
+    api_client.logger.debug("Getting latest debug event for watermark")
+    last_events: EventPage = api_client.execute(
+        ["events", "query", "level=DEBUG", "limit=1"]
+    )
+    latest_event_id: Optional[int]
+    if last_events["count"] == 0:
+        latest_event_id = None
+    else:
+        latest_event_id = last_events["events"][0]["id"]
+    api_client.logger.debug(f"Latest debug event for watermark is {latest_event_id}")
+    yield
+    has_started: bool = False
+    while not has_started:
+        events_query = ["events", "query", "level=DEBUG"]
+        if latest_event_id:
+            events_query.append(f"after={latest_event_id}")
+        events: EventPage = api_client.execute(events_query)
+        matching_events = (
+            event for event in events["events"] if event["type"] == event_type
+        )
+        has_started = any(
+            re.match(description_regex, event["description"])
+            for event in matching_events
+        )
+        if not has_started:
+            api_client.logger.debug("Start event not found, sleeping for a second")
+            time.sleep(1)
+    api_client.logger.debug("Start event found!")
+
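The watermark pattern in `waits_for_event_after` can be condensed: remember the newest event id before the action, then accept only events newer than that mark. A simplified, single-check sketch over an in-memory event list (the real helper polls the events API until a match appears):

```python
import contextlib


@contextlib.contextmanager
def waits_for_matching_event(events: list, matches):
    """Condensed watermark pattern: record the newest event id before the
    action, then only accept events newer than that mark afterwards."""
    watermark = events[-1]["id"] if events else None
    yield
    new_events = [e for e in events if watermark is None or e["id"] > watermark]
    assert any(matches(e) for e in new_events), "expected event never arrived"
```

The watermark is what makes the check race-free: an identical event from an earlier import cannot satisfy the wait, because its id predates the mark.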
+
+# XXX: Move to api.py
+def debug_last_events(
+    api_client: api.AuthenticatedAPIClient, machine_id: str
+) -> EventPage:
+    """Log the latest events for the given machine."""
+    events: EventPage = api_client.execute(
+        ["events", "query", f"id={machine_id}", "level=DEBUG", "limit=20"]
+    )
+    api_client.logger.info(f"Latest events for {machine_id} are:")
+    for event in reversed(events["events"]):
+        api_client.logger.info(
+            f"- {event['type']}"
+            + (f": {event['description']}" if event["description"] else "")
+        )
+    return events
+
+
+# XXX: Move to api.py
+def wait_for_machine(
+    api_client: api.AuthenticatedAPIClient,
+    machine: api.Machine,
+    status: str,
+    abort_status: Optional[str] = None,
+    machine_id: Optional[str] = None,
+    timeout: float = 10 * 60,
+    delay: float = 30,
+) -> api.Machine:
+    """Blocks execution until machine reaches given status."""
+    __tracebackhide__ = True
+    if machine_id is None:
+        machine_id = machine["hostname"]
+    quiet_client = api.QuietAuthenticatedAPIClient.from_api_client(api_client)
+    for retry_info in retries(timeout, delay):
+        tmp_machine = quiet_client.read_machine(machine)
+        current_status = tmp_machine["status_name"]
+        if current_status == status:
+            return tmp_machine
+        else:
+            # Deliberately overwrite this variable so the last one is kept
+            events = debug_last_events(quiet_client, machine["system_id"])
+            if current_status == abort_status:
+                break
+    debug_outputs = []
+    debug_outputs.append(f"status: {tmp_machine['status_name']}")
+    debug_outputs.extend(
+        f"- {event['type']}"
+        + (f": {event['description']}" if event["description"] else "")
+        for event in reversed(events["events"])
+    )
+    raise UnexpectedMachineStatus(machine_id, status, retry_info.elapsed, debug_outputs)
+
+
+# XXX: Move to api.py
+def wait_for_new_machine(
+    api_client: api.AuthenticatedAPIClient, mac_address: str, machine_name: str
+) -> api.Machine:
+    """Blocks execution until a machine with the given mac_address appears as New."""
+    __tracebackhide__ = True
+    quiet_client = api.QuietAuthenticatedAPIClient.from_api_client(api_client)
+    for retry_info in retries(30 * 60, 30):
+        machines = quiet_client.list_machines(mac_address=mac_address, status="new")
+        if machines:
+            return machines[0]
+
+    # We've run out of time for the machine to show up
+    debug_outputs = []
+    maybe_machines = quiet_client.list_machines(mac_address=mac_address)
+    debug_outputs.append(repr(maybe_machines))
+    if "lxd" in [m["power_type"] for m in maybe_machines]:
+        debug_outputs.append(repr(api_client.lxd.list_instances()))
+
+    raise UnexpectedMachineStatus(
+        machine_name, "New", retry_info.elapsed, debug_outputs
+    )
+
+
+def wait_for_machine_to_power_off(
+    api_client: api.AuthenticatedAPIClient, machine: api.Machine, machine_name: str
+) -> api.Machine:
+    """Blocks execution until the given machine is powered off."""
+    __tracebackhide__ = True
+    quiet_client = api.QuietAuthenticatedAPIClient.from_api_client(api_client)
+    for retry_info in retries(10 * 60, 10):
+        power_state = quiet_client.query_power_state(machine)
+        if power_state == "off":
+            quiet_client.logger.debug(f"{machine_name} is powered off, continuing")
+            return machine
+        else:
+            quiet_client.logger.debug(
+                f"{machine_name} is {power_state}, waiting for it to power off"
+            )
+
+    debug_outputs = [repr(machine)]
+    if machine["power_type"] == "lxd":
+        debug_outputs.append(repr(api_client.lxd.list_instances()[machine_name]))
+    raise UnexpectedMachineStatus(
+        machine_name, "power_off", retry_info.elapsed, debug_outputs
+    )
+
+
+def ssh_execute_command(
+    machine: api.Machine, username: str, pkey: paramiko.PKey, command: str
+) -> str:
+    """Connect to machine, execute command and return stdout."""
+    __tracebackhide__ = True
+    client = paramiko.SSHClient()
+    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+
+    host = machine["ip_addresses"][0]
+
+    def _connect() -> None:
+        client.connect(hostname=host, username=username, pkey=pkey, look_for_keys=False)
+
+    with client:
+        retry_call(
+            _connect,
+            tries=6 * 60,  # 30 minutes at 5 second delay.
+            delay=5,
+            backoff=1,
+        )
+
+        _, stdout, _ = client.exec_command(command)
+        output = stdout.read().decode("utf8")
+    return output
+
+
+def assert_machine_in_machines(
+    machine: api.Machine, machines: list[api.Machine]
+) -> None:
+    __tracebackhide__ = True
+    system_ids = [m["system_id"] for m in machines]
+    assert machine["system_id"] in system_ids
+
+
+def assert_machine_not_in_machines(
+    machine: api.Machine, machines: list[api.Machine]
+) -> None:
+    __tracebackhide__ = True
+    system_ids = [m["system_id"] for m in machines]
+    assert machine["system_id"] not in system_ids
+
+
+@dataclass
+class IPRange:
+    start: ipaddress.IPv4Address
+    end: ipaddress.IPv4Address
+
+    @classmethod
+    def from_strs(cls, start: str, end: str) -> "IPRange":
+        start_ip = ipaddress.ip_address(start)
+        end_ip = ipaddress.ip_address(end)
+        assert isinstance(start_ip, ipaddress.IPv4Address)
+        assert isinstance(end_ip, ipaddress.IPv4Address)
+        return cls(start_ip, end_ip)
+
+    def __contains__(self, ip: Union[str, ipaddress.IPv4Address]) -> bool:
+        ip = ipaddress.IPv4Address(ip)
+        return self.start <= ip <= self.end
diff --git a/systemtests/vault.py b/systemtests/vault.py
new file mode 100644
index 0000000..21b15a4
--- /dev/null
+++ b/systemtests/vault.py
@@ -0,0 +1,151 @@
+import json
+import subprocess
+import uuid
+from contextlib import suppress
+from dataclasses import dataclass
+from functools import cached_property
+from textwrap import dedent
+from typing import Any, cast
+
+from .lxd import CLILXD
+from .utils import retries
+
+
+@dataclass
+class Vault:
+    """Vault CLI wrapper to be run inside a container."""
+
+    container: str
+    secrets_path: str
+    secrets_mount: str
+    lxd: CLILXD
+    root_token: str = ""
+
+    MAAS_POLICY_NAME = "maas-controller"
+    TOKEN_TTL = "5m"
+
+    @cached_property
+    def addr(self) -> str:
+        """The Vault address."""
+        fqdn = self.lxd.quietly_execute(
+            self.container, ["hostname", "-f"]
+        ).stdout.strip()
+        return f"https://{fqdn}:8200"
+
+    def wait_ready(self) -> dict[str, Any]:
+        """Wait until Vault API is running, and return the status."""
+        __tracebackhide__ = True
+        for _ in retries(timeout=5, delay=0.5):
+            with suppress(subprocess.CalledProcessError):
+                return self.status()
+
+        raise RuntimeError("Vault never became ready")
+
+    def status(self) -> dict[str, Any]:
+        """Return the Vault status."""
+        # don't use `vault status` since it returns 2 when sealed
+        return cast(
+            dict[str, Any],
+            json.loads(
+                self.lxd.quietly_execute(
+                    self.container, ["curl", f"{self.addr}/v1/sys/seal-status"]
+                ).stdout
+            ),
+        )
+
+    def init(self) -> dict[str, Any]:
+        """Initialize Vault, saving the root token."""
+        result = self.execute_with_json("operator", "init")
+        self.root_token = result["root_token"]
+        return cast(dict[str, Any], result)
+
+    def execute(self, *command: str) -> subprocess.CompletedProcess[str]:
+        """Execute a vault CLI command."""
+        __tracebackhide__ = True
+        environment = {"VAULT_ADDR": self.addr}
+        if self.root_token:
+            environment["VAULT_TOKEN"] = self.root_token
+        return self.lxd.quietly_execute(
+            self.container,
+            ["vault"] + list(command),
+            environment=environment,
+        )
+
+    def execute_with_json(self, *command: str) -> Any:
+        """Execute a Vault CLI command and return decoded json output."""
+        __tracebackhide__ = True
+        result = self.execute(*command, "-format=json")
+        return json.loads(result.stdout)
+
+    def ensure_initialized(self) -> None:
+        """Ensure Vault is initialized and unlocked."""
+        status = self.wait_ready()
+
+        vault_init_file = "/var/snap/vault/common/init.json"
+        if status["initialized"]:
+            init_result = json.loads(
+                self.lxd.get_file_contents(self.container, vault_init_file)
+            )
+            self.root_token = init_result["root_token"]
+        else:
+            init_result = self.init()
+            self.lxd.push_text_file(
+                self.container, json.dumps(init_result), vault_init_file
+            )
+
+        while (status := self.status())["sealed"]:
+            index = status["progress"]
+            key = init_result["unseal_keys_hex"][index]
+            self.execute("operator", "unseal", key)
+
+    def ensure_setup(self) -> None:
+        """Ensure Vault is set up for MAAS use."""
+        __tracebackhide__ = True
+        auth_methods = self.execute_with_json("auth", "list")
+        if "approle/" not in auth_methods:
+            self.execute("auth", "enable", "approle")
+
+        secrets_mounts = self.execute_with_json("secrets", "list")
+        if f"{self.secrets_mount}/" in secrets_mounts:
+            self.execute("kv", "enable-versioning", self.secrets_mount)
+        else:
+            self.execute("secrets", "enable", "-path", self.secrets_mount, "kv-v2")
+
+        policy = dedent(
+            f"""\
+            path "{self.secrets_mount}/metadata/{self.secrets_path}/" {{
+              capabilities = ["list"]
+            }}
+
+            path "{self.secrets_mount}/metadata/{self.secrets_path}/*" {{
+              capabilities = ["read", "update", "delete", "list"]
+            }}
+
+            path "{self.secrets_mount}/data/{self.secrets_path}/*" {{
+              capabilities = ["read", "create", "update", "delete"]
+            }}
+            """
+        )
+        tmpfile = f"/root/vault-policy-{uuid.uuid4()}"
+        self.lxd.push_text_file(self.container, policy, tmpfile)
+        self.execute("policy", "write", self.MAAS_POLICY_NAME, tmpfile)
+        self.lxd.quietly_execute(self.container, ["rm", tmpfile])
+
+    def create_approle(self, role_name: str) -> tuple[str, str]:
+        """Create an approle with secret and return its ID and wrapped token."""
+        self.execute(
+            "write",
+            f"auth/approle/role/{role_name}",
+            f"policies={self.MAAS_POLICY_NAME}",
+            f"token_ttl={self.TOKEN_TTL}",
+        )
+        role_id = self.execute_with_json(
+            "read", f"auth/approle/role/{role_name}/role-id"
+        )["data"]["role_id"]
+        wrapped_token = self.execute_with_json(
+            "write",
+            "-f",
+            f"-wrap-ttl={self.TOKEN_TTL}",
+            f"auth/approle/role/{role_name}/secret-id",
+        )["wrap_info"]["token"]
+        return role_id, wrapped_token
diff --git a/tls_certificates/cacerts.pem b/tls_certificates/cacerts.pem
new file mode 100644
index 0000000..e81b9a1
--- /dev/null
+++ b/tls_certificates/cacerts.pem
@@ -0,0 +1,32 @@
+-----BEGIN CERTIFICATE-----
+MIICtTCCAhagAwIBAgIUbXo95pRftNs58WoWu+7OdBDgAiMwCgYIKoZIzj0EAwQw
+TjELMAkGA1UEBhMCSU0xFDASBgNVBAcTC0lzbGUgb2YgTWFuMRIwEAYDVQQKEwlD
+YW5vbmljYWwxFTATBgNVBAMTDEZha2UgUm9vdCBDQTAeFw0yMjA1MTgxNDEyMDBa
+Fw0zMjA1MTUxNDEyMDBaMFYxCzAJBgNVBAYTAklNMRQwEgYDVQQHEwtJc2xlIG9m
+IE1hbjESMBAGA1UEChMJQ2Fub25pY2FsMR0wGwYDVQQDExRGYWtlIEludGVybWVk
+aWF0ZSBDQTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEARcrHk0Zmv7A12ujZbnA
+uXoZauoF0s6y+matRfy7ga+kiHGomv1CdM6fwqQOQfVfoiRh/rtsB+HC1futZeee
+JsysAMrTnhOYzNayvDmGPNdmif3db4uVnzAA6QRMUqJnLzvr9C+bCB/MRnO2pFbg
+Q4ei0CpdbI55R3Hb5tD9+gfhD3Z3o4GGMIGDMA4GA1UdDwEB/wQEAwIBpjAdBgNV
+HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwEgYDVR0TAQH/BAgwBgEB/wIBADAd
+BgNVHQ4EFgQUVY8jJQncQXPNnbMaPZHVeURb0vowHwYDVR0jBBgwFoAUi7tAJjzP
+/Fymeg1IgwUjfn4yxmEwCgYIKoZIzj0EAwQDgYwAMIGIAkIAzRZUcybryueko/yt
+aNNuuK3RUUfxfR8/z5ixGEygIVpLvIKo6OQTqOKD7wJWitEZOsuIOzGApiIQeHKx
+TvqDU10CQgGYl4nWTK9wF0wUyenUEvWPGt4LS3QP89paRMw8LZwEZYM/uIGwrGMF
+lAkc0Rv/ielU//7zkGb71I4s2/wds+5tzQ==
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+MIICZzCCAcmgAwIBAgIUFTKA+v3C7uP1/TPfRagkxZFVwmIwCgYIKoZIzj0EAwQw
+TjELMAkGA1UEBhMCSU0xFDASBgNVBAcTC0lzbGUgb2YgTWFuMRIwEAYDVQQKEwlD
+YW5vbmljYWwxFTATBgNVBAMTDEZha2UgUm9vdCBDQTAeFw0yMjA1MTgxNDExMDBa
+Fw0yNzA1MTcxNDExMDBaME4xCzAJBgNVBAYTAklNMRQwEgYDVQQHEwtJc2xlIG9m
+IE1hbjESMBAGA1UEChMJQ2Fub25pY2FsMRUwEwYDVQQDEwxGYWtlIFJvb3QgQ0Ew
+gZswEAYHKoZIzj0CAQYFK4EEACMDgYYABADKVk6kn2iECNzLHWemZJl8mPW3cTGW
+y08gyRbCpm/EcRq1pTii+/Nj7aza2a2tGMbTgQyTO2mSNl8X/3Slbr0CegBzPQL6
+9xLBu7Q7WRu9ONwkamugKPXnwZutqxUp9ZDheRN8UHsP6qjd6d6p8WWrMUaV1OTL
+qYJtJqd08XKlu4rSX6NCMEAwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMB
+Af8wHQYDVR0OBBYEFIu7QCY8z/xcpnoNSIMFI35+MsZhMAoGCCqGSM49BAMEA4GL
+ADCBhwJCAfKv8dnoqH+4fu1wrkanrYsH/2ZJ19Lbts1klecJ2esOtOZr7saCduy1
+AxWhadXKBAdHnsqDEqNG20sutIskP/CBAkE3sbic7u52Bn72DL2y+60Tkl5I0GkA
+kq0X2AuNrG2SLUju9axtD7JE7x5pGYoZZZirNjKLpKNDxEs98Afhl9A3PQ==
+-----END CERTIFICATE-----
diff --git a/tls_certificates/server-maas-under-test-key.pem b/tls_certificates/server-maas-under-test-key.pem
new file mode 100644
index 0000000..5e02284
--- /dev/null
+++ b/tls_certificates/server-maas-under-test-key.pem
@@ -0,0 +1,27 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEAtTrCVCusMFBIHAMHcDnmj0FgeIEwoYBHjk4I1Cbgmex0Ma3m
+PQKCB/50qcxf+BSpwXK75QXl9AcVmvX3Sn34FS362RL36oMFGCE5KQnO8uydK32l
+p4fM6rOpiK2Ns2jFBZ6mZxF333uPpODVLfqIF5Mk5AEFzfhbDSxzEXSorIc2Venx
+VNkWyWaeHpMUrKVbwsk4NPH52L3L35+OPvyfHO2V3eG5QyiWi9y8OYwNBWNmjLBZ
+4VxRxxqLOvje49Bj9HF/6LCp51JJoYu7Ug50GYwgfZiDS98dSioAKsviPs4it6ir
+rNaXqgv/maQTAnmMYJp308mHMekBQz0bWorY2QIDAQABAoIBAGBmsiot1PkaK1Fj
+Nxi2Y/M99oADUIgIAYgr8DxRtdWK1r/6XeeEJwDzlMEhqsb+ztHNIy+PNKPbBN4a
+CoIAge9aNv4zPdbr/NC6E3rF8eR8gpo4yt5TuWf7S6odj6uohm0X2DIpM5eYVW+B
+/UPo6W2I4u25sYm/m0dlpovZf0PN/yybeP/5JFgGakv8piJjVZ7oEAIzV9D+8b9P
+C1N77D8IgxlwuJ2Ob9Abgj4TdtYJX+NEJ9repW7bhajRml/Rs3FhTF5M+z3Yc+9X
+fKX0RaJZKRjtdbktGmks55Gg6PrAdCTfwV7krQUdGDsBExQ/iW0ISzsEZt5++H6G
+AcBHsaUCgYEA5iKvaWfQ2dGdBNfx9wWkPR0obc/HeTF3xEPhkVYqMGp8flHqq3ss
+60kddPTmh7Jff/D2OxfS+T7E5KAlcMyAJl0w0OwrpMaueLCfOPSBhzup1vxhfSto
+1rm0XU+HKFPhRavOzskKs9BVKgb3xTL6zTCe6swOZg6I7iHoRY0VL98CgYEAyZj7
+aNhb/OUk7UZQYYy0O9Xemd4jse7M7kdpV7jWi/YEmVAtDqGISxghetJ8yY5Rw5NK
+AcXpmf7m+U/CXMlrwSuQGPKvJTob9sashLvvuiBUN6yYj+wGEh2GXc1ZXx5PwEl0
+uiDmpJLP9gXeFacdxYkLyQvCBLhD0TRVIlz/rkcCgYAqn33xfcLWtNXqEbzEzYyv
+rPjR7cu6DIlsFk5uxpClyvMnyjA2dmfJZA9KnBkeRNEfNxfDthPjCdcZqPeGPrn1
+YQkriLJEoG+r9rpmqBJdY5V/NdswfZu7OUXIinQz6eUtLDbvYZjT2OANGqFFKr38
+xuaIAicgi8ycnjcQuqKT7QKBgQClZe826PQnu7SdO2gtcKxavzBf20I79OmLwWkr
+QIo90H2bb41YCK1ytvyY8WLSVwK8S/aXF9J9twW3nHmheNwAY4ZZAZszFsbko8Hd
+MPgRI/8UonWU9xdP+4tHIHhnss3JvDqZju7MLWuTtOKtryuc6sCRlST8jFWPqbkD
+dXuMdwKBgQCMxrGTNSdA0jjYiosIsV3HJOahJyqTT41cm8JYSxCfO61k7JPruDTB
+w3bMzKTb03lGVZ5in4nWENkErihmYLVOzMl1aJRYQnO5hEX5iWCS31/y9GLdqWJN
+TVpXNo7bf5wnGxvHU6EkKtc6FiKygISMhv0U+Oi963TmWcVIAi2Z7g==
+-----END RSA PRIVATE KEY-----
diff --git a/tls_certificates/server-maas-under-test.pem b/tls_certificates/server-maas-under-test.pem
new file mode 100644
index 0000000..f611a86
--- /dev/null
+++ b/tls_certificates/server-maas-under-test.pem
@@ -0,0 +1,19 @@
+-----BEGIN CERTIFICATE-----
+MIIDJjCCAoegAwIBAgIUSNdrwTNCk/lKDRrqZ798m2YIoukwCgYIKoZIzj0EAwQw
+VjELMAkGA1UEBhMCSU0xFDASBgNVBAcTC0lzbGUgb2YgTWFuMRIwEAYDVQQKEwlD
+YW5vbmljYWwxHTAbBgNVBAMTFEZha2UgSW50ZXJtZWRpYXRlIENBMCAXDTIyMDUx
+ODE0MTMwMFoYDzIxMjIwNDI0MTQxMzAwWjApMQ0wCwYDVQQKEwRNQUFTMRgwFgYD
+VQQDEw9NQUFTIFVuZGVyIFRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
+AoIBAQC1OsJUK6wwUEgcAwdwOeaPQWB4gTChgEeOTgjUJuCZ7HQxreY9AoIH/nSp
+zF/4FKnBcrvlBeX0BxWa9fdKffgVLfrZEvfqgwUYITkpCc7y7J0rfaWnh8zqs6mI
+rY2zaMUFnqZnEXffe4+k4NUt+ogXkyTkAQXN+FsNLHMRdKishzZV6fFU2RbJZp4e
+kxSspVvCyTg08fnYvcvfn44+/J8c7ZXd4blDKJaL3Lw5jA0FY2aMsFnhXFHHGos6
++N7j0GP0cX/osKnnUkmhi7tSDnQZjCB9mINL3x1KKgAqy+I+ziK3qKus1peqC/+Z
+pBMCeYxgmnfTyYcx6QFDPRtaitjZAgMBAAGjgZIwgY8wDgYDVR0PAQH/BAQDAgWg
+MBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFDM/
+FceZNoW/CgV89JNE0hczBl16MB8GA1UdIwQYMBaAFFWPIyUJ3EFzzZ2zGj2R1XlE
+W9L6MBoGA1UdEQQTMBGCD21hYXMudW5kZXIudGVzdDAKBggqhkjOPQQDBAOBjAAw
+gYgCQgFQSVsKwqDBRVo3l9seL12QG9r/9xn+TnxC2QIYYOkUubUy0csKdOYYcuff
+OFmChi7ZEDTMFU2CsfTGIPzW5FVgJAJCAYbZ+Moqpukl/vqaig2blXzotobeg0p8
+bT5yi0KuQnbcsSkwPmvl/41V5UwExkcyiEIBMYMkBX+fY39FYlfR4CuL
+-----END CERTIFICATE-----
diff --git a/tox.ini b/tox.ini
new file mode 100644
index 0000000..aa171e3
--- /dev/null
+++ b/tox.ini
@@ -0,0 +1,103 @@
+[tox]
+minversion = 3.15
+envlist = cog,lint,mypy,env_builder
+
+[testenv]
+commands=
+  pytest --pyargs systemtests -k {envname} --log-file systemtests.{envname}.log --junit-xml=junit.{envname}.xml {posargs}
+setenv =
+  env_builder: PYTEST_ADDOPTS=--maxfail=1
+#[[[cog
+#import cog
+#import sys
+## Used further down the file
+#if sys.version_info < (3, 9):
+#    print("Detected Python < 3.9; forcing basepython to 3.9")
+#    cog.outl("basepython = python3.9")
+#else:
+#    print("Detected Python >= 3.9, no basepython needed")
+#]]]
+#[[[end]]]
+
+[base]
+passenv =
+  MAAS_SYSTEMTESTS_BUILD_CONTAINER
+  MAAS_SYSTEMTESTS_MAAS_CONTAINER
+  MAAS_SYSTEMTESTS_CLIENT_CONTAINER
+  MAAS_SYSTEMTESTS_LXD_PROFILE
+
+[testenv:{env_builder,collect_sos_report,general_tests}]
+passenv = {[base]passenv}
+
+[testenv:cog]
+description=Generate tox.ini config for machines in config.yaml, update README
+deps=
+  cogapp
+  PyYAML
+commands=
+  cog --verbosity=0 -r tox.ini README.md
+
+#[[[cog
+#from pathlib import Path
+#import yaml
+#config_yaml = Path("config.yaml")
+#if config_yaml.exists():
+#    config = yaml.safe_load(config_yaml.open())
+#else:
+#    config = {}
+#machines = list(config.get("machines", {}).get("hardware", {}).keys())
+#machines += list(config.get("machines", {}).get("vms", {}).get("instances", {}).keys())
+#if machines:
+#    print("Generating config for {}.".format(', '.join(machines)))
+#    cog.outl(f"""
+#    [testenv:{{{','.join(machines)}}}]
+#    description=Per-machine tests for {{envname}}
+#    passenv = {{[base]passenv}}
+#    """)
+#]]]
+#[[[end]]]
+
+[testenv:format]
+description=Reformat Python code and README.md
+deps= -rrequirements.txt
+commands=
+  isort --profile black systemtests utils
+  black systemtests utils
+  cog -r README.md
+
+[testenv:lint]
+description=Lint Python code, YAML and README.md
+deps= -rrequirements.txt
+whitelist_externals=sh
+commands=
+  isort --profile black --check-only systemtests utils
+  black --check systemtests utils
+  cog --verbosity=0 --check README.md
+  flake8 systemtests utils
+  sh -c 'git ls-files \*.yaml\* | xargs yamllint'
+
+[testenv:mypy]
+description=Check Python code for type violations
+# Tox in Focal doesn't pass these by default :(
+passenv =
+  http_proxy
+  https_proxy
+setenv =
+  MYPYPATH=stubs
+deps=
+  mypy
+commands=
+  mypy -p systemtests -p utils
+
+[testenv:generate_config]
+description=Generate config.yaml
+deps=
+  ruamel.yaml
+commands=python utils/gen_config.py {posargs}
+
+[testenv:filter_envs]
+description=Filter the desired environments against the valid tox environments.
+commands=python utils/filter_envs.py {posargs}
+
+[flake8]
+max-line-length = 88
diff --git a/utils/__init__.py b/utils/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/utils/__init__.py
diff --git a/utils/filter_envs.py b/utils/filter_envs.py
new file mode 100644
index 0000000..df3976a
--- /dev/null
+++ b/utils/filter_envs.py
@@ -0,0 +1,27 @@
+import argparse
+import subprocess
+import sys
+
+
+def main(argv: list[str]) -> int:
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "desired_envs",
+        type=str,
+        help="list of desired envs separated by ',' (e.g. vm1,vm2,opelt)",
+    )
+
+    args = parser.parse_args(argv)
+
+    valid_envs_raw = subprocess.check_output(["tox", "-a"], encoding="utf8")
+    valid_envs = set(valid_envs_raw.splitlines())
+
+    desired_envs = set(args.desired_envs.split(","))
+
+    print(",".join(sorted(desired_envs & valid_envs)))
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main(sys.argv[1:]))
diff --git a/utils/gen_config.py b/utils/gen_config.py
new file mode 100755
index 0000000..98ed2ad
--- /dev/null
+++ b/utils/gen_config.py
@@ -0,0 +1,217 @@
+#!/usr/bin/env python3
+
+import argparse
+import pathlib
+import sys
+from typing import Any
+
+from ruamel.yaml import YAML
+
+yaml = YAML()
+
+
+def main(argv: list[str]) -> int:
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "base_config",
+        type=pathlib.Path,
+        help="Path to full config.yaml file for this environment",
+    )
+    parser.add_argument(
+        "output_config",
+        type=pathlib.Path,
+        help="Path for filtered config.yaml for this test",
+    )
+    installation_method = parser.add_mutually_exclusive_group(required=True)
+    installation_method.add_argument(
+        "--deb", action="store_true", help="Build and install MAAS via deb package"
+    )
+    installation_method.add_argument(
+        "--snap", action="store_true", help="Install MAAS via snap"
+    )
+
+    snap_group = parser.add_argument_group("snap", "Install MAAS via snap")
+    snap_group.add_argument(
+        "--snap-channel",
+        type=str,
+        metavar="CHANNEL",
+        help="Install MAAS via snap using this channel",
+    )
+    snap_group.add_argument(
+        "--test-db-channel",
+        type=str,
+        metavar="CHANNEL",
+        help="Install maas-test-db via snap using this channel",
+    )
+    deb_group = parser.add_argument_group(
+        "deb", "Build and install MAAS via deb package"
+    )
+    deb_group.add_argument(
+        "--ppa",
+        action="append",
+        help="Add this PPA to the build and MAAS containers; can be repeated",
+    )
+    deb_group.add_argument(
+        "--git-repo",
+        type=str,
+        help="Which git repository to use to get MAAS from",
+    )
+    deb_group.add_argument(
+        "--git-branch", type=str, help="Which git branch to use to get MAAS"
+    )
+    enabled_features_group = parser.add_argument_group(
+        "features", "Set up MAAS with these features enabled"
+    )
+    enabled_features_group.add_argument(
+        "--tls",
+        action="store_true",
+        help="Enable TLS for this test",
+    )
+    enabled_features_group.add_argument(
+        "--o11y",
+        action="store_true",
+        help="Enable observability for this test",
+    )
+    enabled_features_group.add_argument(
+        "--hw-sync",
+        action="store_true",
+        help="Add hw_sync tests for this test",
+    )
+    filter_machines_group = parser.add_argument_group(
+        "filters", "Use these options to filter which machines to test"
+    )
+    filter_machines_group.add_argument(
+        "--architecture",
+        metavar="ARCH",
+        action="append",
+        help="Only run tests for machines with this architecture; can be repeated",
+    )
+    filter_machines_group.add_argument(
+        "--machine",
+        action="append",
+        metavar="NAME",
+        help="Only run tests for this machine; can be repeated",
+    )
+    filter_machines_group.add_argument(
+        "--vm-machine",
+        action="append",
+        metavar="NAME",
+        help="Only run tests for this VM; can be repeated",
+    )
+    parser.add_argument(
+        "--containers-image",
+        type=str,
+        metavar="IMAGE",
+        help="Use this image for containers tests",
+    )
+
+    args = parser.parse_args(argv)
+
+    with open(args.base_config, "r") as fh:
+        config: dict[str, Any] = yaml.load(fh)
+
+    if args.containers_image:
+        config["containers-image"] = args.containers_image
+    else:
+        if "containers-image" not in config:
+            parser.error(
+                "containers-image is required but not present in base_config "
+                "or given via the --containers-image option"
+            )
+
+    if args.tls:
+        if "tls" not in config:
+            parser.error("TLS section required but not present in base_config")
+    else:
+        config.pop("tls", None)
+
+    if args.o11y:
+        if "o11y" not in config:
+            parser.error(
+                "Observability section required but not present in base_config"
+            )
+    else:
+        config.pop("o11y", None)
+
+    if args.snap:
+        config.pop("deb", None)  # deb could be present in base_config
+        if not all(v is None for v in (args.ppa, args.git_repo, args.git_branch)):
+            parser.error("snap mode is selected, so deb options are not available")
+
+        if args.snap_channel is None:
+            parser.error("When snap mode is selected, --snap-channel is mandatory")
+
+        config["snap"] = {"maas_channel": args.snap_channel}
+        if args.test_db_channel:
+            config["snap"]["test_db_channel"] = args.test_db_channel
+
+    if args.deb:
+        config.pop("snap", None)  # snap could be present in base_config
+        config["deb"] = config.get("deb", {})
+
+        if not all(v is None for v in (args.snap_channel, args.test_db_channel)):
+            parser.error("deb mode is selected, so snap options are not available")
+
+        if args.ppa:
+            config["deb"]["ppa"] = args.ppa  # args.ppa is a list (action="append")
+        if args.git_repo:
+            config["deb"]["git_repo"] = args.git_repo
+        if args.git_branch:
+            config["deb"]["git_branch"] = args.git_branch
+
+    machines = config.get("machines", {})
+    vms = machines.get("vms", {})
+    hardware = machines.get("hardware", {})
+    if vms:
+        vms_with_devices = [
+            vm_name
+            for vm_name, vm_config in vms["instances"].items()
+            if vm_config.get("devices")
+        ]
+    else:
+        vms_with_devices = []
+
+    if args.hw_sync:
+        if not vms_with_devices:
+            parser.error("There are no VMs valid for hw_sync in base_config")
+    else:
+        # Drop out VMs with devices.
+        for vm_name in vms_with_devices:
+            vms["instances"].pop(vm_name)
+
+    if args.architecture:
+        if hardware:
+            # Drop machines whose architecture is not among the specified ones.
+            machines["hardware"] = {
+                name: details
+                for name, details in hardware.items()
+                if details.get("architecture", "amd64") in args.architecture
+            }
+
+    if args.machine:
+        if hardware:
+            # Keep only the machines that were explicitly specified.
+            machines["hardware"] = {
+                name: details
+                for name, details in hardware.items()
+                if name in args.machine
+            }
+
+    if args.vm_machine:
+        # Keep only the VMs listed in the specified vm_machines.
+        if vms:
+            vms["instances"] = {
+                vm_name: vm_config
+                for vm_name, vm_config in vms["instances"].items()
+                if vm_name in args.vm_machine
+            }
+
+    with open(args.output_config, "w") as fh:
+        yaml.dump(config, fh)
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main(sys.argv[1:]))
