
bind-charmers team mailing list archive

[Merge] ~bind-charmers/charm-k8s-bind/+git/charmcraft-review:review into ~bind-charmers/charm-k8s-bind/+git/charmcraft-review:master

 

Tom Haddon has proposed merging ~bind-charmers/charm-k8s-bind/+git/charmcraft-review:review into ~bind-charmers/charm-k8s-bind/+git/charmcraft-review:master.

Commit message:
Bind charm updates for charmcraft review

Requested reviews:
  Operator Framework Hackers (Internal) (charmcrafters)
  Bind Charmers (bind-charmers)

For more details, see:
https://code.launchpad.net/~bind-charmers/charm-k8s-bind/+git/charmcraft-review/+merge/389995

Bind charm updates for charmcraft review.

This is a temporary branch/MP created to make charm review easier. Created by running `charmcraft init` and then merging the existing main branch into that.

We'll take any changes from here and merge them back into the main branch.
-- 
Your team Bind Charmers is requested to review the proposed merge of ~bind-charmers/charm-k8s-bind/+git/charmcraft-review:review into ~bind-charmers/charm-k8s-bind/+git/charmcraft-review:master.
diff --git a/.jujuignore b/.jujuignore
index 6ccd559..d968a07 100644
--- a/.jujuignore
+++ b/.jujuignore
@@ -1,3 +1,7 @@
-/venv
-*.py[cod]
-*.charm
+*~
+.coverage
+__pycache__
+/dockerfile
+/image-scripts/
+/tests/
+/Makefile
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..66fc8ed
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,46 @@
+DIST_RELEASE ?= focal
+DOCKER_DEPS = bind9 bind9-dnsutils git
+
+blacken:
+	@echo "Normalising python layout with black."
+	@tox -e black
+
+
+lint: blacken
+	@echo "Running flake8"
+	@tox -e lint
+
+# We actually use the build directory created by charmcraft,
+# but the .charm file makes a much more convenient sentinel.
+unittest: bind.charm
+	@tox -e unit
+
+test: lint unittest
+
+clean:
+	@echo "Cleaning files"
+	@git clean -fXd
+
+bind.charm: src/*.py requirements.txt
+	charmcraft build
+
+image-deps:
+	@echo "Checking shellcheck is present."
+	@command -v shellcheck >/dev/null || { echo "Please install shellcheck to continue ('sudo snap install shellcheck')" && false; }
+
+image-lint: image-deps
+	@echo "Running shellcheck."
+	@shellcheck image-scripts/docker-entrypoint.sh
+	@shellcheck image-scripts/dns-check.sh
+
+image-build: image-lint
+	@echo "Building the image."
+	@docker build \
+		--no-cache=true \
+		--build-arg BUILD_DATE=$$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
+		--build-arg PKGS_TO_INSTALL='$(DOCKER_DEPS)' \
+		--build-arg DIST_RELEASE=$(DIST_RELEASE) \
+		-t bind:$(DIST_RELEASE)-latest \
+		.
+
+.PHONY: blacken lint unittest test clean image-deps image-lint image-build
diff --git a/README.md b/README.md
index 25fdae7..a5ba4ae 100644
--- a/README.md
+++ b/README.md
@@ -1,28 +1,56 @@
-# charmcraft-review
+# Bind charm
 
-## Description
+A Juju charm deploying Bind, configurable to use a git repository for its configuration files.
 
-TODO: fill out the description
+## Overview
 
-## Usage
+This is a k8s workload charm and can only be deployed to a Juju k8s cloud,
+attached to a controller using `juju add-k8s`.
 
-TODO: explain how to use the charm
+This charm is not currently ready for production due to issues with providing
+an egress to route TCP and UDP traffic to the pods. See:
 
-### Scale Out Usage
+https://bugs.launchpad.net/charm-k8s-bind/+bug/1889746
 
-...
+https://bugs.launchpad.net/juju/+bug/1889703
 
-## Developing
+## Details
 
-Create and activate a virtualenv,
-and install the development requirements,
+See config option descriptions in config.yaml.
 
-    virtualenv -p python3 venv
-    source venv/bin/activate
-    pip install -r requirements-dev.txt
+## Getting Started
 
-## Testing
+Notes for deploying a test setup locally using microk8s:
 
-Just run `run_tests`:
+    sudo snap install juju --classic
+    sudo snap install juju-wait --classic
+    sudo snap install microk8s --classic
+    sudo snap alias microk8s.kubectl kubectl
+    sudo snap install charmcraft
+    git clone https://git.launchpad.net/charm-k8s-bind
+    cd charm-k8s-bind && make bind.charm
 
-    ./run_tests
+    microk8s.reset  # Warning! Clean slate!
+    microk8s.enable dns dashboard registry storage
+    microk8s.status --wait-ready
+    microk8s.config | juju add-k8s myk8s --client
+
+    # Build your Bind image and push it to the microk8s registry
+    make image-build
+    docker tag bind:focal-latest localhost:32000/bind:latest
+    docker push localhost:32000/bind:latest
+    juju bootstrap myk8s
+    juju add-model bind-test
+    juju deploy ./bind.charm --config bind_image_path=localhost:32000/bind:latest bind
+    juju wait
+    juju status
+
+Assuming you're using the image as built locally from this repo, the charm will
+deploy bind with its stock Ubuntu package configuration, acting as a recursive
+resolver that answers queries starting from the root name servers.
+
+DNSSEC is also enabled by default.
+
+Custom config can be deployed by setting the `custom_config_repo` option to
+point to a Git repository containing a valid set of configuration files with
+which to populate the /etc/bind/ directory within the pod(s).
diff --git a/config.yaml b/config.yaml
index 0073c17..408ae9d 100644
--- a/config.yaml
+++ b/config.yaml
@@ -1,10 +1,46 @@
-# Copyright 2020 Tom Haddon
-# See LICENSE file for licensing details.
-#
-# This is only an example, and you should edit to suit your needs.
-# If you don't need config, you can remove the file entirely.
 options:
-  thing:
-    default: 🎁
-    description: A thing used by the charm.
+  bind_image_path:
     type: string
+    description: |
+        The location of the image to use, e.g. "registry.example.com/bind:v1".
+
+        This setting is required.
+    default: ""
+  bind_image_username:
+    type: string
+    description: "Username to use for the configured image registry, if required"
+    default: ""
+  bind_image_password:
+    type: string
+    description: "Password to use for the configured image registry, if required"
+    default: ""
+  container_config:
+    type: string
+    description: >
+      YAML formatted map of container config keys & values. These are
+      generally accessed from inside the image as environment variables.
+      Use to configure customized Bind images. This configuration
+      gets logged; use container_secrets for secrets.
+    default: ""
+  container_secrets:
+    type: string
+    description: >
+      YAML formatted map of secrets. Works just like container_config,
+      except that values should not be logged.
+    default: ""
+  custom_config_repo:
+    type: string
+    description: |
+      Repository from which to populate /etc/bind/.
+      If unset, bind will be deployed with the package defaults.
+      e.g. http://github.com/foo/my-custom-bind-config
+    default: ""
+  https_proxy:
+    type: string
+    description: |
+      Proxy address to set in the environment, e.g. http://192.168.1.1:8080
+      Used to clone the configuration files from custom_config_repo, if set.
+      If a username/password is required, they can be embedded in the proxy
+      address, e.g. http://username:password@192.168.1.1:8080
+      Traffic is expected to be HTTPS, but this will also work for HTTP.
+    default: ""
diff --git a/dockerfile b/dockerfile
new file mode 100644
index 0000000..2e580d4
--- /dev/null
+++ b/dockerfile
@@ -0,0 +1,33 @@
+ARG DIST_RELEASE
+
+FROM ubuntu:${DIST_RELEASE}
+
+LABEL maintainer="bind-charmers@xxxxxxxxxxxxxxxxxxx"
+
+ARG BUILD_DATE
+ARG PKGS_TO_INSTALL
+
+LABEL org.label-schema.build-date=${BUILD_DATE}
+
+ENV BIND_CONFDIR=/etc/bind
+
+# Avoid interactive prompts
+RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
+
+# Update all packages, remove cruft, install required packages
+RUN apt-get update && apt-get -y dist-upgrade \
+    && apt-get --purge autoremove -y \
+    && apt-get install -y ${PKGS_TO_INSTALL}
+
+# entrypoint script will configure Bind based on env variables
+# dns-check script will provide a readinessProbe
+COPY ./image-scripts/docker-entrypoint.sh /usr/local/bin/
+COPY ./image-scripts/dns-check.sh /usr/local/bin/
+RUN chmod 0755 /usr/local/bin/docker-entrypoint.sh
+RUN chmod 0755 /usr/local/bin/dns-check.sh
+
+EXPOSE 53/udp
+EXPOSE 53/tcp
+
+ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
+CMD /usr/sbin/named -g -u bind -c /etc/bind/named.conf
diff --git a/image-scripts/dns-check.sh b/image-scripts/dns-check.sh
new file mode 100644
index 0000000..ca4a5a0
--- /dev/null
+++ b/image-scripts/dns-check.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+set -eu
+
+TEST_DOMAIN="ddebs.ubuntu.com."
+NSLOOKUP_PATH="/usr/bin/nslookup"
+OUR_ADDRESS="127.0.0.1"
+
+command -v "${NSLOOKUP_PATH}" >/dev/null || { echo "Cannot find the 'nslookup' command" && exit 1; }
+
+exec "${NSLOOKUP_PATH}" "${TEST_DOMAIN}" "${OUR_ADDRESS}" >/dev/null
diff --git a/image-scripts/docker-entrypoint.sh b/image-scripts/docker-entrypoint.sh
new file mode 100644
index 0000000..2ed0857
--- /dev/null
+++ b/image-scripts/docker-entrypoint.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+set -eu
+
+if [ -z "${BIND_CONFDIR-}" ]; then
+	# If BIND_CONFDIR wasn't set, use the package default
+	BIND_CONFDIR="/etc/bind";
+fi
+
+if [ -z "${CUSTOM_CONFIG_REPO-}" ]; then
+	echo "No custom repo set, will fall back to package default config";
+else
+	echo "Pulling config from $CUSTOM_CONFIG_REPO";
+	if [ -d "${BIND_CONFDIR}" ]; then
+		mv "${BIND_CONFDIR}" "${BIND_CONFDIR}_$(date +"%Y-%m-%d_%H-%M-%S")";
+	fi
+	git clone "$CUSTOM_CONFIG_REPO" "$BIND_CONFDIR";
+fi
+
+if [ -d "${BIND_CONFDIR}" ]; then
+	exec "$@"
+else
+	echo "Something went wrong, ${BIND_CONFDIR} does not exist, not starting" >&2; exit 1;
+fi
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 0000000..d2f23b9
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,3 @@
+[tool.black]
+skip-string-normalization = true
+line-length = 120
diff --git a/src/charm.py b/src/charm.py
index 640fea5..913f69f 100755
--- a/src/charm.py
+++ b/src/charm.py
@@ -1,38 +1,147 @@
 #!/usr/bin/env python3
-# Copyright 2020 Tom Haddon
-# See LICENSE file for licensing details.
 
-import logging
+# Copyright 2020 Canonical Ltd.
+# Licensed under the GPLv3, see LICENCE file for details.
 
+import io
+import logging
 from ops.charm import CharmBase
 from ops.main import main
-from ops.framework import StoredState
+from ops.model import ActiveStatus, MaintenanceStatus
+from pprint import pprint
+from yaml import safe_load
 
-logger = logging.getLogger(__name__)
+logger = logging.getLogger()
 
+REQUIRED_SETTINGS = ['bind_image_path']
 
-class CharmcraftReviewCharm(CharmBase):
-    _stored = StoredState()
 
+class BindK8sCharm(CharmBase):
     def __init__(self, *args):
+        """Initialise our class; we only care about the 'start' and 'config-changed' hooks."""
         super().__init__(*args)
-        self.framework.observe(self.on.config_changed, self._on_config_changed)
-        self.framework.observe(self.on.fortune_action, self._on_fortune_action)
-        self._stored.set_default(things=[])
-
-    def _on_config_changed(self, _):
-        current = self.model.config["thing"]
-        if current not in self._stored.things:
-            logger.debug("found a new thing: %r", current)
-            self._stored.things.append(current)
-
-    def _on_fortune_action(self, event):
-        fail = event.params["fail"]
-        if fail:
-            event.fail(fail)
+        self.framework.observe(self.on.start, self.on_config_changed)
+        self.framework.observe(self.on.config_changed, self.on_config_changed)
+
+    def _check_for_config_problems(self):
+        """Check for some simple configuration problems and return a
+        string describing them, otherwise return an empty string."""
+        problems = []
+
+        missing = self._missing_charm_settings()
+        if missing:
+            problems.append('required setting(s) empty: {}'.format(', '.join(sorted(missing))))
+
+        return '; '.join(filter(None, problems))
+
+    def _missing_charm_settings(self):
+        """Check configuration setting dependencies and return a list of
+        missing settings; otherwise return an empty list."""
+        config = self.model.config
+        missing = []
+
+        missing.extend([setting for setting in REQUIRED_SETTINGS if not config[setting]])
+
+        if config['bind_image_username'] and not config['bind_image_password']:
+            missing.append('bind_image_password')
+
+        return sorted(set(missing))
+
+    def on_config_changed(self, event):
+        """Check that we're the leader, and if so, set up the pod."""
+        if self.model.unit.is_leader():
+            # Only the leader can set_spec().
+            resources = self.make_pod_resources()
+            spec = self.make_pod_spec()
+            spec.update(resources)
+
+            msg = "Configuring pod"
+            logger.info(msg)
+            self.model.unit.status = MaintenanceStatus(msg)
+            self.model.pod.set_spec(spec)
+
+            msg = "Pod configured"
+            logger.info(msg)
+            self.model.unit.status = ActiveStatus(msg)
+        else:
+            logger.info("Spec changes ignored by non-leader")
+            self.model.unit.status = ActiveStatus()
+
+    def make_pod_resources(self):
+        """Compile and return our pod resources (e.g. ingresses)."""
+        # LP#1889746: We need to define a manual ingress here to work around LP#1889703.
+        resources = {}  # TODO
+        out = io.StringIO()
+        pprint(resources, out)
+        logger.info("These are the Kubernetes Pod resources <<EOM\n{}\nEOM".format(out.getvalue()))
+        return resources
+
+    def generate_pod_config(self, secured=True):
+        """Kubernetes pod config generator.
+
+        generate_pod_config generates Kubernetes deployment config.
+        If the secured keyword is set then it will return a sanitised copy
+        without exposing secrets.
+        """
+        config = self.model.config
+        pod_config = {}
+        if config["container_config"].strip():
+            pod_config = safe_load(config["container_config"])
+
+        if config["custom_config_repo"].strip():
+            pod_config["CUSTOM_CONFIG_REPO"] = config["custom_config_repo"]
+
+        if config["https_proxy"].strip():
+            pod_config["http_proxy"] = config["https_proxy"]
+            pod_config["https_proxy"] = config["https_proxy"]
+
+        if secured:
+            return pod_config
+
+        if config["container_secrets"].strip():
+            container_secrets = safe_load(config["container_secrets"])
         else:
-            event.set_results({"fortune": "A bug in the code is worth two in the documentation."})
+            container_secrets = {}
+
+        pod_config.update(container_secrets)
+        return pod_config
+
+    def make_pod_spec(self):
+        """Set up and return our full pod spec here."""
+        config = self.model.config
+        full_pod_config = self.generate_pod_config(secured=False)
+        secure_pod_config = self.generate_pod_config(secured=True)
+
+        ports = [
+            {"name": "domain-tcp", "containerPort": 53, "protocol": "TCP"},
+            {"name": "domain-udp", "containerPort": 53, "protocol": "UDP"},
+        ]
+
+        spec = {
+            "version": 2,
+            "containers": [
+                {
+                    "name": self.app.name,
+                    "imageDetails": {"imagePath": config["bind_image_path"]},
+                    "ports": ports,
+                    "config": secure_pod_config,
+                    "kubernetes": {"readinessProbe": {"exec": {"command": ["/usr/local/bin/dns-check.sh"]}}},
+                }
+            ],
+        }
+
+        out = io.StringIO()
+        pprint(spec, out)
+        logger.info("This is the Kubernetes Pod spec config (sans secrets) <<EOM\n{}\nEOM".format(out.getvalue()))
+
+        if config.get("bind_image_username") and config.get("bind_image_password"):
+            spec["containers"][0]["imageDetails"]["username"] = config["bind_image_username"]
+            spec["containers"][0]["imageDetails"]["password"] = config["bind_image_password"]
+
+        secure_pod_config.update(full_pod_config)
+
+        return spec
 
 
 if __name__ == "__main__":
-    main(CharmcraftReviewCharm)
+    main(BindK8sCharm)
diff --git a/tests/__init__.py b/tests/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tests/__init__.py
+++ /dev/null
diff --git a/tests/test_charm.py b/tests/test_charm.py
deleted file mode 100644
index 629f0ea..0000000
--- a/tests/test_charm.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright 2020 Tom Haddon
-# See LICENSE file for licensing details.
-
-import unittest
-from unittest.mock import Mock
-
-from ops.testing import Harness
-from charm import CharmcraftReviewCharm
-
-
-class TestCharm(unittest.TestCase):
-    def test_config_changed(self):
-        harness = Harness(CharmcraftReviewCharm)
-        # from 0.8 you should also do:
-        # self.addCleanup(harness.cleanup)
-        harness.begin()
-        self.assertEqual(list(harness.charm._stored.things), [])
-        harness.update_config({"thing": "foo"})
-        self.assertEqual(list(harness.charm._stored.things), ["foo"])
-
-    def test_action(self):
-        harness = Harness(CharmcraftReviewCharm)
-        harness.begin()
-        # the harness doesn't (yet!) help much with actions themselves
-        action_event = Mock(params={"fail": ""})
-        harness.charm._on_fortune_action(action_event)
-
-        self.assertTrue(action_event.set_results.called)
-
-    def test_action_fail(self):
-        harness = Harness(CharmcraftReviewCharm)
-        harness.begin()
-        action_event = Mock(params={"fail": "fail this"})
-        harness.charm._on_fortune_action(action_event)
-
-        self.assertEqual(action_event.fail.call_args, [("fail this",)])
diff --git a/tests/unit/requirements.txt b/tests/unit/requirements.txt
new file mode 100644
index 0000000..65431fc
--- /dev/null
+++ b/tests/unit/requirements.txt
@@ -0,0 +1,4 @@
+mock
+pytest
+pytest-cov
+pyyaml
diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py
new file mode 100644
index 0000000..dc53ac0
--- /dev/null
+++ b/tests/unit/test_charm.py
@@ -0,0 +1,167 @@
+# Copyright 2020 Canonical Ltd.
+# Licensed under the GPLv3, see LICENCE file for details.
+
+import unittest
+
+from charm import BindK8sCharm
+
+from ops import testing
+from ops.model import ActiveStatus
+
+CONFIG_EMPTY = {
+    'bind_image_path': '',
+    'bind_image_username': '',
+    'bind_image_password': '',
+    'container_config': '',
+    'container_secrets': '',
+    'custom_config_repo': '',
+    'https_proxy': '',
+}
+
+CONFIG_IMAGE_PASSWORD_MISSING = {
+    'bind_image_path': 'example.com/bind:v1',
+    'bind_image_username': 'username',
+    'bind_image_password': '',
+    'container_config': '',
+    'container_secrets': '',
+    'custom_config_repo': '',
+    'https_proxy': '',
+}
+
+CONFIG_VALID = {
+    'bind_image_path': 'example.com/bind:v1',
+    'bind_image_username': '',
+    'bind_image_password': '',
+    'container_config': '',
+    'container_secrets': '',
+    'custom_config_repo': '',
+    'https_proxy': '',
+}
+
+CONFIG_VALID_WITH_CONTAINER_CONFIG = {
+    'bind_image_path': 'example.com/bind:v1',
+    'bind_image_username': '',
+    'bind_image_password': '',
+    'container_config': '"magic_number": 123',
+    'container_secrets': '',
+    'custom_config_repo': '',
+    'https_proxy': '',
+}
+
+CONFIG_VALID_WITH_CONTAINER_CONFIG_AND_SECRETS = {
+    'bind_image_path': 'example.com/bind:v1',
+    'bind_image_username': '',
+    'bind_image_password': '',
+    'container_config': '"magic_number": 123',
+    'container_secrets': '"secret_password": "xyzzy"',
+    'custom_config_repo': '',
+    'https_proxy': '',
+}
+
+
+class TestBindK8s(unittest.TestCase):
+    maxDiff = None
+
+    def setUp(self):
+        self.harness = testing.Harness(BindK8sCharm)
+        self.harness.begin()
+        self.harness.disable_hooks()
+
+    def test_check_for_config_problems_empty_image_path(self):
+        """Confirm that we generate an error if we're not told what image to use."""
+        self.harness.update_config(CONFIG_EMPTY)
+        expected = 'required setting(s) empty: bind_image_path'
+        self.assertEqual(self.harness.charm._check_for_config_problems(), expected)
+
+    def test_check_for_config_problems_empty_image_password(self):
+        """Confirm that we generate an error if we're not given valid registry creds."""
+        self.harness.update_config(CONFIG_IMAGE_PASSWORD_MISSING)
+        expected = 'required setting(s) empty: bind_image_password'
+        self.assertEqual(self.harness.charm._check_for_config_problems(), expected)
+
+    def test_check_for_config_problems_none(self):
+        """Confirm that we accept valid config."""
+        self.harness.update_config(CONFIG_VALID)
+        expected = ''
+        self.assertEqual(self.harness.charm._check_for_config_problems(), expected)
+
+    def test_make_pod_resources(self):
+        """Confirm that we generate the expected pod resources (see LP#1889746)."""
+        expected = {}
+        self.assertEqual(self.harness.charm.make_pod_resources(), expected)
+
+    def test_make_pod_spec_basic(self):
+        """Confirm that we generate the expected pod spec from valid config."""
+        self.harness.update_config(CONFIG_VALID)
+        expected = {
+            'version': 2,
+            'containers': [
+                {
+                    'name': 'bind',
+                    'imageDetails': {'imagePath': 'example.com/bind:v1'},
+                    'ports': [
+                        {'containerPort': 53, 'name': 'domain-tcp', 'protocol': 'TCP'},
+                        {'containerPort': 53, 'name': 'domain-udp', 'protocol': 'UDP'},
+                    ],
+                    'config': {},
+                    'kubernetes': {'readinessProbe': {'exec': {'command': ['/usr/local/bin/dns-check.sh']}}},
+                }
+            ],
+        }
+        self.assertEqual(self.harness.charm.make_pod_spec(), expected)
+
+    def test_make_pod_spec_with_extra_config(self):
+        """Confirm that we generate the expected pod spec from a more involved valid config."""
+        self.harness.update_config(CONFIG_VALID_WITH_CONTAINER_CONFIG)
+        expected = {
+            'version': 2,
+            'containers': [
+                {
+                    'name': 'bind',
+                    'imageDetails': {'imagePath': 'example.com/bind:v1'},
+                    'ports': [
+                        {'containerPort': 53, 'name': 'domain-tcp', 'protocol': 'TCP'},
+                        {'containerPort': 53, 'name': 'domain-udp', 'protocol': 'UDP'},
+                    ],
+                    'config': {'magic_number': 123},
+                    'kubernetes': {'readinessProbe': {'exec': {'command': ['/usr/local/bin/dns-check.sh']}}},
+                }
+            ],
+        }
+        self.assertEqual(self.harness.charm.make_pod_spec(), expected)
+
+    def test_make_pod_spec_with_extra_config_and_secrets(self):
+        """Confirm that we generate the expected pod spec from a more involved valid config that includes secrets."""
+        self.harness.update_config(CONFIG_VALID_WITH_CONTAINER_CONFIG_AND_SECRETS)
+        expected = {
+            'version': 2,
+            'containers': [
+                {
+                    'name': 'bind',
+                    'imageDetails': {'imagePath': 'example.com/bind:v1'},
+                    'ports': [
+                        {'containerPort': 53, 'name': 'domain-tcp', 'protocol': 'TCP'},
+                        {'containerPort': 53, 'name': 'domain-udp', 'protocol': 'UDP'},
+                    ],
+                    'config': {'magic_number': 123, 'secret_password': 'xyzzy'},
+                    'kubernetes': {'readinessProbe': {'exec': {'command': ['/usr/local/bin/dns-check.sh']}}},
+                }
+            ],
+        }
+        self.assertEqual(self.harness.charm.make_pod_spec(), expected)
+
+    def test_configure_pod_as_leader(self):
+        """Confirm that our status is set correctly when we're the leader."""
+        self.harness.enable_hooks()
+        self.harness.set_leader(True)
+        self.harness.update_config(CONFIG_VALID)
+        expected = ActiveStatus('Pod configured')
+        self.assertEqual(self.harness.model.unit.status, expected)
+
+    def test_configure_pod_as_non_leader(self):
+        """Confirm that our status is set correctly when we're not the leader."""
+        self.harness.enable_hooks()
+        self.harness.set_leader(False)
+        self.harness.update_config(CONFIG_VALID)
+        expected = ActiveStatus()
+        self.assertEqual(self.harness.model.unit.status, expected)
diff --git a/tox.ini b/tox.ini
new file mode 100644
index 0000000..91adecf
--- /dev/null
+++ b/tox.ini
@@ -0,0 +1,48 @@
+[tox]
+skipsdist=True
+envlist = unit, functional
+
+[testenv]
+basepython = python3
+setenv =
+  PYTHONPATH = {toxinidir}/build/lib:{toxinidir}/build/venv
+
+[testenv:unit]
+commands =
+    pytest --ignore mod --ignore {toxinidir}/tests/functional \
+      {posargs:-v  --cov=src --cov-report=term-missing --cov-branch}
+deps = -r{toxinidir}/tests/unit/requirements.txt
+       -r{toxinidir}/requirements.txt
+setenv =
+  PYTHONPATH={toxinidir}/src:{toxinidir}/build/lib:{toxinidir}/build/venv
+  TZ=UTC
+
+[testenv:functional]
+passenv =
+  HOME
+  JUJU_REPOSITORY
+  PATH
+commands =
+	pytest -v --ignore mod --ignore {toxinidir}/tests/unit {posargs}
+deps = -r{toxinidir}/tests/functional/requirements.txt
+       -r{toxinidir}/requirements.txt
+
+[testenv:black]
+commands = black --skip-string-normalization --line-length=120 src/ tests/
+deps = black
+
+[testenv:lint]
+commands = flake8 src/ tests/
+# Pin flake8 to 3.7.9 to match focal
+deps =
+    flake8==3.7.9
+
+[flake8]
+exclude =
+    .git,
+    __pycache__,
+    .tox,
+# Ignore E231 (whitespace after comma) because black's formatting conflicts with it
+ignore = E231
+max-line-length = 120
+max-complexity = 10