launchpad-reviewers team mailing list archive

[Merge] ~cjwatson/launchpad:artifactory-pool into launchpad:master

 

Colin Watson has proposed merging ~cjwatson/launchpad:artifactory-pool into launchpad:master.

Commit message:
Add an Artifactory pool implementation

Requested reviews:
  Launchpad code reviewers (launchpad-reviewers)

For more details, see:
https://code.launchpad.net/~cjwatson/launchpad/+git/launchpad/+merge/419207

This will allow publishing files to Artifactory, using approximately the same API as the existing `lp.archivepublisher.diskpool`, so that integrating it into the publisher is as simple as possible.  (The actual publisher integration will come in later branches.)

The main difference between local and Artifactory publishing is that for Artifactory, Launchpad doesn't generate indexes itself; instead, it sets properties on the artifacts in Artifactory to tell Artifactory how each artifact should be indexed.  This means that we need to be able to take an existing artifact and work out which publications are relevant to it.  We use some custom properties ("launchpad.release-id" and "launchpad.source-name") to assist with this, as well as a custom "launchpad.channel" property which will be used for snap-store-style channel publications.
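
As a rough illustration (with values invented for this sketch, not taken from the branch), the key semantic difference is that Artifactory takes the cross product of each property dimension when indexing, while Launchpad keeps one publishing record per location:

```python
from itertools import product

# Invented example: properties Launchpad might set on one .deb artifact.
properties = {
    "launchpad.release-id": "binary:1",
    "launchpad.source-name": "foo",
    "deb.distribution": ["focal", "jammy"],
    "deb.architecture": ["amd64", "i386"],
}

# Artifactory indexes the file for every distribution/architecture
# combination, not just the combinations Launchpad actually published.
indexed = set(
    product(properties["deb.distribution"], properties["deb.architecture"]))
assert ("jammy", "i386") in indexed  # indexed even if never published there
```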

Testing this is interesting, since we can't realistically bring up an actual test Artifactory instance.  Instead, I added a fixture which fakes the relevant API responses, derived by observing the behaviour of the third-party bindings we're using (I've never seen the actual Artifactory implementation).  This is inevitably incomplete and of course there's a risk that it might be out of sync with production behaviour, but it should be reasonably close and it will be easy enough to fix if it turns out to be wrong.
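
A stdlib-only sketch of the fake's core bookkeeping, assuming a simplified `fake_upload` helper (the actual fixture additionally wires these records up to HTTP request callbacks via the `responses` library):

```python
import hashlib
from datetime import datetime, timezone

fs = {}  # path -> metadata, standing in for the fixture's in-memory store

def fake_upload(path: str, body: bytes, properties: dict) -> None:
    # Record the fields the faked Artifactory storage API reports back:
    # timestamps, size, a SHA-1 checksum, and the artifact's properties.
    now = datetime.now(timezone.utc).isoformat()
    fs[path] = {
        "created": now,
        "lastModified": now,
        "size": str(len(body)),
        "checksums": {"sha1": hashlib.sha1(body).hexdigest()},
        "properties": properties,
    }

fake_upload(
    "/pool/f/foo/foo-1.0.deb", b"foo",
    {"launchpad.release-id": "binary:1", "launchpad.source-name": "foo"})
assert fs["/pool/f/foo/foo-1.0.deb"]["size"] == "3"
```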

At the moment this only really implements publishing for Debian-format packages, although I included some early code related to Python wheels as well to illustrate approximately how some of that is likely to work.
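
The Debian/Python split shows up in the per-format artifact patterns the pool matches; the branch evaluates these server-side with AQL `$match`, which uses shell-style wildcards, approximated here with `fnmatch`:

```python
import fnmatch

# Patterns from the branch: artifacts pushed for each repository format.
DEBIAN_PATTERNS = ["*.ddeb", "*.deb", "*.diff.*", "*.dsc", "*.tar.*", "*.udeb"]
PYTHON_PATTERNS = ["*.whl"]

def is_artifact(name: str, patterns) -> bool:
    # AQL "$match" uses shell-style wildcards, so fnmatch approximates it.
    return any(fnmatch.fnmatch(name, pattern) for pattern in patterns)

assert is_artifact("foo_1.0-1_amd64.deb", DEBIAN_PATTERNS)
assert is_artifact("bar-1.0-py3-none-any.whl", PYTHON_PATTERNS)
assert not is_artifact("Packages.gz", DEBIAN_PATTERNS)  # indexes don't match
```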
-- 
Your team Launchpad code reviewers is requested to review the proposed merge of ~cjwatson/launchpad:artifactory-pool into launchpad:master.
diff --git a/lib/lp/archivepublisher/artifactory.py b/lib/lp/archivepublisher/artifactory.py
new file mode 100644
index 0000000..0ca87be
--- /dev/null
+++ b/lib/lp/archivepublisher/artifactory.py
@@ -0,0 +1,366 @@
+# Copyright 2022 Canonical Ltd.  This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Artifactory pool integration for the publisher."""
+
+__all__ = [
+    "ArtifactoryPool",
+    ]
+
+import logging
+import os
+from pathlib import (
+    Path,
+    PurePath,
+    )
+import tempfile
+from typing import Optional
+
+from artifactory import ArtifactoryPath
+from dohq_artifactory.auth import XJFrogArtApiAuth
+import requests
+
+from lp.archivepublisher.diskpool import (
+    FileAddActionEnum,
+    poolify,
+    )
+from lp.services.config import config
+from lp.services.librarian.utils import copy_and_close
+from lp.soyuz.enums import ArchiveRepositoryFormat
+from lp.soyuz.interfaces.files import (
+    IBinaryPackageFile,
+    IPackageReleaseFile,
+    ISourcePackageReleaseFile,
+    )
+from lp.soyuz.interfaces.publishing import (
+    IBinaryPackagePublishingHistory,
+    NotInPool,
+    PoolFileOverwriteError,
+    )
+
+
+class ArtifactoryPoolEntry:
+
+    def __init__(self, rootpath: ArtifactoryPath, source: str, filename: str,
+                 logger: logging.Logger) -> None:
+        self.rootpath = rootpath
+        self.source = source
+        self.filename = filename
+        self.logger = logger
+
+    def debug(self, *args, **kwargs) -> None:
+        self.logger.debug(*args, **kwargs)
+
+    def pathFor(self, component: Optional[str] = None) -> Path:
+        """Return the path for this file in the given component."""
+        # For Artifactory publication, we ignore the component.  There's
+        # only marginal benefit in having it be explicitly represented in
+        # the pool structure, and doing so would introduce significant
+        # complications in terms of having to keep track of components just
+        # in order to update an artifact's properties.
+        return self.rootpath / poolify(self.source) / self.filename
+
+    def makeReleaseID(self, pub_file: IPackageReleaseFile) -> str:
+        """
+        Return a property describing the xPR that this file belongs to.
+
+        The properties set on a particular file may be derived from multiple
+        publications, so it's helpful to have a way to map each file back to
+        a single `SourcePackageRelease` or `BinaryPackageRelease` so that we
+        can somewhat efficiently decide which files to update with
+        information from which publications.  This returns a string that can
+        be set as the "launchpad.release-id" property to keep track of this.
+        """
+        if ISourcePackageReleaseFile.providedBy(pub_file):
+            return "source:%d" % pub_file.sourcepackagereleaseID
+        elif IBinaryPackageFile.providedBy(pub_file):
+            return "binary:%d" % pub_file.binarypackagereleaseID
+        else:
+            raise AssertionError("Unsupported file: %r" % pub_file)
+
+    def calculateProperties(self, release_id, publications):
+        """Return a dict of Artifactory properties to set for this file.
+
+        Artifactory properties are used (among other things) to describe how
+        a file should be indexed, in a similar sort of way to active
+        publishing records in Launchpad.  However, the semantics are a
+        little different where combinations of publishing dimensions are
+        involved.  Launchpad has a publishing history record for each
+        position in the matrix of possible publishing locations, while
+        Artifactory takes the cross product of a file's properties: for
+        example, a package might be published in Launchpad for focal/amd64,
+        focal/i386, and jammy/amd64, but if we set `"deb.distribution":
+        ["focal", "jammy"], "deb.architecture": ["amd64", "i386"]` then
+        Artifactory will add that file to the indexes for all of
+        focal/amd64, focal/i386, jammy/amd64, jammy/i386.
+
+        In practice, the particular set of publishing dimensions we use for
+        Debian-format PPAs means that this usually only matters in corner
+        cases: PPAs only have a single component ("main"), and the only
+        files published for more than one architecture are those for
+        "Architecture: all" packages, so this effectively only matters when
+        architectures are added or removed between series.  It does mean
+        that we can't produce a faithful rendition of PPAs in Artifactory
+        without splitting up the pool, since we also need to do things like
+        overriding the section and phasing differently across series.
+
+        For other indexing formats, we care about channels, which are set as
+        properties and then used to generate subsidiary Artifactory
+        repositories.  Those potentially intersect with series: a given file
+        might be in "stable" for focal but in "candidate" for jammy.  To
+        avoid problems due to this, we need to express channels to
+        Artifactory in a way that doesn't cause files to end up in incorrect
+        locations in the series/channel matrix.  The simplest approach is to
+        prefix the channel with the series (actually suite), so that the
+        above example might end up as `"launchpad.channel": ["focal:stable",
+        "jammy:candidate"]`.  This can easily be matched by AQL queries and
+        used to generate more specific repositories.
+        """
+        properties = {}
+        properties["launchpad.release-id"] = release_id
+        properties["launchpad.source-name"] = self.source
+        if publications:
+            archives = {publication.archive for publication in publications}
+            if len(archives) > 1:
+                raise AssertionError(
+                    "Can't calculate properties across multiple archives: %s" %
+                    archives)
+            repository_format = tuple(archives)[0].repository_format
+            if repository_format == ArchiveRepositoryFormat.DEBIAN:
+                properties["deb.distribution"] = sorted({
+                    pub.distroseries.getSuite(pub.pocket)
+                    for pub in publications})
+                properties["deb.component"] = sorted({
+                    pub.component.name for pub in publications})
+                architectures = sorted({
+                    pub.distroarchseries.architecturetag
+                    for pub in publications
+                    if IBinaryPackagePublishingHistory.providedBy(pub)})
+                if architectures:
+                    properties["deb.architecture"] = architectures
+            else:
+                properties["launchpad.channel"] = [
+                    "%s:%s" % (
+                        pub.distroseries.getSuite(pub.pocket),
+                        pub.channel_string)
+                    for pub in sorted(
+                        publications,
+                        key=lambda p: (
+                            p.distroseries.version, p.pocket,
+                            p.channel_string))
+                    ]
+        return properties
+
+    def addFile(self, pub_file: IPackageReleaseFile):
+        targetpath = self.pathFor()
+        if not targetpath.parent.exists():
+            targetpath.parent.mkdir()
+        lfa = pub_file.libraryfile
+
+        if targetpath.exists():
+            file_hash = targetpath.stat().sha1
+            sha1 = lfa.content.sha1
+            if sha1 != file_hash:
+                raise PoolFileOverwriteError(
+                    "%s != %s for %s" % (sha1, file_hash, targetpath))
+            return FileAddActionEnum.NONE
+
+        self.debug("Deploying %s", targetpath)
+        properties = self.calculateProperties(
+            self.makeReleaseID(pub_file), [])
+        fd, name = tempfile.mkstemp(prefix="temp-download.")
+        f = os.fdopen(fd, "wb")
+        try:
+            lfa.open()
+            copy_and_close(lfa, f)
+            targetpath.deploy_file(name, parameters=properties)
+        finally:
+            f.close()
+            Path(name).unlink()
+        return FileAddActionEnum.FILE_ADDED
+
+    def updateProperties(self, publications, old_properties=None):
+        targetpath = self.pathFor()
+        if old_properties is None:
+            old_properties = targetpath.properties
+        release_id = old_properties.get("launchpad.release-id")
+        if release_id is None:
+            raise AssertionError(
+                "Cannot update properties: launchpad.release-id is not in %s" %
+                old_properties)
+        properties = self.calculateProperties(release_id, publications)
+        if old_properties != properties:
+            targetpath.properties = properties
+
+    def removeFile(self) -> int:
+        targetpath = self.pathFor()
+        try:
+            size = targetpath.stat().size
+        except OSError:
+            raise NotInPool("%s does not exist; skipping." % targetpath)
+        targetpath.unlink()
+        return size
+
+
+class ArtifactoryPool:
+    """A representation of a pool of packages in Artifactory."""
+
+    results = FileAddActionEnum
+
+    def __init__(self, rootpath, logger: logging.Logger) -> None:
+        if not isinstance(rootpath, ArtifactoryPath):
+            rootpath = ArtifactoryPath(rootpath)
+        write_creds = config.artifactory.write_credentials
+        if write_creds is not None:
+            # The X-JFrog-Art-Api header only needs the API key, not the
+            # username.
+            rootpath.auth = XJFrogArtApiAuth(write_creds.split(":", 1)[1])
+        rootpath.session = self._makeSession()
+        rootpath.timeout = config.launchpad.urlfetch_timeout
+        self.rootpath = rootpath
+        self.logger = logger
+
+    def _makeSession(self) -> requests.Session:
+        """Make a suitable requests session for talking to Artifactory."""
+        # XXX cjwatson 2022-04-01: This somewhat duplicates parts of
+        # lp.services.timeout.URLFetcher.fetch; we should work out a better
+        # abstraction so that we can reuse code more directly.  (The
+        # Artifactory bindings can't be told to use
+        # lp.services.timeout.urlfetch directly, but only given a substitute
+        # session.)
+        session = requests.Session()
+        session.trust_env = False
+        if config.launchpad.http_proxy:
+            session.proxies = {
+                "http": config.launchpad.http_proxy,
+                "https": config.launchpad.https_proxy,
+                }
+        if config.launchpad.ca_certificates_path is not None:
+            session.verify = config.launchpad.ca_certificates_path
+        return session
+
+    def _getEntry(self, sourcename, file) -> ArtifactoryPoolEntry:
+        """See `DiskPool._getEntry`."""
+        return ArtifactoryPoolEntry(
+            self.rootpath, sourcename, file, self.logger)
+
+    def pathFor(self, comp: str, source: str,
+                file: Optional[str] = None) -> Path:
+        """Return the path for the given pool folder or file.
+
+        If file is None, the path to the folder containing all packages
+        for the given source package name will be returned.
+
+        If file is specified, the path to the specific package file will
+        be returned.
+        """
+        # For Artifactory publication, we ignore the component.  There's
+        # only marginal benefit in having it be explicitly represented in
+        # the pool structure, and doing so would introduce significant
+        # complications in terms of having to keep track of components just
+        # in order to update an artifact's properties.
+        path = self.rootpath / poolify(source)
+        if file:
+            path = path / file
+        return path
+
+    def addFile(self, component: str, sourcename: str, filename: str,
+                pub_file: IPackageReleaseFile):
+        """Add a file with the given contents to the pool.
+
+        `sourcename` and `filename` are used to calculate the location.
+
+        pub_file is an `IPackageReleaseFile` providing the file's contents
+        and SHA-1 hash.  The SHA-1 hash is used to compare the given file
+        with an existing file, if one exists.
+
+        There are three possible outcomes:
+        - If the file doesn't exist in the pool, it will be written from the
+        given contents and results.FILE_ADDED will be returned.
+
+        - If the file already exists in the pool, the hash of the file on
+        disk will be calculated and compared with the hash provided. If they
+        fail to match, PoolFileOverwriteError will be raised.
+
+        - If the file already exists and the hash check passes, results.NONE
+        will be returned and nothing will be done.
+
+        This is similar to `DiskPool.addFile`, except that there is no
+        symlink handling and the component is ignored.
+        """
+        entry = self._getEntry(sourcename, filename)
+        return entry.addFile(pub_file)
+
+    def removeFile(self, component: str, sourcename: str,
+                   filename: str) -> int:
+        """Remove the specified file from the pool.
+
+        There are two possible outcomes:
+        - If the specified file does not exist, NotInPool will be raised.
+
+        - If the specified file exists, it will simply be deleted, and its
+        size will be returned.
+
+        This is similar to `DiskPool.removeFile`, except that there is no
+        symlink handling and the component is ignored.
+        """
+        entry = self._getEntry(sourcename, filename)
+        return entry.removeFile()
+
+    def updateProperties(self, sourcename, filename, publications,
+                         old_properties=None):
+        """Update a file's properties in Artifactory."""
+        entry = self._getEntry(sourcename, filename)
+        entry.updateProperties(publications, old_properties=old_properties)
+
+    def getArtifactPatterns(self, repository_format):
+        """Get patterns matching artifacts in a repository of this format.
+
+        The returned patterns are AQL wildcards matching the artifacts that
+        may be pushed for this repository format.  They do not match
+        indexes.
+        """
+        if repository_format == ArchiveRepositoryFormat.DEBIAN:
+            return [
+                "*.ddeb",
+                "*.deb",
+                "*.diff.*",
+                "*.dsc",
+                "*.tar.*",
+                "*.udeb",
+                ]
+        elif repository_format == ArchiveRepositoryFormat.PYTHON:
+            return ["*.whl"]
+        else:
+            raise AssertionError(
+                "Unknown repository format %r" % repository_format)
+
+    def getAllArtifacts(self, repository_name, repository_format):
+        """Get a mapping of all artifacts to their current properties.
+
+        Returns a mapping of path names relative to the repository root to a
+        key/value mapping of properties for each path.
+        """
+        # See the JFrog AQL documentation (URL backslash-newline-wrapped for
+        # length):
+        #   https://www.jfrog.com/confluence/display/JFROG/\
+        #     Artifactory+Query+Language
+        artifacts = self.rootpath.aql(
+            "items.find",
+            {
+                "repo": repository_name,
+                "$or": [
+                    {"name": {"$match": pattern}}
+                    for pattern in self.getArtifactPatterns(repository_format)
+                    ],
+                },
+            ".include",
+            # We don't use "repo", but the AQL documentation says that
+            # non-admin users must include all of "name", "repo", and "path"
+            # in the include directive.
+            ["repo", "path", "name", "property"])
+        return {
+            PurePath(artifact["path"], artifact["name"]): {
+                prop["key"]: prop["value"] for prop in artifact["properties"]
+                }
+            for artifact in artifacts}
diff --git a/lib/lp/archivepublisher/diskpool.py b/lib/lp/archivepublisher/diskpool.py
index fe21aec..3f59826 100644
--- a/lib/lp/archivepublisher/diskpool.py
+++ b/lib/lp/archivepublisher/diskpool.py
@@ -1,7 +1,13 @@
 # Copyright 2009-2016 Canonical Ltd.  This software is licensed under the
 # GNU Affero General Public License version 3 (see the file LICENSE).
 
-__all__ = ['DiskPoolEntry', 'DiskPool', 'poolify', 'unpoolify']
+__all__ = [
+    'DiskPool',
+    'DiskPoolEntry',
+    'FileAddActionEnum',
+    'poolify',
+    'unpoolify',
+    ]
 
 import logging
 import os
@@ -27,12 +33,15 @@ from lp.soyuz.interfaces.publishing import (
     )
 
 
-def poolify(source: str, component: str) -> Path:
+def poolify(source: str, component: Optional[str] = None) -> Path:
     """Poolify a given source and component name."""
     if source.startswith("lib"):
-        return Path(component) / source[:4] / source
+        path = Path(source[:4]) / source
     else:
-        return Path(component) / source[:1] / source
+        path = Path(source[:1]) / source
+    if component is not None:
+        path = Path(component) / path
+    return path
 
 
 def unpoolify(self, path: Path) -> Tuple[str, str, Optional[str]]:
@@ -381,7 +390,7 @@ class DiskPool:
 
     def __init__(self, rootpath, temppath, logger: logging.Logger) -> None:
         self.rootpath = Path(rootpath)
-        self.temppath = Path(temppath)
+        self.temppath = Path(temppath) if temppath is not None else None
         self.entries = {}
         self.logger = logger
 
diff --git a/lib/lp/archivepublisher/tests/artifactory_fixture.py b/lib/lp/archivepublisher/tests/artifactory_fixture.py
new file mode 100644
index 0000000..1503afa
--- /dev/null
+++ b/lib/lp/archivepublisher/tests/artifactory_fixture.py
@@ -0,0 +1,219 @@
+# Copyright 2022 Canonical Ltd.  This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""A fixture that simulates parts of the Artifactory API."""
+
+__all__ = [
+    "FakeArtifactoryFixture",
+    ]
+
+from datetime import (
+    datetime,
+    timezone,
+    )
+import fnmatch
+import hashlib
+import json
+from pathlib import Path
+import re
+from urllib.parse import (
+    parse_qs,
+    unquote,
+    urlparse,
+    )
+
+from fixtures import Fixture
+import responses
+
+
+class FakeArtifactoryFixture(Fixture):
+
+    def __init__(self, base_url, repository_name):
+        self.base_url = base_url
+        self.repository_name = repository_name
+        self.repo_url = "%s/%s" % (base_url, self.repository_name)
+        self.api_url = "%s/api/storage/%s" % (
+            self.base_url, self.repository_name)
+        self.search_url = "%s/api/search/aql" % self.base_url
+        self._fs = {}
+        self.add_dir("/")
+
+    def _setUp(self):
+        self.requests_mock = responses.RequestsMock(
+            assert_all_requests_are_fired=False)
+        self.requests_mock.start()
+        self.addCleanup(self.requests_mock.stop)
+        repo_url_regex = re.compile(r"^%s/.*" % re.escape(self.repo_url))
+        api_url_regex = re.compile(r"^%s/.*" % re.escape(self.api_url))
+        self.requests_mock.add(responses.CallbackResponse(
+            method="GET", url=repo_url_regex, callback=self._handle_download,
+            stream=True))
+        self.requests_mock.add_callback(
+            "GET", api_url_regex, callback=self._handle_stat)
+        self.requests_mock.add_callback(
+            "PUT", repo_url_regex, callback=self._handle_upload)
+        self.requests_mock.add_callback(
+            "PUT", api_url_regex, callback=self._handle_set_properties)
+        self.requests_mock.add_callback(
+            "POST", self.search_url, callback=self._handle_aql)
+        self.requests_mock.add_callback(
+            "DELETE", repo_url_regex, callback=self._handle_delete)
+        self.requests_mock.add_callback(
+            "DELETE", api_url_regex, callback=self._handle_delete_properties)
+
+    def add_dir(self, path):
+        now = datetime.now(timezone.utc).isoformat()
+        self._fs[path] = {"created": now, "lastModified": now}
+
+    def add_file(self, path, contents, size, properties):
+        now = datetime.now(timezone.utc).isoformat()
+        body = contents.read(size)
+        self._fs[path] = {
+            "created": now,
+            "lastModified": now,
+            "size": str(size),
+            "checksums": {"sha1": hashlib.sha1(body).hexdigest()},
+            "body": body,
+            "properties": properties,
+            }
+
+    def remove_file(self, path):
+        del self._fs[path]
+
+    def _handle_download(self, request):
+        """Handle a request to download an existing file."""
+        path = urlparse(request.url[len(self.repo_url):]).path
+        if path in self._fs and "size" in self._fs[path]:
+            return (
+                200, {"Content-Type": "application/octet-stream"},
+                self._fs[path]["body"])
+        else:
+            return 404, {}, "Unable to find item"
+
+    def _handle_stat(self, request):
+        """Handle a request to stat an existing file."""
+        parsed_url = urlparse(request.url[len(self.api_url):])
+        path = parsed_url.path
+        if path in self._fs:
+            stat = {"repo": self.repository_name, "path": path}
+            stat.update(self._fs[path])
+            stat.pop("body", None)
+            if parsed_url.query != "properties":
+                stat.pop("properties", None)
+            return 200, {}, json.dumps(stat)
+        else:
+            return 404, {}, "Unable to find item"
+
+    def _handle_upload(self, request):
+        """Handle a request to upload a directory or file."""
+        parsed_url = urlparse(request.url[len(self.repo_url):])
+        path = parsed_url.path
+        if path.endswith("/"):
+            self.add_dir(path.rstrip("/"))
+        elif path.rsplit("/", 1)[0] in self._fs:
+            properties = {}
+            for param in parsed_url.params.split(";"):
+                key, value = param.split("=", 1)
+                properties[unquote(key)] = unquote(value)
+            self.add_file(
+                path, request.body,
+                int(request.headers["Content-Length"]), properties)
+        return 201, {}, ""
+
+    def _handle_set_properties(self, request):
+        """Handle a request to set properties on an existing file."""
+        parsed_url = urlparse(request.url[len(self.api_url):])
+        path = parsed_url.path
+        if path in self._fs:
+            query = parse_qs(parsed_url.query)
+            for param in query["properties"][0].split(";"):
+                key, value = param.split("=", 1)
+                self._fs[path]["properties"][unquote(key)] = unquote(value)
+            return 204, {}, ""
+        else:
+            return 404, {}, "Unable to find item"
+
+    def _handle_delete_properties(self, request):
+        """Handle a request to delete properties from an existing file."""
+        parsed_url = urlparse(request.url[len(self.api_url):])
+        path = parsed_url.path
+        if path in self._fs:
+            query = parse_qs(parsed_url.query)
+            for key in query["properties"][0].split(","):
+                del self._fs[path]["properties"][unquote(key)]
+            return 204, {}, ""
+        else:
+            return 404, {}, "Unable to find item"
+
+    def _make_aql_item(self, path):
+        """Return an AQL response item based on an entry in `self._fs`."""
+        path_obj = Path(path)
+        return {
+            "repo": self.repository_name,
+            "path": path_obj.parent.as_posix()[1:],
+            "name": path_obj.name,
+            "properties": [
+                {"key": key, "value": value}
+                for key, value in sorted(
+                    self._fs[path]["properties"].items())],
+            }
+
+    def _matches_aql(self, item, criteria):
+        """Return True if an item matches some AQL criteria.
+
+        This is definitely incomplete, but good enough for our testing
+        needs.
+
+        https://www.jfrog.com/confluence/display/JFROG/\
+          Artifactory+Query+Language
+        """
+        for key, value in criteria.items():
+            if key == "$and":
+                if not all(self._matches_aql(item, v) for v in value):
+                    return False
+            elif key == "$or":
+                if not any(self._matches_aql(item, v) for v in value):
+                    return False
+            elif key.startswith("$"):
+                raise ValueError("Unhandled AQL operator: %s" % key)
+            elif key in item:
+                if isinstance(value, dict) and len(value) == 1:
+                    [(comp_op, comp_value)] = value.items()
+                    if comp_op == "$match":
+                        if not fnmatch.fnmatch(item[key], comp_value):
+                            return False
+                    else:
+                        raise ValueError(
+                            "Unhandled AQL comparison operator: %s" % comp_op)
+                elif isinstance(value, str):
+                    if item[key] != value:
+                        return False
+                else:
+                    raise ValueError("Unhandled AQL criterion: %r" % value)
+            else:
+                raise ValueError("Unhandled AQL key: %s" % key)
+        return True
+
+    def _handle_aql(self, request):
+        """Handle a request to perform an AQL search.
+
+        No, of course we don't implement a full AQL parser.
+        """
+        match = re.match(
+            r"^items\.find\((.*?)\)\.include\((.*?)\)$", request.body)
+        if match is None:
+            return 400, {}, ""
+        # Treating this as JSON is cheating a bit, but it works.
+        criteria = json.loads(match.group(1))
+        items = [
+            self._make_aql_item(path)
+            for path in sorted(self._fs) if "size" in self._fs[path]]
+        results = [item for item in items if self._matches_aql(item, criteria)]
+        return 200, {}, json.dumps({"results": results})
+
+    def _handle_delete(self, request):
+        """Handle a request to delete an existing file."""
+        path = urlparse(request.url[len(self.repo_url):]).path
+        if not path.endswith("/") and path in self._fs:
+            self.remove_file(path)
+        return 200, {}, ""
diff --git a/lib/lp/archivepublisher/tests/test_artifactory.py b/lib/lp/archivepublisher/tests/test_artifactory.py
new file mode 100644
index 0000000..52ec77b
--- /dev/null
+++ b/lib/lp/archivepublisher/tests/test_artifactory.py
@@ -0,0 +1,327 @@
+# Copyright 2022 Canonical Ltd.  This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Artifactory pool tests."""
+
+from operator import attrgetter
+from pathlib import PurePath
+
+import transaction
+from zope.component import getUtility
+
+from lp.archivepublisher.artifactory import ArtifactoryPool
+from lp.archivepublisher.tests.artifactory_fixture import (
+    FakeArtifactoryFixture,
+    )
+from lp.archivepublisher.tests.test_pool import (
+    FakeReleaseType,
+    PoolTestingFile,
+    )
+from lp.registry.interfaces.pocket import PackagePublishingPocket
+from lp.registry.interfaces.sourcepackage import SourcePackageFileType
+from lp.services.log.logger import BufferLogger
+from lp.soyuz.enums import (
+    ArchivePurpose,
+    ArchiveRepositoryFormat,
+    BinaryPackageFileType,
+    )
+from lp.soyuz.interfaces.publishing import (
+    IPublishingSet,
+    PoolFileOverwriteError,
+    )
+from lp.testing import (
+    TestCase,
+    TestCaseWithFactory,
+    )
+from lp.testing.layers import (
+    BaseLayer,
+    LaunchpadZopelessLayer,
+    )
+
+
+class ArtifactoryPoolTestingFile(PoolTestingFile):
+    """`PoolTestingFile` variant for Artifactory.
+
+    Artifactory publishing doesn't use the component to form paths, and has
+    some additional features.
+    """
+
+    def addToPool(self, component=None):
+        return super().addToPool(None)
+
+    def removeFromPool(self, component=None):
+        return super().removeFromPool(None)
+
+    def checkExists(self, component=None):
+        return super().checkExists(None)
+
+    def checkIsLink(self, component=None):
+        return super().checkIsLink(None)
+
+    def checkIsFile(self, component=None):
+        return super().checkIsFile(None)
+
+    def getProperties(self):
+        path = self.pool.pathFor(None, self.sourcename, self.filename)
+        return path.properties
+
+
+class TestArtifactoryPool(TestCase):
+
+    layer = BaseLayer
+
+    def setUp(self):
+        super().setUp()
+        self.base_url = "https://foo.example.com/artifactory"
+        self.repository_name = "repository"
+        self.artifactory = self.useFixture(
+            FakeArtifactoryFixture(self.base_url, self.repository_name))
+        root_url = "%s/%s/pool" % (self.base_url, self.repository_name)
+        self.pool = ArtifactoryPool(root_url, BufferLogger())
+
+    def test_addFile(self):
+        foo = ArtifactoryPoolTestingFile(
+            self.pool, "foo", "foo-1.0.deb",
+            release_type=FakeReleaseType.BINARY, release_id=1)
+        self.assertFalse(foo.checkIsFile())
+        result = foo.addToPool()
+        self.assertEqual(self.pool.results.FILE_ADDED, result)
+        self.assertTrue(foo.checkIsFile())
+        self.assertEqual(
+            {
+                "launchpad.release-id": "binary:1",
+                "launchpad.source-name": "foo",
+                },
+            foo.getProperties())
+
+    def test_addFile_exists_identical(self):
+        foo = ArtifactoryPoolTestingFile(
+            self.pool, "foo", "foo-1.0.deb",
+            release_type=FakeReleaseType.BINARY, release_id=1)
+        foo.addToPool()
+        self.assertTrue(foo.checkIsFile())
+        result = foo.addToPool()
+        self.assertEqual(self.pool.results.NONE, result)
+        self.assertTrue(foo.checkIsFile())
+
+    def test_addFile_exists_overwrite(self):
+        foo = ArtifactoryPoolTestingFile(
+            self.pool, "foo", "foo-1.0.deb",
+            release_type=FakeReleaseType.BINARY, release_id=1)
+        foo.addToPool()
+        self.assertTrue(foo.checkIsFile())
+        foo.contents = b"different"
+        self.assertRaises(PoolFileOverwriteError, foo.addToPool)
+
+    def test_removeFile(self):
+        foo = ArtifactoryPoolTestingFile(self.pool, "foo", "foo-1.0.deb")
+        foo.addToPool()
+        self.assertTrue(foo.checkIsFile())
+        size = foo.removeFromPool()
+        self.assertFalse(foo.checkExists())
+        self.assertEqual(3, size)
+
+    def test_getArtifactPatterns_debian(self):
+        self.assertEqual(
+            [
+                "*.ddeb",
+                "*.deb",
+                "*.diff.*",
+                "*.dsc",
+                "*.tar.*",
+                "*.udeb",
+                ],
+            self.pool.getArtifactPatterns(ArchiveRepositoryFormat.DEBIAN))
+
+    def test_getArtifactPatterns_python(self):
+        self.assertEqual(
+            ["*.whl"],
+            self.pool.getArtifactPatterns(ArchiveRepositoryFormat.PYTHON))
+
+    def test_getAllArtifacts(self):
+        # getAllArtifacts mostly relies on constructing a correct AQL query,
+        # which we can't meaningfully test without a real Artifactory
+        # instance, although `FakeArtifactoryFixture` tries to do something
+        # with it.  This test mainly ensures that we transform the response
+        # correctly.
+        ArtifactoryPoolTestingFile(
+            self.pool, "foo", "foo-1.0.deb",
+            release_type=FakeReleaseType.BINARY, release_id=1).addToPool()
+        ArtifactoryPoolTestingFile(
+            self.pool, "foo", "foo-1.1.deb",
+            release_type=FakeReleaseType.BINARY, release_id=2).addToPool()
+        ArtifactoryPoolTestingFile(
+            self.pool, "bar", "bar-1.0.whl",
+            release_type=FakeReleaseType.BINARY, release_id=3).addToPool()
+        self.assertEqual(
+            {
+                PurePath("pool/f/foo/foo-1.0.deb"): {
+                    "launchpad.release-id": "binary:1",
+                    "launchpad.source-name": "foo",
+                    },
+                PurePath("pool/f/foo/foo-1.1.deb"): {
+                    "launchpad.release-id": "binary:2",
+                    "launchpad.source-name": "foo",
+                    },
+                },
+            self.pool.getAllArtifacts(
+                self.repository_name, ArchiveRepositoryFormat.DEBIAN))
+        self.assertEqual(
+            {
+                PurePath("pool/b/bar/bar-1.0.whl"): {
+                    "launchpad.release-id": "binary:3",
+                    "launchpad.source-name": "bar",
+                    },
+                },
+            self.pool.getAllArtifacts(
+                self.repository_name, ArchiveRepositoryFormat.PYTHON))
+
+
+class TestArtifactoryPoolFromLibrarian(TestCaseWithFactory):
+
+    layer = LaunchpadZopelessLayer
+
+    def setUp(self):
+        super().setUp()
+        self.base_url = "https://foo.example.com/artifactory"
+        self.repository_name = "repository"
+        self.artifactory = self.useFixture(
+            FakeArtifactoryFixture(self.base_url, self.repository_name))
+        root_url = "%s/%s/pool" % (self.base_url, self.repository_name)
+        self.pool = ArtifactoryPool(root_url, BufferLogger())
+
+    def test_updateProperties_debian_source(self):
+        archive = self.factory.makeArchive(purpose=ArchivePurpose.PPA)
+        dses = [
+            self.factory.makeDistroSeries(distribution=archive.distribution)
+            for _ in range(2)]
+        spph = self.factory.makeSourcePackagePublishingHistory(
+            archive=archive, distroseries=dses[0],
+            pocket=PackagePublishingPocket.RELEASE, component="main",
+            sourcepackagename="foo")
+        spr = spph.sourcepackagerelease
+        sprf = self.factory.makeSourcePackageReleaseFile(
+            sourcepackagerelease=spr,
+            library_file=self.factory.makeLibraryFileAlias(
+                filename="foo_1.0.dsc"),
+            filetype=SourcePackageFileType.DSC)
+        spphs = [spph]
+        spphs.append(spph.copyTo(
+            dses[1], PackagePublishingPocket.RELEASE, archive))
+        transaction.commit()
+        self.pool.addFile(None, spr.name, sprf.libraryfile.filename, sprf)
+        path = self.pool.rootpath / "f" / "foo" / "foo_1.0.dsc"
+        self.assertTrue(path.exists())
+        self.assertFalse(path.is_symlink())
+        self.assertEqual(
+            {
+                "launchpad.release-id": "source:%d" % spr.id,
+                "launchpad.source-name": "foo",
+                },
+            path.properties)
+        self.pool.updateProperties(spr.name, sprf.libraryfile.filename, spphs)
+        self.assertEqual(
+            {
+                "launchpad.release-id": "source:%d" % spr.id,
+                "launchpad.source-name": "foo",
+                "deb.distribution": ",".join(sorted(ds.name for ds in dses)),
+                "deb.component": "main",
+                },
+            path.properties)
+
+    def test_updateProperties_debian_binary_multiple_series(self):
+        archive = self.factory.makeArchive(purpose=ArchivePurpose.PPA)
+        dses = [
+            self.factory.makeDistroSeries(distribution=archive.distribution)
+            for _ in range(2)]
+        processor = self.factory.makeProcessor()
+        dases = [
+            self.factory.makeDistroArchSeries(
+                distroseries=ds, architecturetag=processor.name)
+            for ds in dses]
+        bpph = self.factory.makeBinaryPackagePublishingHistory(
+            archive=archive, distroarchseries=dases[0],
+            pocket=PackagePublishingPocket.RELEASE, component="main",
+            sourcepackagename="foo", binarypackagename="foo",
+            architecturespecific=True)
+        bpr = bpph.binarypackagerelease
+        bpf = self.factory.makeBinaryPackageFile(
+            binarypackagerelease=bpr,
+            library_file=self.factory.makeLibraryFileAlias(
+                filename="foo_1.0_%s.deb" % processor.name),
+            filetype=BinaryPackageFileType.DEB)
+        bpphs = [bpph]
+        bpphs.append(bpph.copyTo(
+            dses[1], PackagePublishingPocket.RELEASE, archive)[0])
+        transaction.commit()
+        self.pool.addFile(
+            None, bpr.sourcepackagename, bpf.libraryfile.filename, bpf)
+        path = (
+            self.pool.rootpath / "f" / "foo" /
+            ("foo_1.0_%s.deb" % processor.name))
+        self.assertTrue(path.exists())
+        self.assertFalse(path.is_symlink())
+        self.assertEqual(
+            {
+                "launchpad.release-id": "binary:%d" % bpr.id,
+                "launchpad.source-name": "foo",
+                },
+            path.properties)
+        self.pool.updateProperties(
+            bpr.sourcepackagename, bpf.libraryfile.filename, bpphs)
+        self.assertEqual(
+            {
+                "launchpad.release-id": "binary:%d" % bpr.id,
+                "launchpad.source-name": "foo",
+                "deb.distribution": ",".join(sorted(
+                    ds.name
+                    for ds in sorted(dses, key=attrgetter("version")))),
+                "deb.component": "main",
+                "deb.architecture": processor.name,
+                },
+            path.properties)
+
+    def test_updateProperties_debian_binary_multiple_architectures(self):
+        archive = self.factory.makeArchive(purpose=ArchivePurpose.PPA)
+        ds = self.factory.makeDistroSeries(distribution=archive.distribution)
+        dases = [
+            self.factory.makeDistroArchSeries(distroseries=ds)
+            for _ in range(2)]
+        bpb = self.factory.makeBinaryPackageBuild(
+            archive=archive, distroarchseries=dases[0],
+            pocket=PackagePublishingPocket.RELEASE, sourcepackagename="foo")
+        bpr = self.factory.makeBinaryPackageRelease(
+            binarypackagename="foo", build=bpb, component="main",
+            architecturespecific=False)
+        bpf = self.factory.makeBinaryPackageFile(
+            binarypackagerelease=bpr,
+            library_file=self.factory.makeLibraryFileAlias(
+                filename="foo_1.0_all.deb"),
+            filetype=BinaryPackageFileType.DEB)
+        bpphs = getUtility(IPublishingSet).publishBinaries(
+            archive, ds, PackagePublishingPocket.RELEASE,
+            {bpr: (bpr.component, bpr.section, bpr.priority, None)})
+        transaction.commit()
+        self.pool.addFile(
+            None, bpr.sourcepackagename, bpf.libraryfile.filename, bpf)
+        path = self.pool.rootpath / "f" / "foo" / "foo_1.0_all.deb"
+        self.assertTrue(path.exists())
+        self.assertFalse(path.is_symlink())
+        self.assertEqual(
+            {
+                "launchpad.release-id": "binary:%d" % bpr.id,
+                "launchpad.source-name": "foo",
+                },
+            path.properties)
+        self.pool.updateProperties(
+            bpr.sourcepackagename, bpf.libraryfile.filename, bpphs)
+        self.assertEqual(
+            {
+                "launchpad.release-id": "binary:%d" % bpr.id,
+                "launchpad.source-name": "foo",
+                "deb.distribution": ds.name,
+                "deb.component": "main",
+                "deb.architecture": ",".join(sorted(
+                    das.architecturetag for das in dases)),
+                },
+            path.properties)
diff --git a/lib/lp/archivepublisher/tests/test_pool.py b/lib/lp/archivepublisher/tests/test_pool.py
index ae65460..6863bbc 100644
--- a/lib/lp/archivepublisher/tests/test_pool.py
+++ b/lib/lp/archivepublisher/tests/test_pool.py
@@ -9,11 +9,25 @@ import shutil
 from tempfile import mkdtemp
 import unittest
 
+from lazr.enum import (
+    EnumeratedType,
+    Item,
+    )
+from zope.interface import (
+    alsoProvides,
+    implementer,
+    )
+
 from lp.archivepublisher.diskpool import (
     DiskPool,
     poolify,
     )
 from lp.services.log.logger import BufferLogger
+from lp.soyuz.interfaces.files import (
+    IBinaryPackageFile,
+    IPackageReleaseFile,
+    ISourcePackageReleaseFile,
+    )
 
 
 class FakeLibraryFileContent:
@@ -41,24 +55,41 @@ class FakeLibraryFileAlias:
         pass
 
 
+class FakeReleaseType(EnumeratedType):
+
+    SOURCE = Item("Source")
+    BINARY = Item("Binary")
+
+
+@implementer(IPackageReleaseFile)
 class FakePackageReleaseFile:
 
-    def __init__(self, contents):
+    def __init__(self, contents, release_type, release_id):
         self.libraryfile = FakeLibraryFileAlias(contents)
+        if release_type == FakeReleaseType.SOURCE:
+            self.sourcepackagereleaseID = release_id
+            alsoProvides(self, ISourcePackageReleaseFile)
+        elif release_type == FakeReleaseType.BINARY:
+            self.binarypackagereleaseID = release_id
+            alsoProvides(self, IBinaryPackageFile)
 
 
 class PoolTestingFile:
 
-    def __init__(self, pool, sourcename, filename):
+    def __init__(self, pool, sourcename, filename,
+                 release_type=FakeReleaseType.BINARY, release_id=1):
         self.pool = pool
         self.sourcename = sourcename
         self.filename = filename
         self.contents = sourcename.encode("UTF-8")
+        self.release_type = release_type
+        self.release_id = release_id
 
     def addToPool(self, component: str):
         return self.pool.addFile(
             component, self.sourcename, self.filename,
-            FakePackageReleaseFile(self.contents))
+            FakePackageReleaseFile(
+                self.contents, self.release_type, self.release_id))
 
     def removeFromPool(self, component: str) -> int:
         return self.pool.removeFile(component, self.sourcename, self.filename)
diff --git a/lib/lp/services/config/schema-lazr.conf b/lib/lp/services/config/schema-lazr.conf
index 84f9294..ef60496 100644
--- a/lib/lp/services/config/schema-lazr.conf
+++ b/lib/lp/services/config/schema-lazr.conf
@@ -33,6 +33,26 @@ dbuser: archivepublisher
 run_parts_location: none
 
 
+[artifactory]
+# Base URL for publishing suitably-configured archives to Artifactory.
+# datatype: string
+base_url: none
+
+# Credentials for reading from Artifactory repositories (formatted as
+# "user:token").
+# datatype: string
+read_credentials: none
+
+# Credentials for writing to Artifactory repositories (formatted as
+# "user:token").
+# datatype: string
+write_credentials: none
+
+# Timeout (in seconds) for requests to Artifactory.
+# datatype: integer
+timeout: 15
+
+
 [binaryfile_expire]
 dbuser: binaryfile-expire
 
diff --git a/lib/lp/soyuz/interfaces/files.py b/lib/lp/soyuz/interfaces/files.py
index c7bc8d3..811b230 100644
--- a/lib/lp/soyuz/interfaces/files.py
+++ b/lib/lp/soyuz/interfaces/files.py
@@ -18,6 +18,7 @@ from zope.schema import (
 
 from lp import _
 from lp.services.librarian.interfaces import ILibraryFileAlias
+from lp.soyuz.interfaces.binarypackagerelease import IBinaryPackageRelease
 from lp.soyuz.interfaces.sourcepackagerelease import ISourcePackageRelease
 
 
@@ -41,10 +42,14 @@ class IPackageReleaseFile(Interface):
 class IBinaryPackageFile(IPackageReleaseFile):
     """A binary package to librarian link record."""
 
-    binarypackagerelease = Int(
-            title=_('The binarypackagerelease being published'),
-            required=True, readonly=False,
-            )
+    binarypackagerelease = Reference(
+        IBinaryPackageRelease,
+        title=_('The binary package release being published'),
+        required=True, readonly=False)
+
+    binarypackagereleaseID = Int(
+        title=_('ID of the binary package release being published'),
+        required=True, readonly=False)
 
 
 class ISourcePackageReleaseFile(IPackageReleaseFile):
diff --git a/requirements/launchpad.txt b/requirements/launchpad.txt
index 257211b..981acfb 100644
--- a/requirements/launchpad.txt
+++ b/requirements/launchpad.txt
@@ -40,6 +40,7 @@ defusedxml==0.6.0
 distro==1.4.0
 dkimpy==1.0.4
 dnspython==1.16.0
+dohq-artifactory==0.7.630
 dulwich==0.19.16
 eggtestinfo==0.3
 enum34==1.1.6
@@ -119,6 +120,7 @@ pygettextpo==0.2
 pygpgme==0.3+lp1
 PyHamcrest==1.9.0
 pyinotify==0.9.4
+PyJWT==1.7.1
 pymacaroons==0.13.0
 pymemcache==3.5.0
 PyNaCl==1.3.0
diff --git a/setup.cfg b/setup.cfg
index a6c8090..db4af9b 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -25,6 +25,7 @@ install_requires =
     defusedxml
     distro
     dkimpy[ed25519]
+    dohq-artifactory
     feedparser
     fixtures
     # Required for gunicorn[gthread].  We depend on it explicitly because
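
As the proposal description notes, Artifactory publishing needs to map an existing artifact back to its Launchpad publications via the custom `launchpad.release-id` property. A minimal sketch of that mapping, based only on the property values asserted in the tests above (`"source:<id>"`, `"binary:<id>"`) — the helper name and error handling are hypothetical, not part of this branch:

```python
def parse_release_id(properties):
    """Split a launchpad.release-id property into (release_type, release_id).

    `properties` is a dict of Artifactory artifact properties, as returned
    by e.g. `ArtifactoryPool.getAllArtifacts` in the tests above.
    """
    release_id = properties["launchpad.release-id"]
    # Values take the form "source:<id>" or "binary:<id>", where <id> is a
    # SourcePackageRelease or BinaryPackageRelease database ID respectively.
    release_type, _, id_str = release_id.partition(":")
    if release_type not in ("source", "binary") or not id_str.isdigit():
        raise ValueError("Malformed launchpad.release-id: %r" % release_id)
    return release_type, int(id_str)


# For example, the properties asserted in test_addFile:
parse_release_id({
    "launchpad.release-id": "binary:1",
    "launchpad.source-name": "foo",
})  # ("binary", 1)
```

This illustrates why the tests check the exact property strings: the later publisher integration (in follow-up branches) will rely on round-tripping these values.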