
canonical-ubuntu-qa team mailing list archive

[Merge] ~uralt/autopkgtest-cloud:stats-refactor into autopkgtest-cloud:master


Ural Tunaboyu has proposed merging ~uralt/autopkgtest-cloud:stats-refactor into autopkgtest-cloud:master.

Requested reviews:
  Canonical's Ubuntu QA (canonical-ubuntu-qa)

For more details, see:
https://code.launchpad.net/~uralt/autopkgtest-cloud/+git/autopkgtest-cloud/+merge/475428

This MP moves the statistics calculations into a dedicated script, intended to run as a systemd unit on a timer. The results are cached on disk as JSON and uploaded to InfluxDB. The functionality is complete, but there are several concerns I would like comments on before this is merged:

 - There is a nontrivial amount of code duplication between utils.py and this script, particularly read_config_file, used to get the autopkgtest DB connection. get_release_arches, get_source_versions, and success_count_for_release_and_arch are likewise duplicated from browse.cgi. Given the repository layout, these files cannot easily share this code.
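For reference, the duplicated read_config_file boils down to this env-file trick; extracting it into a small shared helper would be the obvious fix if the layout allowed it. This is a standalone sketch, and read_env_file is a hypothetical name:

```python
import configparser


def read_env_file(text: str, section: str) -> configparser.ConfigParser:
    # A key=value env file has no section header, so wrap it in a
    # synthetic [section] and strip the double quotes that
    # configparser would otherwise keep in the values.
    config = configparser.ConfigParser()
    config.read_string(f"[{section}]\n" + text.replace('"', ""))
    return config


cfg = read_env_file('INFLUXDB_PORT="8086"\n', "influx")
print(cfg["influx"]["influxdb_port"])  # option names are lowercased
```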

 - The InfluxDB integration remains untested. Local testing (e.g. in containers) has not gone well due to API differences between InfluxDB v1 and v2. I've mimicked the data structure the metrics script uses, but we should smoke-test the data before calling this done.
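As a stopgap before an end-to-end test, the point structure handed to write_points() can at least be validated locally. This sketch assumes InfluxDB v1 semantics; the sample tag values are illustrative:

```python
def check_point(point: dict) -> bool:
    # InfluxDB v1's write_points() expects a measurement name,
    # numeric field values, and string-valued tags.
    assert isinstance(point["measurement"], str) and point["measurement"]
    assert point["fields"], "at least one field is required"
    assert all(
        isinstance(v, (int, float)) and not isinstance(v, bool)
        for v in point["fields"].values()
    )
    assert all(isinstance(v, str) for v in point.get("tags", {}).values())
    return True


sample = {
    "measurement": "autopkgtest_arch_release_numpkgs",
    "fields": {"count": 1234},
    "tags": {"release": "noble", "arch": "amd64", "instance": "staging"},
}
print(check_point(sample))  # True
```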

 - I've picked /tmp as the cache location for the stats results, but this may not be the best place for it.
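One alternative, assuming the unit file can carry it: systemd's CacheDirectory= creates a service-owned directory under /var/cache, which survives the periodic /tmp cleanup. Any reader of the cache (e.g. browse.cgi) would need to agree on the same path. A sketch, not the shipped unit:

```ini
[Service]
Type=oneshot
User=ubuntu
# systemd creates /var/cache/autopkgtest-stats owned by the unit's user
CacheDirectory=autopkgtest-stats
ExecStart=/home/ubuntu/autopkgtest-cloud/tools/stats
```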

 - The systemd unit's run interval is currently 6 hours. This is most likely overkill given how slowly some of these stats change, but it ensures results are populated quickly when there is no cache.
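If the interval needs tuning later, it lives in one place in the timer; Persistent=true would additionally fire a missed iteration at startup, so the cache still fills promptly after downtime. A sketch:

```ini
[Timer]
OnCalendar=*-*-* */6:00:00
# Run a missed iteration immediately on startup so the cache
# is populated even after downtime
Persistent=true
```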
-- 
Your team Canonical's Ubuntu QA is requested to review the proposed merge of ~uralt/autopkgtest-cloud:stats-refactor into autopkgtest-cloud:master.
diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/stats b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/stats
new file mode 100755
index 0000000..cb0a9f1
--- /dev/null
+++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/stats
@@ -0,0 +1,261 @@
+#!/usr/bin/python3
+
+import configparser
+import json
+import logging
+import os
+import pathlib
+import sqlite3
+import typing
+
+import distro_info
+from influxdb import InfluxDBClient
+
+INFLUXDB_CONTEXT = os.environ["INFLUXDB_CONTEXT"]
+INFLUXDB_DATABASE = os.environ["INFLUXDB_DATABASE"]
+INFLUXDB_HOSTNAME = os.environ["INFLUXDB_HOSTNAME"]
+INFLUXDB_PASSWORD = os.environ["INFLUXDB_PASSWORD"]
+INFLUXDB_PORT = os.environ["INFLUXDB_PORT"]
+INFLUXDB_USERNAME = os.environ["INFLUXDB_USERNAME"]
+
+logger = logging.getLogger(__name__)
+logging.basicConfig(level=logging.INFO)
+
+
+def read_config_file(
+    filepath: typing.Union[str, pathlib.Path],
+    cfg_key: typing.Optional[str] = None,
+):
+    """
+    Reads a given config file, whether it be a key=value env file or
+    properly formatted config file
+    :param filepath
+        Path to config file. Can be a string or pathlib.Path
+    :type filepath: ``type[str | pathlib.Path]``
+    :param cfg_key
+        Variable only necessary when parsing a key=value env file.
+        indicates the first key to be used e.g. cfg["cfg_key"]["env_file_value"]
+        This is a configparser "section" name.
+    :type cfg_key ``str``
+    """
+    config = configparser.ConfigParser()
+    if cfg_key is None:
+        with open(filepath, "r") as f:
+            config.read_file(f)
+        return config
+    else:
+        with open(filepath, "r") as fp:
+            config.read_string(
+                ("[%s]\n" % cfg_key) + fp.read().replace('"', "")
+            )  # strip quotes, which read_string would otherwise keep
+        return config
+
+
+def get_autopkgtest_db_conn():
+    cp = read_config_file("/home/ubuntu/autopkgtest-cloud.conf")
+    db_file = cp["web"]["database_ro"]
+
+    return sqlite3.connect(
+        "file:%s?mode=ro" % db_file, uri=True, check_same_thread=False
+    )
+
+
+def get_release_arches(db_con):
+    """Determine available releases and architectures
+
+    Return release → [arch] dict.
+    """
+    udi = distro_info.UbuntuDistroInfo()
+    all_ubuntu_releases = udi.all
+
+    supported_ubuntu_release = sorted(
+        set(udi.supported() + udi.supported_esm()),
+        key=all_ubuntu_releases.index,
+    )
+
+    release_arches = {}
+    releases = []
+    for row in db_con.execute("SELECT DISTINCT release from test"):
+        if row[0] in supported_ubuntu_release:
+            releases.append(row[0])
+    for r in releases:
+        for row in db_con.execute(
+            "SELECT DISTINCT arch from test WHERE release=?", (r,)
+        ):
+            release_arches.setdefault(r, []).append(row[0])
+    return release_arches
+
+
+def get_source_versions(db_con, release):
+    """Return srcpkg → current_version mapping for given release"""
+
+    srcs = {}
+    for pkg, ver in db_con.execute(
+        "SELECT package, version FROM current_version WHERE release = ?",
+        (release,),
+    ):
+        srcs[pkg] = ver
+    return srcs
+
+
+def success_count_for_release_and_arch(db, release, arch, src_versions):
+    """Return number of packages with tests that pass"""
+
+    count = 0
+
+    # we consider the latest successful run of a package's current version with
+    # triggers for current packages; if none exist (i. e. tests generally fail,
+    # but succeeded for a trigger that is not published), don't count it as
+    # success
+    cur_pkg = None
+    # pylint: disable=unused-variable
+    for pkg, triggers, code in db.execute(
+        "SELECT test.package, triggers, exitcode "
+        "FROM test, result, current_version "
+        "WHERE test.id == result.test_id AND test.release=? AND arch=? "
+        "  AND test.package = current_version.package "
+        "  AND test.release = current_version.release "
+        "  AND result.version = current_version.version "
+        "  AND (exitcode = 0 OR exitcode = 2 OR exitcode = 8) "
+        "ORDER BY test.package, run_id DESC",
+        (release, arch),
+    ):
+        # start of a new package block?
+        if pkg != cur_pkg:
+            # logging.debug('new package start: %s [%s] %i', pkg, triggers, code)
+            cur_pkg = pkg
+            found_valid = False
+        elif found_valid:
+            # we found a valid success, skip over the older test results of
+            # that package
+            # logging.debug('ignored older result: %s [%s] %i', pkg, triggers, code)
+            continue
+        # logging.debug('considered result: %s [%s] %i', pkg, triggers, code)
+        # weed out non-current triggers
+        for trigger in triggers.split():
+            src, ver = trigger.split("/")
+            # it can happen that src_versions does not have a trigger source
+            # package if that trigger source got removed in the final release
+            if src_versions.get(src) != ver:
+                # pylint: disable=line-too-long
+                # logging.debug('%s/%s/%s: ignoring non-current trigger %s', release, arch, pkg, trigger)
+                continue
+        # if we arrive here, we have a passing test result for pkg with current
+        # triggers, so count it
+        # logging.debug('counting result: %s [%s] %i', pkg, triggers, code)
+        count += 1
+        # ... and ignore older results for that package
+        found_valid = True
+
+    return count
+
+
+def calc_stats():
+    logger.debug("Connecting to DB")
+    db_con = get_autopkgtest_db_conn()
+    if not db_con:
+        logger.debug("Unable to get DB con")
+        exit(1)
+    logger.debug("Getting release arches")
+    release_arches = get_release_arches(db_con)
+
+    data = {}
+    for release in release_arches:  # pylint: disable=consider-using-dict-items
+        data[release] = {}
+        for arch in release_arches[release]:
+            data[release][arch] = {}
+
+    logger.debug("Getting package count per arch")
+    for release, arch, numpkgs in db_con.execute(
+        "SELECT release, arch, COUNT(DISTINCT package) "
+        "FROM test "
+        "GROUP BY release, arch"
+    ):
+        if numpkgs > 1:
+            try:
+                data[release][arch]["numpkgs"] = numpkgs
+            except KeyError:
+                pass
+
+    logger.debug("Getting pass info for packages")
+    for release, arch, key, numruns in db_con.execute(
+        "SELECT release, arch, "
+        "       CASE WHEN exitcode IN (0,2,8) "
+        "           THEN 'passruns' ELSE 'failruns' END exit, "
+        "       COUNT(run_id) "
+        "FROM result LEFT JOIN test ON result.test_id = test.id "
+        "GROUP BY release, arch, exit"
+    ):
+        try:
+            data[release][arch][key] = (
+                data[release][arch].get(key, 0) + numruns
+            )
+        except KeyError:
+            pass
+
+    logger.debug("Collating stats")
+    for release in release_arches:  # pylint: disable=consider-using-dict-items
+        sources = get_source_versions(db_con, release)
+        for arch in release_arches[release]:
+            data[release][arch][
+                "numpkgspass"
+            ] = success_count_for_release_and_arch(
+                db_con, release, arch, sources
+            )
+
+    logger.debug("Writing results")
+    # TODO: consider replacing the file location
+    with open("/tmp/autopkgtest_stats.json", "w") as f:
+        json.dump(data, f)
+
+    return data
+
+
+def make_submission(data):
+    measurement = "autopkgtest_arch_release"
+    out = []
+    for release in data:
+        for arch in data[release]:
+            release_arch_data = data[release][arch]
+            for field in ["numpkgs", "numpkgspass", "failruns", "passruns"]:
+                m = {
+                    "measurement": f"{measurement}_{field}",
+                    "fields": {"count": release_arch_data[field]},
+                    "tags": {
+                        "release": release,
+                        "arch": arch,
+                        "instance": INFLUXDB_CONTEXT,
+                    },
+                }
+                out.append(m)
+
+            pass_percent = round(
+                (
+                    release_arch_data["numpkgspass"]
+                    / release_arch_data["numpkgs"]
+                )
+                * 100,
+                2,
+            )
+            m = {
+                "measurement": measurement,
+                "fields": {"percentage": pass_percent},
+                "tags": {
+                    "release": release,
+                    "arch": arch,
+                    "instance": INFLUXDB_CONTEXT,
+                },
+            }
+            out.append(m)
+    return out
+
+
+if __name__ == "__main__":
+    data = calc_stats()
+    influx_client = InfluxDBClient(
+        INFLUXDB_HOSTNAME,
+        INFLUXDB_PORT,
+        INFLUXDB_USERNAME,
+        INFLUXDB_PASSWORD,
+        INFLUXDB_DATABASE,
+    )
+    influx_client.write_points(make_submission(data))
diff --git a/charms/focal/autopkgtest-cloud-worker/units/stats.service b/charms/focal/autopkgtest-cloud-worker/units/stats.service
new file mode 100644
index 0000000..77c9f1b
--- /dev/null
+++ b/charms/focal/autopkgtest-cloud-worker/units/stats.service
@@ -0,0 +1,10 @@
+[Unit]
+Description=Collect & submit package stats
+
+[Service]
+Type=oneshot
+User=ubuntu
+Group=ubuntu
+EnvironmentFile=/home/ubuntu/influx.cred
+ExecStart=/home/ubuntu/autopkgtest-cloud/tools/stats
+TimeoutStartSec=10m
diff --git a/charms/focal/autopkgtest-cloud-worker/units/stats.timer b/charms/focal/autopkgtest-cloud-worker/units/stats.timer
new file mode 100644
index 0000000..2163032
--- /dev/null
+++ b/charms/focal/autopkgtest-cloud-worker/units/stats.timer
@@ -0,0 +1,8 @@
+[Unit]
+Description=collect package stats (timer)
+
+[Timer]
+OnCalendar=*-*-* */6:00:00
+
+[Install]
+WantedBy=autopkgtest.target
diff --git a/charms/focal/autopkgtest-web/webcontrol/browse.cgi b/charms/focal/autopkgtest-web/webcontrol/browse.cgi
index e942f7c..aab55e5 100755
--- a/charms/focal/autopkgtest-web/webcontrol/browse.cgi
+++ b/charms/focal/autopkgtest-web/webcontrol/browse.cgi
@@ -229,58 +229,6 @@ def get_source_versions(db, release):
     return srcs
 
 
-def success_count_for_release_and_arch(db, release, arch, src_versions):
-    """Return number of packages with tests that pass"""
-
-    count = 0
-
-    # we consider the latest successful run of a package's current version with
-    # triggers for current packages; if none exist (i. e. tests generally fail,
-    # but succeeded for a trigger that is not published), don't count it as
-    # success
-    cur_pkg = None
-    # pylint: disable=unused-variable
-    for pkg, triggers, code in db.execute(
-        "SELECT test.package, triggers, exitcode "
-        "FROM test, result, current_version "
-        "WHERE test.id == result.test_id AND test.release=? AND arch=? "
-        "  AND test.package = current_version.package "
-        "  AND test.release = current_version.release "
-        "  AND result.version = current_version.version "
-        "  AND (exitcode = 0 OR exitcode = 2 OR exitcode = 8) "
-        "ORDER BY test.package, run_id DESC",
-        (release, arch),
-    ):
-        # start of a new package block?
-        if pkg != cur_pkg:
-            # logging.debug('new package start: %s [%s] %i', pkg, triggers, code)
-            cur_pkg = pkg
-            found_valid = False
-        elif found_valid:
-            # we found a valid success, skip over the older test results of
-            # that package
-            # logging.debug('ignored older result: %s [%s] %i', pkg, triggers, code)
-            continue
-        # logging.debug('considered result: %s [%s] %i', pkg, triggers, code)
-        # weed out non-current triggers
-        for trigger in triggers.split():
-            src, ver = trigger.split("/")
-            # it can happen that src_versions does not have a trigger source
-            # pacakge if that trigger source got removed in the final release
-            if src_versions.get(src) != ver:
-                # pylint: disable=line-too-long
-                # logging.debug('%s/%s/%s: ignoring non-current trigger %s', release, arch, pkg, trigger)
-                continue
-        # if we arrive here, we have a passing test result for pkg with current
-        # triggers, so count it
-        # logging.debug('counting result: %s [%s] %i', pkg, triggers, code)
-        count += 1
-        # ... and ignore older results for that package
-        found_valid = True
-
-    return count
-
-
 def db_has_result_requester_idx(cursor: sqlite3.Cursor):
     for row in cursor.execute("PRAGMA index_list('result')"):
         if row["name"] == "result_requester_idx":
@@ -982,50 +930,13 @@ def testlist():
 def statistics():
     release_arches = get_release_arches()
 
-    # ensure we don't run into KeyErrors in the jinja template
-    data = {}
-    for release in release_arches:  # pylint: disable=consider-using-dict-items
-        data[release] = {}
-        for arch in release_arches[release]:
-            data[release][arch] = {}
-
-    # number of packages with tests
-    for release, arch, numpkgs in db_con.execute(
-        "SELECT release, arch, COUNT(DISTINCT package) "
-        "FROM test "
-        "GROUP BY release, arch"
-    ):
-        if numpkgs > 1:
-            try:
-                data[release][arch]["numpkgs"] = numpkgs
-            except KeyError:
-                pass
-
-    # number of passed/failed test runs
-    for release, arch, key, numruns in db_con.execute(
-        "SELECT release, arch, "
-        "       CASE WHEN exitcode IN (0,2,8) "
-        "            THEN 'passruns' ELSE 'failruns' END exit, "
-        "       COUNT(run_id) "
-        "FROM result LEFT JOIN test ON result.test_id=test.id "
-        "GROUP BY release, arch, exit"
-    ):
-        try:
-            data[release][arch][key] = (
-                data[release][arch].get(key, 0) + numruns
-            )
-        except KeyError:
-            pass
-
-    # number of packages with tests that pass
-    for release in release_arches:  # pylint: disable=consider-using-dict-items
-        sources = get_source_versions(db_con, release)
-        for arch in release_arches[release]:
-            data[release][arch][
-                "numpkgspass"
-            ] = success_count_for_release_and_arch(
-                db_con, release, arch, sources
-            )
+    # TODO: consider putting this someplace else, centralize
+    #       location setting in config file
+    try:
+        with open("/tmp/autopkgtest_stats.json", "r") as f:
+            data = json.load(f)
+    except FileNotFoundError:
+        data = None
 
     return render(
         "browse-statistics.html", release_arches=release_arches, data=data
diff --git a/charms/focal/autopkgtest-web/webcontrol/templates/browse-statistics.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-statistics.html
index d1b0690..fbfd468 100644
--- a/charms/focal/autopkgtest-web/webcontrol/templates/browse-statistics.html
+++ b/charms/focal/autopkgtest-web/webcontrol/templates/browse-statistics.html
@@ -2,51 +2,55 @@
 {% block content %}
   <h1 class="page-header">Statistics</h1>
 
-  {% for r in release_arches %}
-    <h3 class="page-header">{{ r.capitalize() }}</h3>
-
-    <table class="table" style="width: auto">
-      <tr>
-        <th>architecture</th>
-        {% for arch in release_arches[r]|sort %}<th>{{arch}}</th>{% endfor %}
-      </tr>
-
-      <tr>
-        <th>#packages with tests</th>
-        {% for arch in release_arches[r]|sort %}<td>{{data[r][arch]['numpkgs']}}</td>{% endfor %}
-      </tr>
-
-      <tr>
-        <th>#packages with passing tests</th>
-        {% for arch in release_arches[r]|sort %}
-          <td>
-            {% if data[r][arch]['numpkgspass'] %}{{data[r][arch]['numpkgspass']}}{% endif %}
-          </td>
-        {% endfor %}
-      </tr>
-
-      <tr>
-        <th>pass rate</th>
-        {% for arch in release_arches[r]|sort %}
-          <td>
-            {% if data[r][arch]['numpkgs'] %}
-              {{'%.1f' % (data[r][arch]['numpkgspass'] * 100 / data[r][arch]['numpkgs'])}}%
-            {% endif %}
-          </td>
-        {% endfor %}
-      </tr>
-
-      <tr>
-        <th>#passed test runs</th>
-        {% for arch in release_arches[r]|sort %}<td>{{data[r][arch]['passruns']}}</td>{% endfor %}
-      </tr>
-
-      <tr>
-        <th>#failed test runs</th>
-        {% for arch in release_arches[r]|sort %}<td>{{data[r][arch]['failruns']}}</td>{% endfor %}
-      </tr>
-    </table>
-
-  {% endfor %}
+  {% if data %}
+    {% for r in release_arches %}
+      <h3 class="page-header">{{ r.capitalize() }}</h3>
+
+      <table class="table" style="width: auto">
+        <tr>
+          <th>architecture</th>
+          {% for arch in release_arches[r]|sort %}<th>{{arch}}</th>{% endfor %}
+        </tr>
+
+        <tr>
+          <th>#packages with tests</th>
+          {% for arch in release_arches[r]|sort %}<td>{{data[r][arch]['numpkgs']}}</td>{% endfor %}
+        </tr>
+
+        <tr>
+          <th>#packages with passing tests</th>
+          {% for arch in release_arches[r]|sort %}
+            <td>
+              {% if data[r][arch]['numpkgspass'] %}{{data[r][arch]['numpkgspass']}}{% endif %}
+            </td>
+          {% endfor %}
+        </tr>
+
+        <tr>
+          <th>pass rate</th>
+          {% for arch in release_arches[r]|sort %}
+            <td>
+              {% if data[r][arch]['numpkgs'] %}
+                {{'%.1f' % (data[r][arch]['numpkgspass'] * 100 / data[r][arch]['numpkgs'])}}%
+              {% endif %}
+            </td>
+          {% endfor %}
+        </tr>
+
+        <tr>
+          <th>#passed test runs</th>
+          {% for arch in release_arches[r]|sort %}<td>{{data[r][arch]['passruns']}}</td>{% endfor %}
+        </tr>
+
+        <tr>
+          <th>#failed test runs</th>
+          {% for arch in release_arches[r]|sort %}<td>{{data[r][arch]['failruns']}}</td>{% endfor %}
+        </tr>
+      </table>
+
+    {% endfor %}
+  {% else %}
+    <p>There are currently no statistics available. They are regenerated every 6 hours.</p>
+  {% endif %}
 
 {% endblock content %}