[Merge] lp:~jtv/launchpad/db-bug-244328 into lp:launchpad/db-devel

 

Jeroen T. Vermeulen has proposed merging lp:~jtv/launchpad/db-bug-244328 into lp:launchpad/db-devel.

Requested reviews:
  Launchpad code reviewers (launchpad-reviewers)
Related bugs:
  Bug #244328 in Launchpad itself: "use PublishingTunableLoop in obsolete-distroseries.py and start-rebuild.py"
  https://bugs.launchpad.net/launchpad/+bug/244328

For more details, see:
https://code.launchpad.net/~jtv/launchpad/db-bug-244328/+merge/56310

= Summary =

Clean up some memory micro-management in the publishing machinery.

Storm used to come with a tiny LRU cache.  Launchpad overrode it with the StupidCache, which just kept everything in memory until something broke.

In order to survive, our most memory-intensive scripts at the time needed to tweak ORM caches and garbage collection themselves: explicit GC runs, cache invalidations, all the stuff your mother told you nice kids didn't do.
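
For a sense of what that looked like, here is roughly the clear_cache() helper that this branch deletes from domination.py (a sketch reconstructed from the removed code in the diff below):

{{{
import gc

from canonical.database.sqlbase import (
    clear_current_connection_cache,
    flush_database_updates,
    )


def clear_cache():
    """Flush pending updates and clear the connection cache."""
    # Push any pending ORM changes to the database first, so nothing is
    # lost when the cache is thrown away...
    flush_database_updates()
    # ...then drop every object cached on the current connection...
    clear_current_connection_cache()
    # ...and run a garbage collection pass to reclaim what was freed.
    gc.collect()
}}}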

The introduction of a generational cache in Storm made all of that unnecessary, but we didn't bother to clean it out of the code.

Then we rewrote the publish-ftpmaster script in Python.  It used to be a shell script that called various Python scripts.  With the whole thing now running in a single process, every explicit GC run in a sub-script has to go through all of the main script's live objects, and a single cache invalidation in a sub-script tosses out the main script's entire ORM cache.  The Python version of the script seems a lot slower than the old one, and all that explicit memory management may be the cause.
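
To make the sharing concrete: the batch helper being removed here invalidates the whole main store and forces a collection, which now also hits the caller (sketch lifted from the PublishingTunableLoop code deleted below; the imports are the ones the old code used):

{{{
import gc

from zope.component import getUtility

from canonical.launchpad.webapp.interfaces import (
    DEFAULT_FLAVOR,
    IStoreSelector,
    MAIN_STORE,
    )

# With everything in one process, the sub-script and the main script
# share the main store, so this drops the caller's cached objects too...
main_store = getUtility(IStoreSelector).get(MAIN_STORE, DEFAULT_FLAVOR)
main_store.invalidate()
# ...and a full collection has to examine the whole process's objects.
gc.collect()
}}}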


== Proposed fix ==

Remove some explicit cache flushes, as well as the process_in_batches function.  Note that this was an unusual use of the loop tuner in that it did not actually commit transactions; all it added was debug output and explicit memory management, so I was able to replace it with simple loops.
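
For example, in deathrow.py the replacement boils down to this (condensed from the diff below):

{{{
# Before: route the work through the loop tuner, whose batching only
# added debug output plus cache/GC housekeeping between chunks.
def check_source(pub_record):
    checkPubRecord(pub_record, SourcePackagePublishingHistory)

process_in_batches(
    condemned_source_files, check_source, self.logger,
    minimum_chunk_size=500)

# After: a plain loop does the same work, minus the housekeeping.
for pub_record in condemned_source_files:
    checkPubRecord(pub_record, SourcePackagePublishingHistory)
}}}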


== Pre-implementation notes ==

Discussed with Julian; we'll be trying this out on dogfood.  We'll probably want to land the change even if it makes things mildly worse, just for the sake of the cleanup.  Circumstances have changed, and even if it turns out that we still need optimizations of this kind, we're probably better off re-inventing the best fit for the new situation.


== Implementation details ==

The deathrow test more or less accidentally relies on debug output from process_in_batches, so its expected output needed updating.


== Tests ==

Nothing specific, but deathrow.txt was affected.
{{{
./bin/test -vvc lp.archivepublisher -t deathrow
}}}


== Demo and Q/A ==

The former Soyuz chaps have built up the experience needed to evaluate the performance characteristics; they'll have to take a look.  Functionally, nothing changes.


= Launchpad lint =

I left a bit of lint in place.  I'm not sure any of these cases are worth fixing.


Checking for conflicts and issues in changed files.

Linting changed files:
  lib/lp/archivepublisher/deathrow.py
  lib/lp/archivepublisher/domination.py
  lib/lp/soyuz/scripts/publishdistro.py
  lib/lp/soyuz/scripts/buildd.py
  lib/lp/archivepublisher/tests/deathrow.txt
  lib/lp/archivepublisher/model/ftparchive.py
  lib/lp/archivepublisher/utils.py

./lib/lp/soyuz/scripts/publishdistro.py
     216: E501 line too long (80 characters)
     216: Line exceeds 78 characters.
./lib/lp/soyuz/scripts/buildd.py
      33: E302 expected 2 blank lines, found 1
./lib/lp/archivepublisher/tests/deathrow.txt
       1: narrative uses a moin header.
      32: narrative uses a moin header.
-- 
https://code.launchpad.net/~jtv/launchpad/db-bug-244328/+merge/56310
Your team Launchpad code reviewers is requested to review the proposed merge of lp:~jtv/launchpad/db-bug-244328 into lp:launchpad/db-devel.
=== modified file 'lib/lp/archivepublisher/deathrow.py'
--- lib/lp/archivepublisher/deathrow.py	2011-03-04 15:42:09 +0000
+++ lib/lp/archivepublisher/deathrow.py	2011-04-05 09:58:41 +0000
@@ -19,7 +19,6 @@
     getPubConfig,
     )
 from lp.archivepublisher.diskpool import DiskPool
-from lp.archivepublisher.utils import process_in_batches
 from lp.soyuz.enums import ArchivePurpose
 from lp.soyuz.interfaces.publishing import (
     IBinaryPackagePublishingHistory,
@@ -67,6 +66,7 @@
     removal in the publisher tables, and if they are no longer referenced
     by other packages.
     """
+
     def __init__(self, archive, diskpool, logger):
         self.archive = archive
         self.diskpool = diskpool
@@ -210,6 +210,12 @@
         this will result in the files being removed if they're not otherwise
         in use.
         """
+        # Avoid circular imports.
+        from lp.soyuz.model.publishing import (
+            BinaryPackagePublishingHistory,
+            SourcePackagePublishingHistory,
+            )
+
         bytes = 0
         condemned_files = set()
         condemned_records = set()
@@ -260,26 +266,11 @@
                 condemned_records.add(pub_file.publishing_record)
 
         # Check source and binary publishing records.
-        def check_source(pub_record):
-            # Avoid circular imports.
-            from lp.soyuz.model.publishing import (
-                SourcePackagePublishingHistory)
+        for pub_record in condemned_source_files:
             checkPubRecord(pub_record, SourcePackagePublishingHistory)
-
-        process_in_batches(
-            condemned_source_files, check_source, self.logger,
-            minimum_chunk_size=500)
-
-        def check_binary(pub_record):
-            # Avoid circular imports.
-            from lp.soyuz.model.publishing import (
-                BinaryPackagePublishingHistory)
+        for pub_record in condemned_binary_files:
             checkPubRecord(pub_record, BinaryPackagePublishingHistory)
 
-        process_in_batches(
-            condemned_binary_files, check_binary, self.logger,
-            minimum_chunk_size=500)
-
         self.logger.info(
             "Removing %s files marked for reaping" % len(condemned_files))
 
@@ -310,4 +301,3 @@
                           len(condemned_records))
         for record in condemned_records:
             record.dateremoved = UTC_NOW
-

=== modified file 'lib/lp/archivepublisher/domination.py'
--- lib/lp/archivepublisher/domination.py	2010-12-19 22:47:25 +0000
+++ lib/lp/archivepublisher/domination.py	2011-04-05 09:58:41 +0000
@@ -54,7 +54,6 @@
 
 from datetime import timedelta
 import functools
-import gc
 import operator
 
 import apt_pkg
@@ -62,7 +61,6 @@
 
 from canonical.database.constants import UTC_NOW
 from canonical.database.sqlbase import (
-    clear_current_connection_cache,
     flush_database_updates,
     sqlvalues,
     )
@@ -82,15 +80,6 @@
 STAY_OF_EXECUTION = 1
 
 
-def clear_cache():
-    """Flush SQLObject updates and clear the cache."""
-    # Flush them anyway, should basically be a noop thanks to not doing
-    # lazyUpdate.
-    flush_database_updates()
-    clear_current_connection_cache()
-    gc.collect()
-
-
 # Ugly, but works
 apt_pkg.InitSystem()
 
@@ -282,7 +271,7 @@
             # always equals to "scheduleddeletiondate - quarantine".
             pub_record.datemadepending = UTC_NOW
 
-    def judgeAndDominate(self, dr, pocket, do_clear_cache=True):
+    def judgeAndDominate(self, dr, pocket):
         """Perform the domination and superseding calculations
 
         It only works across the distroseries and pocket specified.
@@ -326,9 +315,6 @@
                 bpph_location_clauses)
             self.debug("Dominating binaries...")
             self._dominatePublications(self._sortPackages(binaries, False))
-            if do_clear_cache:
-                self.debug("Flushing SQLObject cache.")
-                clear_cache()
 
         self.debug("Performing domination across %s/%s (Source)" %
                    (dr.name, pocket.title))

=== modified file 'lib/lp/archivepublisher/model/ftparchive.py'
--- lib/lp/archivepublisher/model/ftparchive.py	2011-03-04 11:36:03 +0000
+++ lib/lp/archivepublisher/model/ftparchive.py	2011-04-05 09:58:41 +0000
@@ -20,7 +20,6 @@
     IStoreSelector,
     MAIN_STORE,
     )
-from lp.archivepublisher.utils import process_in_batches
 from lp.registry.interfaces.pocket import PackagePublishingPocket
 from lp.registry.model.sourcepackagename import SourcePackageName
 from lp.services.command_spawner import (
@@ -436,17 +435,13 @@
         # See `PublishingTunableLoop`.
         self.log.debug("Calculating source overrides")
 
-        def update_source_override(pub_details):
-            updateOverride(*pub_details)
-        process_in_batches(
-            source_publications, update_source_override, self.log)
+        for pub in source_publications:
+            updateOverride(*pub)
 
         self.log.debug("Calculating binary overrides")
 
-        def update_binary_override(pub_details):
-            updateOverride(*pub_details)
-        process_in_batches(
-            binary_publications, update_binary_override, self.log)
+        for pub in binary_publications:
+            updateOverride(*pub)
 
         # Now generate the files on disk...
         for distroseries in overrides:
@@ -673,17 +668,13 @@
         # See `PublishingTunableLoop`.
         self.log.debug("Calculating source filelist.")
 
-        def update_source_filelist(file_details):
+        for file_details in sourcefiles:
             updateFileList(*file_details)
-        process_in_batches(
-            sourcefiles, update_source_filelist, self.log)
 
         self.log.debug("Calculating binary filelist.")
 
-        def update_binary_filelist(file_details):
+        for file_details in binaryfiles:
             updateFileList(*file_details)
-        process_in_batches(
-            binaryfiles, update_binary_filelist, self.log)
 
         for suite, components in filelist.iteritems():
             self.log.debug("Writing file lists for %s" % suite)

=== modified file 'lib/lp/archivepublisher/tests/deathrow.txt'
--- lib/lp/archivepublisher/tests/deathrow.txt	2010-12-22 20:46:21 +0000
+++ lib/lp/archivepublisher/tests/deathrow.txt	2011-04-05 09:58:41 +0000
@@ -246,7 +246,6 @@
     DEBUG Checking superseded_666.dsc (02129bb861061d1a052c592e2dc6b383)
     DEBUG Checking obsolete_666.dsc (02129bb861061d1a052c592e2dc6b383)
     ...
-    DEBUG Batch ...
     INFO Removing 7 files marked for reaping
     DEBUG Removing superseded/superseded_666.dsc from main
     DEBUG Removing superseded-ignored/superseded-bin_666_i386.deb from main
@@ -343,7 +342,6 @@
     DEBUG Cannot remove.
     DEBUG Checking stuck-bin_666_i386.deb (21c2e59531c8710156d34a3c30ac81d5)
     DEBUG Already verified.
-    DEBUG Batch ...
     INFO Removing 0 files marked for reaping
     INFO Total bytes freed: 0
     DEBUG Marking 0 condemned packages as removed.
@@ -369,7 +367,6 @@
     DEBUG Already verified.
     DEBUG Checking stuck-bin_666_i386.deb (21c2e59531c8710156d34a3c30ac81d5)
     DEBUG Already verified.
-    DEBUG Batch ...
     INFO Removing 1 files marked for reaping
     DEBUG Removing removed-ignored/stuck-bin_666_i386.deb from main
     INFO Total bytes freed: 1

=== removed file 'lib/lp/archivepublisher/tests/process-in-batches.txt'
--- lib/lp/archivepublisher/tests/process-in-batches.txt	2010-12-22 20:46:21 +0000
+++ lib/lp/archivepublisher/tests/process-in-batches.txt	1970-01-01 00:00:00 +0000
@@ -1,144 +0,0 @@
-= Processing long iterations in the publishing domain  =
-
-The process_in_batches() function was designed to allow us to perform
-read-only python domain iterations over a large result set keeping
-memory usage acceptable.
-
-It uses `LoopTuner` to process the given 'task' against batches of
-the also given 'input' result set. After each batch is finished it
-cleans up the ORM caches, eliminating cached references that won't be
-used anymore.
-
-It's important to note that this mechanism is clearly only suitable
-for read-only iterations, any modification done in the database
-records by 'task' will be ignored.
-
-    >>> from lp.archivepublisher.utils import process_in_batches
-
-    >>> from lp.services.log.logger import FakeLogger
-    >>> logger = FakeLogger()
-
-The expected 'input' argument is a `SelectResult`. We will use a
-result set containing all PUBLISHED `BinaryPackagePublishingHistory`
-records.
-
-    >>> from lp.soyuz.model.publishing import (
-    ...     BinaryPackagePublishingHistory)
-    >>> from lp.soyuz.enums import (
-    ...     PackagePublishingStatus)
-
-    >>> input = BinaryPackagePublishingHistory.selectBy(
-    ...     status=PackagePublishingStatus.PUBLISHED, orderBy=['id'])
-
-We will record the number of records to use in later checks
-
-    >>> original_set_size = input.count()
-
-As 'task' we will define a callable that simply extract the binary
-publication name.
-
-    >>> def simple_task(binary_publication):
-    ...     ignore = binary_publication.binarypackagerelease.name
-
-That's all we need, calling process_in_batches() with the specific
-'input', 'task' and a 'logger' instance will run as expected.
-
-It uses the given 'logger' to print information (in debug level) about
-the batches being processed and the amount of resident memory used by
-the process calling it.
-
-By default is starts with 10000 records batches and the LoopTuner will
-adjust it according to the default 'goal_seconds' of 2 s,  when
-necessary. The first batch is more than enough to process all
-available published records in the sampledata.
-
-    >>> process_in_batches(input, simple_task, logger)
-    DEBUG Batch [0...) [... MiB]
-
-
-== Coping with changes in the given input ==
-
-The current callsites use process_in_batches() mainly to dump
-information from all PUBLISHED records in a given context.
-
-Since they are the same parts that publish new packages, we can
-guarantee that the result set size will never increase.
-
-On the other hand, it may decrease while the batches get
-processed. For instance, when a package gets deleted or obsoleted,
-which are always performed by a user, so we can't predict when such
-thing will happen.
-
-Considering that the callsites of process_in_batches() and genuinely
-long-run scripts (see publish-distro.py and process-death-row.py). A
-deletion can happen before the batching procedure gets started or even
-while the batches are being processed.
-
-We will select two arbitrary records from the 'input' result set, one
-will be deleted before starting the batch processing and the other one
-will be deleted while processing the first batch.
-
-    >>> delete_before_id = input[-2].id
-    >>> delete_after_id = input[-5].id
-
-The 'task' to be performed is adjusted to delete the second probing
-record once it's called.
-
-    >>> import transaction
-    >>> from lp.registry.interfaces.person import IPersonSet
-    >>> cprov = getUtility(IPersonSet).getByName('cprov')
-
-    >>> def nasty_task(binary_publication):
-    ...     probe = BinaryPackagePublishingHistory.get(delete_after_id)
-    ...     if probe.datesuperseded is None:
-    ...          probe.requestDeletion(cprov, 'on-going deletion.')
-    ...          transaction.commit()
-    ...          print 'One gone'
-    ...     ignore = binary_publication.binarypackagerelease.name
-
-Deleting the first probing record.
-
-    >>> probe = BinaryPackagePublishingHistory.get(delete_before_id)
-    >>> ignore = probe.requestDeletion(cprov, 'deleted before starting')
-
-    >>> transaction.commit()
-
-    >>> start_set_size = input.count()
-    >>> start_set_size < original_set_size
-    True
-
-process_in_batches() copes just fine with both kinds of on-going
-deletions in the given result set.
-
-    >>> process_in_batches(input, nasty_task, logger, goal_seconds=1,
-    ...                    minimum_chunk_size=10)
-    One gone
-    DEBUG Batch [0..10) [... MiB]
-    DEBUG Batch [10...) [... MiB]
-
-    >>> end_set_size = input.count()
-    >>> end_set_size < start_set_size
-    True
-
-process_in_batches() also copes with extreme cases, as having all input
-vanished between the batches. It might confuse the batch processor,
-because it will notice that "more work" has been done than what is
-actually available to do.
-
-    >>> def very_nasty_task(binary_publication):
-    ...     all_pubs = BinaryPackagePublishingHistory.selectBy(
-    ...         status=PackagePublishingStatus.PUBLISHED)
-    ...     if all_pubs.count() == 0:
-    ...         return
-    ...     for pub in all_pubs:
-    ...          pub.requestDeletion(cprov, 'on-going deletion.')
-    ...     print 'All gone'
-    ...     transaction.commit()
-    ...     ignore = binary_publication.binarypackagerelease.name
-
-The code deftly handles this situation.
-
-    >>> process_in_batches(input, very_nasty_task, logger, goal_seconds=1,
-    ...                    minimum_chunk_size=10)
-    All gone
-    DEBUG Batch [0..10) [... MiB]

=== modified file 'lib/lp/archivepublisher/utils.py'
--- lib/lp/archivepublisher/utils.py	2011-03-24 10:12:54 +0000
+++ lib/lp/archivepublisher/utils.py	2011-04-05 09:58:41 +0000
@@ -1,4 +1,4 @@
-# Copyright 2009 Canonical Ltd.  This software is licensed under the
+# Copyright 2009-2011 Canonical Ltd.  This software is licensed under the
 # GNU Affero General Public License version 3 (see the file LICENSE).
 
 """Miscelaneous functions for publisher."""
@@ -9,33 +9,18 @@
     'PublishingTunableLoop',
     'RepositoryIndexFile',
     'get_ppa_reference',
-    'process_in_batches',
     ]
 
 
 import bz2
-import gc
 import gzip
 from operator import itemgetter
 import os
 import stat
 import tempfile
 
-from zope.component import getUtility
-from zope.interface import implements
-
-from canonical.launchpad.interfaces.looptuner import ITunableLoop
-from canonical.launchpad.utilities.looptuner import LoopTuner
-from canonical.launchpad.webapp.interfaces import (
-    DEFAULT_FLAVOR,
-    IStoreSelector,
-    MAIN_STORE,
-    )
-from lp.services.profile.mem import resident
 from lp.soyuz.enums import ArchivePurpose
-from lp.soyuz.interfaces.archive import (
-    default_name_by_purpose,
-    )
+from lp.soyuz.interfaces.archive import default_name_by_purpose
 
 
 def get_ppa_reference(ppa):
@@ -65,103 +50,6 @@
         logger.debug('%-20s %d' % (name, count))
 
 
-# Here begins the hack. Storm + StupidCache are not helping us iterating
-# huge sets of records. The best result was produced by performing the
-# task in small batches with StupidCache enabled and clearing caches with
-# gc and clear_current_connection_cache. All other tested variations were
-# slower and consumed more memory.
-#
-# 1 StupidCache + clear_current_connection_caches() [this];
-# 2 storm.Cache + clear_current_connection_caches() [no difference];
-# 3 StupidCache + store.invalidate(obj) [references left behind];
-# 4 stormCache + store.invalidate(obj)  [references left behind];
-# 5 No batches [memory exhausted].
-
-# XXX JeroenVermeulen 2011-02-03 bug=244328: That was mid-2008.  We have
-# the GenerationalCache now.  We may not need any of this any more.
-
-class PublishingTunableLoop(object):
-    """An `ITunableLoop` for dealing with huge publishing result sets."""
-
-    implements(ITunableLoop)
-
-    def __init__(self, input, task, logger):
-        self.input = input
-        self.task = task
-        self.logger = logger
-        self.total_updated = 0
-        self.offset = 0
-        self.done = False
-
-    def isDone(self):
-        """See `ITunableLoop`."""
-        return self.done
-
-    def __call__(self, chunk_size):
-        """Run the initialized 'task' with a limited batch of 'input'.
-
-        See `ITunableLoop`.
-        """
-        chunk_size = int(chunk_size)
-        start = self.offset
-        end = start + chunk_size
-
-        # The reason why we listify the sliced ResultSet is because we
-        # cannot very it's size using 'count' (see bug #217644 and note
-        # that it was fixed in storm but not SQLObjectResultSet). However,
-        # It's not exactly a problem considering non-empty set will be
-        # iterated anyway.
-        batch = list(self.input[start:end])
-        if len(batch) == 0:
-            self.done = True
-            return
-
-        for pub in batch:
-            self.offset += 1
-            self.task(pub)
-            self.total_updated += 1
-
-        mem_size = resident() / (2 ** 20)
-        self.logger.debug(
-            "Batch [%d..%d) [%d MiB]" % (start, self.offset, mem_size))
-
-        # Invalidate the whole cache for the main store, this we we will also
-        # get rid of all the foreign keys referred by the publishing records.
-        main_store = getUtility(IStoreSelector).get(
-                MAIN_STORE, DEFAULT_FLAVOR)
-        main_store.invalidate()
-        gc.collect()
-
-        # Extra debug not necessary (unwanted, in fact) in production.
-        # Print the number of 'alive' cache items.
-        # count_alive(getUtility(IZStorm).get('main'), self.logger)
-
-
-def process_in_batches(input, task, logger, goal_seconds=60,
-                       minimum_chunk_size=10000):
-    """Use `LoopTuner` to run the given task in smaller batches.
-
-    Run callable 'task' for item of 'input', but do it in small batches
-    cleaning up the cache after each one is processed.
-
-    See `PublishingTunableLoop` for further details.
-
-    :param input: `SelectResult` to be treated;
-    :param task: callable to be executed for each batch;
-    :param logger: `Logger` intance used to print information
-        (debug level);
-    :param goal_seconds: ideal time to spend processing each batch,
-        defaults to 60 s;
-    :param minimum_chunk_size: minimum number of items to be processed in
-        each batch, defaults to 10000
-    """
-    loop = PublishingTunableLoop(input, task, logger)
-    loop_tuner = LoopTuner(
-        loop, goal_seconds=goal_seconds,
-        minimum_chunk_size=minimum_chunk_size, log=logger)
-    loop_tuner.run()
-
-
 class PlainTempFile:
 
     # Filename suffix.

=== modified file 'lib/lp/soyuz/scripts/buildd.py'
--- lib/lp/soyuz/scripts/buildd.py	2010-12-24 15:47:47 +0000
+++ lib/lp/soyuz/scripts/buildd.py	2011-04-05 09:58:41 +0000
@@ -1,4 +1,4 @@
-# Copyright 2009 Canonical Ltd.  This software is licensed under the
+# Copyright 2009-2011 Canonical Ltd.  This software is licensed under the
 # GNU Affero General Public License version 3 (see the file LICENSE).
 
 """Buildd cronscript classes """
@@ -15,7 +15,6 @@
 from canonical.config import config
 from lp.app.errors import NotFoundError
 from lp.archivepublisher.debversion import Version
-from lp.archivepublisher.utils import process_in_batches
 from lp.buildmaster.enums import BuildStatus
 from lp.registry.interfaces.distribution import IDistributionSet
 from lp.registry.interfaces.series import SeriesStatus
@@ -26,6 +25,7 @@
 from lp.soyuz.interfaces.binarypackagebuild import IBinaryPackageBuildSet
 from lp.soyuz.pas import BuildDaemonPackagesArchSpecific
 
+
 # XXX cprov 2009-04-16: This function should live in
 # lp.registry.interfaces.distroseries. It cannot be done right now
 # because we haven't decided if archivepublisher.debversion will be
@@ -78,6 +78,7 @@
         # our methods.
         class _FakeZTM:
             """A fake transaction manager."""
+
             def commit(self):
                 pass
 
@@ -146,17 +147,13 @@
         self.logger.info(
             "Found %d source(s) published." % sources_published.count())
 
-        def process_source(pubrec):
+        for pubrec in sources_published:
             builds = pubrec.createMissingBuilds(
                 architectures_available=architectures_available,
                 pas_verify=pas_verify, logger=self.logger)
             if len(builds) > 0:
                 self.txn.commit()
 
-        process_in_batches(
-            sources_published, process_source, self.logger,
-            minimum_chunk_size=1000)
-
     def addMissingBuildQueueEntries(self, archseries):
         """Create missing Buildd Jobs. """
         self.logger.info("Scanning for build queue entries that are missing")

=== modified file 'lib/lp/soyuz/scripts/publishdistro.py'
--- lib/lp/soyuz/scripts/publishdistro.py	2011-01-12 15:36:59 +0000
+++ lib/lp/soyuz/scripts/publishdistro.py	2011-04-05 09:58:41 +0000
@@ -8,14 +8,8 @@
     'run_publisher',
     ]
 
-import gc
-
 from zope.component import getUtility
 
-from canonical.database.sqlbase import (
-    clear_current_connection_cache,
-    flush_database_updates,
-    )
 from canonical.launchpad.scripts import (
     logger,
     logger_options,
@@ -106,12 +100,7 @@
     def try_and_commit(description, func, *args):
         try:
             func(*args)
-            log.debug("Committing.")
-            flush_database_updates()
             txn.commit()
-            log.debug("Flushing caches.")
-            clear_current_connection_cache()
-            gc.collect()
         except:
             log.exception("Unexpected exception while %s" % description)
             txn.abort()
@@ -130,7 +119,7 @@
     log.debug("  Distribution: %s" % options.distribution)
     log.debug("    Publishing: %s" % careful_msg(options.careful_publishing))
     log.debug("    Domination: %s" % careful_msg(options.careful_domination))
-    if num_exclusive == 0 :
+    if num_exclusive == 0:
         log.debug("Apt-FTPArchive: %s" % careful_msg(options.careful_apt))
     else:
         log.debug("      Indexing: %s" % careful_msg(options.careful_apt))
@@ -189,7 +178,7 @@
     # Consider only archives that have their "to be published" flag turned on
     # or are pending deletion.
     archives = [
-        archive for archive in archives 
+        archive for archive in archives
         if archive.publish or archive.status == ArchiveStatus.DELETING]
 
     for archive in archives:
@@ -202,7 +191,7 @@
         else:
             log.info("Processing %s" % archive.archive_url)
             publisher = getPublisher(archive, allowed_suites, log)
-        
+
         # Do we need to delete the archive or publish it?
         if archive.status == ArchiveStatus.DELETING:
             if archive.purpose == ArchivePurpose.PPA:
@@ -213,23 +202,27 @@
                     "Deletion of %s skipped: operation not supported on %s"
                     % archive.displayname)
         else:
-            try_and_commit("publishing", publisher.A_publish,
-                           options.careful or options.careful_publishing)
+            try_and_commit(
+                "publishing", publisher.A_publish,
+                options.careful or options.careful_publishing)
             # Flag dirty pockets for any outstanding deletions.
             publisher.A2_markPocketsWithDeletionsDirty()
-            try_and_commit("dominating", publisher.B_dominate,
-                           options.careful or options.careful_domination)
+            try_and_commit(
+                "dominating", publisher.B_dominate,
+                options.careful or options.careful_domination)
 
             # The primary and copy archives use apt-ftparchive to generate the
             # indexes, everything else uses the newer internal LP code.
             if archive.purpose in (ArchivePurpose.PRIMARY, ArchivePurpose.COPY):
-                try_and_commit("doing apt-ftparchive", publisher.C_doFTPArchive,
-                               options.careful or options.careful_apt)
+                try_and_commit(
+                    "doing apt-ftparchive", publisher.C_doFTPArchive,
+                    options.careful or options.careful_apt)
             else:
                 try_and_commit("building indexes", publisher.C_writeIndexes,
                                options.careful or options.careful_apt)
 
-            try_and_commit("doing release files", publisher.D_writeReleaseFiles,
-                           options.careful or options.careful_apt)
+            try_and_commit(
+                "doing release files", publisher.D_writeReleaseFiles,
+                options.careful or options.careful_apt)
 
     log.debug("Ciao")

