
lp:~edwin-grubbs/launchpad/bug-638924-milestone-page-eager-loading into lp:launchpad

 

Edwin Grubbs has proposed merging lp:~edwin-grubbs/launchpad/bug-638924-milestone-page-eager-loading into lp:launchpad.

Requested reviews:
  Launchpad code reviewers (launchpad-reviewers)
Related bugs:
  #638924 Milestone:+index timeouts with many bugs
  https://bugs.launchpad.net/bugs/638924


Summary
-------

This branch fixes timeouts when viewing a distro's milestone +index page
that has a large number of bugs. The number of queries is now constant for
a milestone page on a distro or a project. The milestone page for a project
group will make two additional queries for each of its projects that has a
milestone with a matching name.


Implementation details
----------------------

Added an exclude_conjoined_tasks option to the bug task search, plus a new
BugTaskSet.getPrecachedNonConjoinedBugTasks() method, to eliminate the need
to call getConjoinedMaster() on each bug task found. Also precached all the
bug assignees and their validity. (Validity is used by TALES to decide
whether the person icon should be grayed out.) A usage sketch follows the
file list below.
    lib/lp/bugs/interfaces/bugtask.py
    lib/lp/bugs/model/bugtask.py
    lib/lp/registry/browser/milestone.py
    lib/lp/registry/browser/tests/test_milestone.py
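
A minimal sketch of the new call path, based on the milestone browser code
in the diff below (the helper names come from this branch):

    # Sketch only: mirrors MilestoneView._bugtasks in the diff below.
    from zope.component import getUtility
    from lp.bugs.interfaces.bugtask import IBugTaskSet

    def load_milestone_bugtasks(user, milestone):
        # Conjoined slaves are filtered out in SQL via a LEFT JOIN against
        # the conjoined master, and assignees (with their validity) are
        # precached by a pre-iteration hook, so iterating the result set
        # does not issue one query per bug task.
        return list(
            getUtility(IBugTaskSet).getPrecachedNonConjoinedBugTasks(
                user, milestone))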

Moved the code for precaching various person attributes from the Person
class into the PersonSet class so that it can be more easily reused (a
short sketch of the reused API follows the file list).
    lib/lp/registry/interfaces/person.py
    lib/lp/registry/model/person.py
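
As a rough illustration of the reuse, the call below mirrors the one made
by the bug task search in this diff; only need_validity is exercised here:

    # Sketch only: precache assignees in a single query instead of one
    # query per person, as done in cache_people() in bugtask.py below.
    from zope.component import getUtility
    from lp.registry.interfaces.person import IPersonSet

    def precache_assignees(assignee_ids):
        people = getUtility(IPersonSet).getPrecachedPersonsFromIDs(
            assignee_ids, need_validity=True)
        # Iterating executes the query and populates the property cache on
        # each Person, so later validity checks do not hit the database.
        return list(people)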

Tests
-----

./bin/test -vv -t 'test_milestone|milestone-views.txt'

Demo and Q/A
------------

* Open https://qastaging.launchpad.net/ubuntu/+milestone/ubuntu-9.10/
    * It should not time out, and the query count should be under 40.
* Open https://qastaging.launchpad.net/launchpad-project/+milestone/10.12
    * It should not time out, and the query count should be under 80.
* Open https://qastaging.launchpad.net/launchpad-registry/+milestone/10.12
    * It should not time out, and the query count should be under 40.

Lint
----

Checking for conflicts and issues in changed files.

Linting changed files:
  lib/lp/bugs/interfaces/bugtask.py
  lib/lp/bugs/model/bugtask.py
  lib/lp/registry/browser/milestone.py
  lib/lp/registry/browser/tests/test_milestone.py
  lib/lp/registry/interfaces/person.py
  lib/lp/registry/model/person.py

./lib/lp/registry/interfaces/person.py
     493: E302 expected 2 blank lines, found 1
    2095: E302 expected 2 blank lines, found 3
-- 
https://code.launchpad.net/~edwin-grubbs/launchpad/bug-638924-milestone-page-eager-loading/+merge/42586
Your team Launchpad code reviewers is requested to review the proposed merge of lp:~edwin-grubbs/launchpad/bug-638924-milestone-page-eager-loading into lp:launchpad.
=== modified file 'lib/lp/bugs/interfaces/bugtask.py'
--- lib/lp/bugs/interfaces/bugtask.py	2010-11-23 12:00:31 +0000
+++ lib/lp/bugs/interfaces/bugtask.py	2010-12-03 03:16:30 +0000
@@ -1145,7 +1145,8 @@
                  hardware_owner_is_subscribed_to_bug=False,
                  hardware_is_linked_to_bug=False,
                  linked_branches=None, structural_subscriber=None,
-                 modified_since=None, created_since=None):
+                 modified_since=None, created_since=None,
+                 exclude_conjoined_tasks=False):
 
         self.bug = bug
         self.searchtext = searchtext
@@ -1191,6 +1192,7 @@
         self.structural_subscriber = structural_subscriber
         self.modified_since = modified_since
         self.created_since = created_since
+        self.exclude_conjoined_tasks = exclude_conjoined_tasks
 
     def setProduct(self, product):
         """Set the upstream context on which to filter the search."""
@@ -1530,6 +1532,12 @@
         This takes into account bug subscription filters.
         """
 
+    def getPrecachedNonConjoinedBugTasks(user, milestone):
+        """List of non-conjoined bugtasks targeted to the milestone.
+
+        The assignee and the assignee's validity are precached.
+        """
+
 
 def valid_remote_bug_url(value):
     """Verify that the URL is to a bug to a known bug tracker."""

=== modified file 'lib/lp/bugs/model/bugtask.py'
--- lib/lp/bugs/model/bugtask.py	2010-11-23 14:20:32 +0000
+++ lib/lp/bugs/model/bugtask.py	2010-12-03 03:16:30 +0000
@@ -43,14 +43,11 @@
     Select,
     SQL,
     )
+from storm.info import ClassAlias
 from storm.store import (
     EmptyResultSet,
     Store,
     )
-from storm.zope.interfaces import (
-    IResultSet,
-    ISQLObjectResultSet,
-    )
 from zope.component import getUtility
 from zope.interface import (
     alsoProvides,
@@ -142,6 +139,7 @@
 from lp.registry.interfaces.milestone import IProjectGroupMilestone
 from lp.registry.interfaces.person import (
     IPerson,
+    IPersonSet,
     validate_person,
     validate_public_person,
     )
@@ -1490,16 +1488,23 @@
         from lp.bugs.model.bug import Bug
         from lp.bugs.model.bugbranch import BugBranch
 
-        bug_ids = list(set(bugtask.bugID for bugtask in bugtasks))
+        bug_ids = set(bugtask.bugID for bugtask in bugtasks)
         bug_ids_with_specifications = set(IStore(SpecificationBug).find(
             SpecificationBug.bugID,
             SpecificationBug.bugID.is_in(bug_ids)))
         bug_ids_with_branches = set(IStore(BugBranch).find(
                 BugBranch.bugID, BugBranch.bugID.is_in(bug_ids)))
 
-        # Cache all bugs at once to avoid one query per bugtask. We
-        # could rely on the Storm cache, but this is explicit.
-        bugs = dict(IStore(Bug).find((Bug.id, Bug), Bug.id.is_in(bug_ids)))
+        # Check if the bugs are cached. If not, cache all uncached bugs
+        # at once to avoid one query per bugtask. We could rely on the
+        # Storm cache, but this is explicit.
+        bugs = dict(
+            (bug.id, bug)
+            for bug in IStore(Bug).find(Bug, Bug.id.is_in(bug_ids)).cached())
+        uncached_ids = bug_ids.difference(bug_id for bug_id in bugs)
+        if len(uncached_ids) > 0:
+            bugs.update(dict(IStore(Bug).find((Bug.id, Bug),
+                                              Bug.id.is_in(uncached_ids))))
 
         badge_properties = {}
         for bugtask in bugtasks:
@@ -1665,6 +1670,48 @@
                 where_cond = search_value_to_where_condition(params.milestone)
             extra_clauses.append("BugTask.milestone %s" % where_cond)
 
+            if params.exclude_conjoined_tasks:
+                # Perform a LEFT JOIN to the conjoined master bugtask.
+                # If the conjoined master is not null, it gets filtered
+                # out.
+                ConjoinedMaster = ClassAlias(BugTask, 'ConjoinedMaster')
+                extra_clauses.append("ConjoinedMaster.id IS NULL")
+                if params.milestone.distribution is not None:
+                    current_series = (
+                        params.milestone.distribution.currentseries)
+                    join = LeftJoin(
+                        ConjoinedMaster,
+                        And(ConjoinedMaster.bug == BugTask.bugID,
+                            ConjoinedMaster.distroseries == current_series.id,
+                            ))
+                    join_tables.append((ConjoinedMaster, join))
+                else:
+                    # Prevent import loop.
+                    from lp.registry.model.product import Product
+                    if IProjectGroupMilestone.providedBy(params.milestone):
+                        dev_focus_ids = list(
+                            IStore(Product).find(
+                                Product.development_focusID,
+                                Product.project == params.milestone.target))
+                    elif params.milestone.product is not None:
+                        dev_focus_ids = [
+                            params.milestone.product.development_focusID]
+                    else:
+                        raise AssertionError(
+                            "A milestone must always have either a project, "
+                            "project group, or distribution")
+                    join = LeftJoin(
+                        ConjoinedMaster,
+                        And(ConjoinedMaster.bug == BugTask.bugID,
+                            ConjoinedMaster.productseriesID.is_in(
+                                dev_focus_ids)))
+                    join_tables.append((ConjoinedMaster, join))
+        elif params.exclude_conjoined_tasks:
+            raise ValueError(
+                "BugTaskSearchParam.exclude_conjoined cannot be True if "
+                "BugTaskSearchParam.milestone is not set")
+
+
         if params.project:
             # Prevent circular import problems.
             from lp.registry.model.product import Product
@@ -2309,6 +2356,25 @@
         """See `IBugTaskSet`."""
         return self._search(BugTask.bugID, [], params).result_set
 
+    def getPrecachedNonConjoinedBugTasks(self, user, milestone):
+        """See `IBugTaskSet`."""
+        params = BugTaskSearchParams(
+            user, milestone=milestone,
+            orderby=['status', '-importance', 'id'],
+            omit_dupes=True, exclude_conjoined_tasks=True)
+        non_conjoined_slaves = self.search(params)
+
+        def cache_people(rows):
+            assignee_ids = set(
+                bug_task.assigneeID for bug_task in rows)
+            assignees = getUtility(IPersonSet).getPrecachedPersonsFromIDs(
+                assignee_ids, need_validity=True)
+            # Execute query to load storm cache.
+            list(assignees)
+
+        return DecoratedResultSet(
+            non_conjoined_slaves, pre_iter_hook=cache_people)
+
     def createTask(self, bug, owner, product=None, productseries=None,
                    distribution=None, distroseries=None,
                    sourcepackagename=None,

=== modified file 'lib/lp/registry/browser/milestone.py'
--- lib/lp/registry/browser/milestone.py	2010-11-23 23:22:27 +0000
+++ lib/lp/registry/browser/milestone.py	2010-12-03 03:16:30 +0000
@@ -41,7 +41,6 @@
     precache_permission_for_objects,
     )
 from canonical.launchpad.webapp.breadcrumb import Breadcrumb
-from canonical.launchpad.webapp.interfaces import ILaunchBag
 from canonical.launchpad.webapp.menu import (
     ApplicationMenu,
     ContextMenu,
@@ -57,7 +56,6 @@
     )
 from lp.bugs.browser.bugtask import BugTaskListingItem
 from lp.bugs.interfaces.bugtask import (
-    BugTaskSearchParams,
     IBugTaskSet,
     )
 from lp.registry.browser import (
@@ -249,23 +247,11 @@
     @cachedproperty
     def _bugtasks(self):
         """The list of non-conjoined bugtasks targeted to this milestone."""
-        user = getUtility(ILaunchBag).user
-        params = BugTaskSearchParams(user, milestone=self.context,
-                    orderby=['status', '-importance', 'id'],
-                    omit_dupes=True)
-        tasks = getUtility(IBugTaskSet).search(params)
-        # We could replace all the code below with a simple
-        # >>> [task for task in tasks if task.conjoined_master is None]
-        # But that'd cause one extra hit to the DB for every bugtask returned
-        # by the search above, so we do a single query to get all of a task's
-        # siblings here and use that to find whether or not a given bugtask
-        # has a conjoined master.
-        bugs_and_tasks = getUtility(IBugTaskSet).getBugTasks(
-            [task.bug.id for task in tasks])
-        non_conjoined_slaves = []
-        for task in tasks:
-            if task.getConjoinedMaster(bugs_and_tasks[task.bug]) is None:
-                non_conjoined_slaves.append(task)
+        # Put the results in a list so that iterating over it multiple
+        # times in this method does not make multiple queries.
+        non_conjoined_slaves = list(
+            getUtility(IBugTaskSet).getPrecachedNonConjoinedBugTasks(
+                self.user, self.context))
         # Checking bug permissions is expensive. We know from the query that
         # the user has at least launchpad.View on the bugtasks and their bugs.
         precache_permission_for_objects(

=== modified file 'lib/lp/registry/browser/tests/test_milestone.py'
--- lib/lp/registry/browser/tests/test_milestone.py	2010-10-26 15:47:24 +0000
+++ lib/lp/registry/browser/tests/test_milestone.py	2010-12-03 03:16:30 +0000
@@ -5,9 +5,13 @@
 
 __metaclass__ = type
 
+from textwrap import dedent
+
 from testtools.matchers import LessThan
 from zope.component import getUtility
 
+from canonical.config import config
+from canonical.launchpad.interfaces.launchpad import ILaunchpadCelebrities
 from canonical.launchpad.webapp import canonical_url
 from canonical.testing.layers import DatabaseFunctionalLayer
 from lp.bugs.interfaces.bugtask import IBugTaskSet
@@ -15,14 +19,16 @@
     ANONYMOUS,
     login,
     login_person,
+    login_team,
     logout,
     person_logged_in,
+    StormStatementRecorder,
     TestCaseWithFactory,
     )
+from lp.testing._webservice import QueryCollector
 from lp.testing.matchers import HasQueryCount
 from lp.testing.memcache import MemcacheTestCase
 from lp.testing.views import create_initialized_view
-from lp.testing._webservice import QueryCollector
 
 
 class TestMilestoneViews(TestCaseWithFactory):
@@ -163,10 +169,74 @@
         self.assertEqual(0, product.development_focus.all_bugtasks.count())
 
 
-class TestMilestoneIndex(TestCaseWithFactory):
+class TestProjectMilestoneIndexQueryCount(TestCaseWithFactory):
 
     layer = DatabaseFunctionalLayer
 
+    def setUp(self):
+        super(TestProjectMilestoneIndexQueryCount, self).setUp()
+        # Increase cache size so that the query counts aren't affected
+        # by objects being removed from the cache early.
+        config.push('storm-cache', dedent('''
+            [launchpad]
+            storm_cache_size: 1000
+            '''))
+        self.addCleanup(config.pop, 'storm-cache')
+        owner = self.factory.makePerson(name='product-owner')
+        self.product = self.factory.makeProduct(owner=owner)
+        login_person(self.product.owner)
+        self.milestone = self.factory.makeMilestone(
+            productseries=self.product.development_focus)
+
+    def add_bug(self, count):
+        login_person(self.product.owner)
+        for i in range(count):
+            bug = self.factory.makeBug(product=self.product)
+            bug.bugtasks[0].transitionToMilestone(
+                self.milestone, self.product.owner)
+            # This is necessary to test precaching of assignees.
+            bug.bugtasks[0].transitionToAssignee(
+                self.factory.makePerson())
+        logout()
+
+    def test_bugtasks_queries(self):
+        # The view.bugtasks attribute will make four queries:
+        #  1. Load bugtasks and bugs.
+        #  2. Load assignees (Person, Account, and EmailAddress).
+        #  3. Load links to specifications.
+        #  4. Load links to branches.
+        bugtask_count = 10
+        self.add_bug(bugtask_count)
+        login_person(self.product.owner)
+        view = create_initialized_view(self.milestone, '+index')
+        # Eliminate permission check for the admin team from the
+        # recorded queries by loading it now. If the test ever breaks,
+        # the person fixing it won't waste time trying to track this
+        # query down.
+        getUtility(ILaunchpadCelebrities).admin
+        with StormStatementRecorder() as recorder:
+            bugtasks = list(view.bugtasks)
+        self.assertThat(recorder, HasQueryCount(LessThan(5)))
+        self.assertEqual(bugtask_count, len(bugtasks))
+
+    def test_milestone_eager_loading(self):
+        # Verify that the number of queries does not increase with more
+        # bugs with different assignees.
+        collector = QueryCollector()
+        collector.register()
+        self.addCleanup(collector.unregister)
+        milestone_url = canonical_url(self.milestone)
+        query_limit = 34
+        browser = self.getUserBrowser(user=self.product.owner)
+
+        self.add_bug(3)
+        browser.open(milestone_url)
+        self.assertThat(collector, HasQueryCount(LessThan(query_limit)))
+
+        self.add_bug(10)
+        browser.open(milestone_url)
+        self.assertThat(collector, HasQueryCount(LessThan(query_limit)))
+
     def test_more_private_bugs_query_count_is_constant(self):
         # This test tests that as we add more private bugs to a milestone
         # index page, the number of queries issued by the page does not
@@ -175,6 +245,7 @@
         # is very large already, if the test fails due to such a change,
         # please cut some more of the existing fat out of it rather than
         # increasing the cap.
+        page_query_limit = 34
         product = self.factory.makeProduct()
         login_person(product.owner)
         milestone = self.factory.makeMilestone(
@@ -199,8 +270,6 @@
         collector = QueryCollector()
         collector.register()
         self.addCleanup(collector.unregister)
-        # This page is rather high in queries : 46 as a baseline. 20100817
-        page_query_limit = 46
         browser.open(milestone_url)
         # Check that the test found the bug
         self.assertTrue(bug1_url in browser.contents)
@@ -228,3 +297,168 @@
         self.assertEqual(with_1_private_bug, with_3_private_bugs,
             "different query count: \n%s\n******************\n%s\n" % (
             '\n'.join(with_1_queries), '\n'.join(with_3_queries)))
+
+
+class TestProjectGroupMilestoneIndexQueryCount(TestCaseWithFactory):
+
+    layer = DatabaseFunctionalLayer
+
+    def setUp(self):
+        super(TestProjectGroupMilestoneIndexQueryCount, self).setUp()
+        # Increase cache size so that the query counts aren't affected
+        # by objects being removed from the cache early.
+        config.push('storm-cache', dedent('''
+            [launchpad]
+            storm_cache_size: 1000
+            '''))
+        self.addCleanup(config.pop, 'storm-cache')
+        self.owner = self.factory.makePerson(name='product-owner')
+        self.project_group = self.factory.makeProject(owner=self.owner)
+        login_person(self.owner)
+        self.milestone_name = 'foo'
+        # A ProjectGroup milestone doesn't exist unless one of its
+        # Projects has a milestone of that name.
+        product = self.factory.makeProduct(
+            owner=self.owner, project=self.project_group)
+        self.factory.makeMilestone(
+            productseries=product.development_focus,
+            name=self.milestone_name)
+        self.milestone = self.project_group.getMilestone(
+            self.milestone_name)
+
+    def add_bug(self, count, milestone):
+        login_person(self.owner)
+        for i in range(count):
+            bug = self.factory.makeBug(product=milestone.product)
+            bug.bugtasks[0].transitionToMilestone(
+                milestone, self.owner)
+            # This is necessary to test precaching of assignees.
+            bug.bugtasks[0].transitionToAssignee(
+                self.factory.makePerson())
+        logout()
+
+    def test_bugtasks_queries(self):
+        # The view.bugtasks attribute will make four queries:
+        #  1. Load bugtasks and bugs.
+        #  2. Load assignees (Person, Account, and EmailAddress).
+        #  3. Load links to specifications.
+        #  4. Load links to branches.
+        bugtask_count = 10
+        product = self.factory.makeProduct(
+            owner=self.owner, project=self.project_group)
+        milestone = self.factory.makeMilestone(
+            productseries=product.development_focus,
+            name=self.milestone_name)
+        self.add_bug(bugtask_count, milestone)
+        login_person(self.owner)
+        view = create_initialized_view(self.milestone, '+index')
+        # Eliminate permission check for the admin team from the
+        # recorded queries by loading it now. If the test ever breaks,
+        # the person fixing it won't waste time trying to track this
+        # query down.
+        getUtility(ILaunchpadCelebrities).admin
+        with StormStatementRecorder() as recorder:
+            bugtasks = list(view.bugtasks)
+        self.assertThat(recorder, HasQueryCount(LessThan(8)))
+        self.assertEqual(bugtask_count, len(bugtasks))
+
+    def test_milestone_eager_loading(self):
+        # Verify that the number of queries does not increase with more
+        # bugs with different assignees as long as the number of
+        # projects doesn't increase.
+        collector = QueryCollector()
+        collector.register()
+        self.addCleanup(collector.unregister)
+        milestone_url = canonical_url(self.milestone)
+        query_limit = 37
+        browser = self.getUserBrowser(user=self.owner)
+
+        login_person(self.owner)
+        product = self.factory.makeProduct(
+            owner=self.owner, project=self.project_group)
+        milestone = self.factory.makeMilestone(
+            productseries=product.development_focus,
+            name=self.milestone_name)
+        self.add_bug(1, milestone)
+        browser.open(milestone_url)
+        self.assertThat(collector, HasQueryCount(LessThan(query_limit)))
+
+        self.add_bug(10, milestone)
+        browser.open(milestone_url)
+        self.assertThat(collector, HasQueryCount(LessThan(query_limit)))
+
+
+class TestDistributionMilestoneIndexQueryCount(TestCaseWithFactory):
+
+    layer = DatabaseFunctionalLayer
+
+    def setUp(self):
+        super(TestDistributionMilestoneIndexQueryCount, self).setUp()
+        # Increase cache size so that the query counts aren't affected
+        # by objects being removed from the cache early.
+        config.push('storm-cache', dedent('''
+            [launchpad]
+            storm_cache_size: 1000
+            '''))
+        self.addCleanup(config.pop, 'storm-cache')
+        self.ubuntu = getUtility(ILaunchpadCelebrities).ubuntu
+        self.owner = self.factory.makePerson(name='test-owner')
+        login_team(self.ubuntu.owner)
+        self.ubuntu.owner = self.owner
+        self.sourcepackagename = self.factory.makeSourcePackageName(
+            'foo-package')
+        login_person(self.owner)
+        self.milestone = self.factory.makeMilestone(
+            distribution=self.ubuntu)
+
+    def add_bug(self, count):
+        login_person(self.owner)
+        for i in range(count):
+            bug = self.factory.makeBug(distribution=self.ubuntu)
+            distrosourcepackage = self.factory.makeDistributionSourcePackage(
+                distribution=self.ubuntu)
+            bug.bugtasks[0].transitionToTarget(distrosourcepackage)
+            bug.bugtasks[0].transitionToMilestone(
+                self.milestone, self.owner)
+            # This is necessary to test precaching of assignees.
+            bug.bugtasks[0].transitionToAssignee(
+                self.factory.makePerson())
+        logout()
+
+    def test_bugtasks_queries(self):
+        # The view.bugtasks attribute will make four queries:
+        #  1. Load bugtasks and bugs.
+        #  2. Load assignees (Person, Account, and EmailAddress).
+        #  3. Load links to specifications.
+        #  4. Load links to branches.
+        bugtask_count = 10
+        self.add_bug(bugtask_count)
+        login_person(self.owner)
+        view = create_initialized_view(self.milestone, '+index')
+        # Eliminate permission check for the admin team from the
+        # recorded queries by loading it now. If the test ever breaks,
+        # the person fixing it won't waste time trying to track this
+        # query down.
+        getUtility(ILaunchpadCelebrities).admin
+        with StormStatementRecorder() as recorder:
+            bugtasks = list(view.bugtasks)
+        self.assertThat(recorder, HasQueryCount(LessThan(8)))
+        self.assertEqual(bugtask_count, len(bugtasks))
+
+    def test_milestone_eager_loading(self):
+        # Verify that the number of queries does not increase with more
+        # bugs with different assignees.
+        collector = QueryCollector()
+        collector.register()
+        self.addCleanup(collector.unregister)
+        milestone_url = canonical_url(self.milestone)
+        query_limit = 32
+        browser = self.getUserBrowser(user=self.owner)
+
+        self.add_bug(3)
+        browser.open(milestone_url)
+        self.assertThat(collector, HasQueryCount(LessThan(query_limit)))
+
+        self.add_bug(10)
+        browser.open(milestone_url)
+        self.assertThat(collector, HasQueryCount(LessThan(query_limit)))

=== modified file 'lib/lp/registry/interfaces/person.py'
--- lib/lp/registry/interfaces/person.py	2010-11-22 23:08:11 +0000
+++ lib/lp/registry/interfaces/person.py	2010-12-03 03:16:30 +0000
@@ -2073,6 +2073,24 @@
     def cacheBrandingForPeople(people):
         """Prefetch Librarian aliases and content for personal images."""
 
+    def getPrecachedPersonsFromIDs(
+        person_ids, need_karma=False, need_ubuntu_coc=False,
+        need_location=False, need_archive=False,
+        need_preferred_email=False, need_validity=False):
+        """Lookup person objects from ids with optional precaching.
+
+        :param person_ids: List of person ids.
+        :param need_karma: The karma attribute will be cached.
+        :param need_ubuntu_coc: The is_ubuntu_coc_signer attribute will be
+            cached.
+        :param need_location: The location attribute will be cached.
+        :param need_archive: The archive attribute will be cached.
+        :param need_preferred_email: The preferred email attribute will be
+            cached.
+        :param need_validity: The is_valid attribute will be cached.
+        """
+
+
 
 class IRequestPeopleMerge(Interface):
     """This schema is used only because we want a very specific vocabulary."""

=== modified file 'lib/lp/registry/model/person.py'
--- lib/lp/registry/model/person.py	2010-12-01 23:39:05 +0000
+++ lib/lp/registry/model/person.py	2010-12-03 03:16:30 +0000
@@ -1669,7 +1669,6 @@
         #       The difference between the two is that
         #       getMembersWithPreferredEmails includes self, which is arguably
         #       wrong, but perhaps deliberate.
-        store = Store.of(self)
         origin = [
             Person,
             Join(TeamParticipation, TeamParticipation.person == Person.id),
@@ -1679,99 +1678,12 @@
             TeamParticipation.team == self.id,
             # But not the team itself.
             TeamParticipation.person != self.id)
-        columns = [Person]
-        decorators = []
-        if need_karma:
-            # New people have no karmatotalcache rows.
-            origin.append(
-                LeftJoin(KarmaTotalCache,
-                    KarmaTotalCache.person == Person.id))
-            columns.append(KarmaTotalCache)
-        if need_ubuntu_coc:
-            columns.append(Alias(Exists(Select(SignedCodeOfConduct,
-                And(Person._is_ubuntu_coc_signer_condition(),
-                    SignedCodeOfConduct.ownerID == Person.id))),
-                name='is_ubuntu_coc_signer'))
-        if need_location:
-            # New people have no location rows
-            origin.append(
-                LeftJoin(PersonLocation,
-                    PersonLocation.person == Person.id))
-            columns.append(PersonLocation)
-        if need_archive:
-            # Not everyone has PPA's
-            # It would be nice to cleanly expose the soyuz rules for this to
-            # avoid duplicating the relationships.
-            origin.append(
-                LeftJoin(Archive, Archive.owner == Person.id))
-            columns.append(Archive)
-            conditions = And(conditions,
-                Or(Archive.id == None, And(
-                Archive.id == Select(Min(Archive.id),
-                    Archive.owner == Person.id, Archive),
-                Archive.purpose == ArchivePurpose.PPA)))
-        # checking validity requires having a preferred email.
-        if need_preferred_email and not need_validity:
-            # Teams don't have email, so a left join
-            origin.append(
-                LeftJoin(EmailAddress, EmailAddress.person == Person.id))
-            columns.append(EmailAddress)
-            conditions = And(conditions,
-                Or(EmailAddress.status == None,
-                    EmailAddress.status == EmailAddressStatus.PREFERRED))
-        if need_validity:
-            valid_stuff = Person._validity_queries()
-            origin.extend(valid_stuff["joins"])
-            columns.extend(valid_stuff["tables"])
-            decorators.extend(valid_stuff["decorators"])
-        if len(columns) == 1:
-            columns = columns[0]
-            # Return a simple ResultSet
-            return store.using(*origin).find(columns, conditions)
-        # Adapt the result into a cached Person.
-        columns = tuple(columns)
-        raw_result = store.using(*origin).find(columns, conditions)
-
-        def prepopulate_person(row):
-            result = row[0]
-            cache = get_property_cache(result)
-            index = 1
-            #-- karma caching
-            if need_karma:
-                karma = row[index]
-                index += 1
-                if karma is None:
-                    karma_total = 0
-                else:
-                    karma_total = karma.karma_total
-                cache.karma = karma_total
-            #-- ubuntu code of conduct signer status caching.
-            if need_ubuntu_coc:
-                signed = row[index]
-                index += 1
-                cache.is_ubuntu_coc_signer = signed
-            #-- location caching
-            if need_location:
-                location = row[index]
-                index += 1
-                cache.location = location
-            #-- archive caching
-            if need_archive:
-                archive = row[index]
-                index += 1
-                cache.archive = archive
-            #-- preferred email caching
-            if need_preferred_email and not need_validity:
-                email = row[index]
-                index += 1
-                cache.preferredemail = email
-            for decorator in decorators:
-                column = row[index]
-                index += 1
-                decorator(result, column)
-            return result
-        return DecoratedResultSet(raw_result,
-            result_decorator=prepopulate_person)
+        return PersonSet()._getPrecachedPersons(
+            origin, conditions,
+            need_karma=need_karma, need_ubuntu_coc=need_ubuntu_coc,
+            need_location=need_location, need_archive=need_archive,
+            need_preferred_email=need_preferred_email,
+            need_validity=need_validity)
 
     def _getMembersWithPreferredEmails(self):
         """Helper method for public getMembersWithPreferredEmails.
@@ -4126,6 +4038,135 @@
         list(LibraryFileAlias.select("LibraryFileAlias.id IN %s"
              % sqlvalues(aliases), prejoins=["content"]))
 
+    def getPrecachedPersonsFromIDs(
+        self, person_ids, need_karma=False, need_ubuntu_coc=False,
+        need_location=False, need_archive=False,
+        need_preferred_email=False, need_validity=False):
+        """See `IPersonSet`."""
+        origin = [Person]
+        conditions = [
+            Person.id.is_in(person_ids)]
+        return self._getPrecachedPersons(
+            origin, conditions,
+            need_karma=need_karma, need_ubuntu_coc=need_ubuntu_coc,
+            need_location=need_location, need_archive=need_archive,
+            need_preferred_email=need_preferred_email,
+            need_validity=need_validity)
+
+    def _getPrecachedPersons(
+        self, origin, conditions,
+        need_karma=False, need_ubuntu_coc=False,
+        need_location=False, need_archive=False, need_preferred_email=False,
+        need_validity=False):
+        """Lookup all members of the team with optional precaching.
+
+        :param origin: List of storm tables and joins. This list will be
+            appended to. The Person table is required.
+        :param conditions: Storm conditions for tables in origin.
+        :param need_karma: The karma attribute will be cached.
+        :param need_ubuntu_coc: The is_ubuntu_coc_signer attribute will be
+            cached.
+        :param need_location: The location attribute will be cached.
+        :param need_archive: The archive attribute will be cached.
+        :param need_preferred_email: The preferred email attribute will be
+            cached.
+        :param need_validity: The is_valid attribute will be cached.
+        """
+        store = IStore(Person)
+        columns = [Person]
+        decorators = []
+        if need_karma:
+            # New people have no karmatotalcache rows.
+            origin.append(
+                LeftJoin(KarmaTotalCache,
+                    KarmaTotalCache.person == Person.id))
+            columns.append(KarmaTotalCache)
+        if need_ubuntu_coc:
+            columns.append(Alias(Exists(Select(SignedCodeOfConduct,
+                And(Person._is_ubuntu_coc_signer_condition(),
+                    SignedCodeOfConduct.ownerID == Person.id))),
+                name='is_ubuntu_coc_signer'))
+        if need_location:
+            # New people have no location rows
+            origin.append(
+                LeftJoin(PersonLocation,
+                    PersonLocation.person == Person.id))
+            columns.append(PersonLocation)
+        if need_archive:
+            # Not everyone has PPA's
+            # It would be nice to cleanly expose the soyuz rules for this to
+            # avoid duplicating the relationships.
+            origin.append(
+                LeftJoin(Archive, Archive.owner == Person.id))
+            columns.append(Archive)
+            conditions = And(conditions,
+                Or(Archive.id == None, And(
+                Archive.id == Select(Min(Archive.id),
+                    Archive.owner == Person.id, Archive),
+                Archive.purpose == ArchivePurpose.PPA)))
+        # checking validity requires having a preferred email.
+        if need_preferred_email and not need_validity:
+            # Teams don't have email, so a left join
+            origin.append(
+                LeftJoin(EmailAddress, EmailAddress.person == Person.id))
+            columns.append(EmailAddress)
+            conditions = And(conditions,
+                Or(EmailAddress.status == None,
+                    EmailAddress.status == EmailAddressStatus.PREFERRED))
+        if need_validity:
+            valid_stuff = Person._validity_queries()
+            origin.extend(valid_stuff["joins"])
+            columns.extend(valid_stuff["tables"])
+            decorators.extend(valid_stuff["decorators"])
+        if len(columns) == 1:
+            columns = columns[0]
+            # Return a simple ResultSet
+            return store.using(*origin).find(columns, conditions)
+        # Adapt the result into a cached Person.
+        columns = tuple(columns)
+        raw_result = store.using(*origin).find(columns, conditions)
+
+        def prepopulate_person(row):
+            result = row[0]
+            cache = get_property_cache(result)
+            index = 1
+            #-- karma caching
+            if need_karma:
+                karma = row[index]
+                index += 1
+                if karma is None:
+                    karma_total = 0
+                else:
+                    karma_total = karma.karma_total
+                cache.karma = karma_total
+            #-- ubuntu code of conduct signer status caching.
+            if need_ubuntu_coc:
+                signed = row[index]
+                index += 1
+                cache.is_ubuntu_coc_signer = signed
+            #-- location caching
+            if need_location:
+                location = row[index]
+                index += 1
+                cache.location = location
+            #-- archive caching
+            if need_archive:
+                archive = row[index]
+                index += 1
+                cache.archive = archive
+            #-- preferred email caching
+            if need_preferred_email and not need_validity:
+                email = row[index]
+                index += 1
+                cache.preferredemail = email
+            for decorator in decorators:
+                column = row[index]
+                index += 1
+                decorator(result, column)
+            return result
+        return DecoratedResultSet(raw_result,
+            result_decorator=prepopulate_person)
+
 
 # Provide a storm alias from Person to Owner. This is useful in queries on
 # objects that have more than just an owner associated with them.