
launchpad-reviewers team mailing list archive

[Merge] lp:~jtv/maas/import-from-configured-mirror into lp:maas

 

Jeroen T. Vermeulen has proposed merging lp:~jtv/maas/import-from-configured-mirror into lp:maas.

Commit message:
Tell the import scripts, when run as a celery task, what archives to use.

Requested reviews:
  Launchpad code reviewers (launchpad-reviewers)

For more details, see:
https://code.launchpad.net/~jtv/maas/import-from-configured-mirror/+merge/133505

This ties in with Raphaël's ongoing work: he's changing the UI to manage these same settings.  I cribbed the settings' names from lp:~rvb/maas/mirror-settings-re-2.

Archive locations are passed to the scripts through environment variables, which the scripts already obey.  The impact on the scripts themselves is quite minimal, but one name in maas-import-ephemerals needed changing for uniformity.  I added and documented some compatibility code.
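
On the task side this boils down to copying the worker's environment and overlaying only the archive settings that were actually given.  A trimmed sketch of the tasks.py change further down (the @task decorator and the http_proxy handling are left out here):

    import os
    from subprocess import check_call

    def import_boot_images(main_archive=None, ports_archive=None,
                           cloud_images_archive=None):
        # Start from the worker's own environment; override only the
        # archive locations that were actually provided.
        env = dict(os.environ)
        if main_archive is not None:
            env['MAIN_ARCHIVE'] = main_archive
        if ports_archive is not None:
            env['PORTS_ARCHIVE'] = ports_archive
        if cloud_images_archive is not None:
            env['CLOUD_IMAGES_ARCHIVE'] = cloud_images_archive
        check_call(['sudo', '-n', 'maas-import-pxe-files'], env=env)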

URLs that have not been configured yet fall back to the defaults encoded in the import scripts.  Missing archive locations are not passed through to the task at all.
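
Concretely, the cluster controller collects the configured settings and drops the unset ones before queueing the task.  A simplified sketch of the nodegroup.py change below, with a hypothetical get_config callable standing in for Config.objects.get_config (assumed to return None for settings that have neither a value nor a default):

    def make_import_task_kwargs(get_config):
        """Collect configured archive settings, dropping unset ones."""
        names = ['http_proxy', 'main_archive', 'ports_archive',
                 'cloud_images_archive']
        config = {name: get_config(name) for name in names}
        return {
            name: value for name, value in config.items()
            if value is not None}

    # Example: only main_archive is configured, so only it gets passed on.
    kwargs = make_import_task_kwargs(
        {'main_archive': 'http://archive.example.com/ubuntu/'}.get)
    # kwargs == {'main_archive': 'http://archive.example.com/ubuntu/'}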

Tests pass for this branch even though it does not incorporate Raphaël's branches, which haven't landed yet.  That's because my tests configure the archive settings themselves.  In a separate branch, built on top of both this one and Raphaël's, I will add an integration test to establish that the two sets of changes operate on the same configuration items.

My test code repeats make_archive_url in two places.  It might be worth abstracting a single make_url somewhere that's available to all apps, but then the sensible next step would be to sanitize all code that constructs URLs.  A job best left to a separate branch!
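
For the record, the shared helper I have in mind would be roughly this (entirely hypothetical: neither make_url nor its location is part of this branch, and it assumes the maastesting factory would be importable from wherever the helper ends up):

    from maastesting.factory import factory

    def make_url(name='host', path='path'):
        """Create an arbitrary but plausible URL for use in tests."""
        # factory.make_name appends a random suffix to the given prefix.
        return 'http://%s.example.com/%s/' % (
            factory.make_name(name), factory.make_name(path))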


Jeroen
-- 
https://code.launchpad.net/~jtv/maas/import-from-configured-mirror/+merge/133505
Your team Launchpad code reviewers is requested to review the proposed merge of lp:~jtv/maas/import-from-configured-mirror into lp:maas.
=== modified file 'scripts/maas-import-ephemerals'
--- scripts/maas-import-ephemerals	2012-11-08 06:34:48 +0000
+++ scripts/maas-import-ephemerals	2012-11-08 16:01:46 +0000
@@ -20,7 +20,10 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
 VERBOSITY=0
-REMOTE_IMAGES_MIRROR=${REMOTE_IMAGES_MIRROR:-https://maas.ubuntu.com/images}
+
+# Mirror to load cloud images from.  When the cluster controller runs the
+# import scripts, it provides a setting from the server side.
+CLOUD_IMAGES_ARCHIVE=${CLOUD_IMAGES_ARCHIVE:-https://maas.ubuntu.com/images}
 
 # iSCSI targets configuration file.
 SYS_TGT_CONF="/etc/tgt/targets.conf"
@@ -89,10 +92,10 @@
 
 
 query_remote() {
-    # query /query data at REMOTE_IMAGES_MIRROR
+    # query /query data at CLOUD_IMAGES_ARCHIVE
     # returns 7 values prefixed with 'r_'
     local iarch=$1 irelease=$2 istream=$3 out=""
-    local burl="${REMOTE_IMAGES_MIRROR}/query"
+    local burl="${CLOUD_IMAGES_ARCHIVE}/query"
     local url="$burl/$irelease/$istream/${STREAM}-dl.current.txt"
     local target="$TEMP_D/query/$release.$stream"
     mkdir -p -- "$TEMP_D/query"
@@ -168,7 +171,7 @@
     local wd="$1" exdir="" tarball=""
     shift
     local release=$1 stream=$2 label=$3 serial=$4 arch=$5 url=$6 name=$7
-    local furl="$REMOTE_IMAGES_MIRROR/$url"
+    local furl="$CLOUD_IMAGES_ARCHIVE/$url"
 
     mkdir -p "$wd"
     cat > "$wd/info" <<EOF
@@ -365,7 +368,7 @@
         query_local "$arch" "$release" "$BUILD_NAME" ||
             fail "failed to query local for $release/$arch"
         query_remote "$arch" "$release" "$BUILD_NAME" ||
-            fail "remote query of $REMOTE_IMAGES_MIRROR failed"
+            fail "remote query of $CLOUD_IMAGES_ARCHIVE failed"
 
         info="rel: $r_release, arch: $arch: name: $r_name"
         debug 2 "$info"

=== modified file 'scripts/maas-import-pxe-files'
--- scripts/maas-import-pxe-files	2012-11-08 06:34:48 +0000
+++ scripts/maas-import-pxe-files	2012-11-08 16:01:46 +0000
@@ -20,7 +20,8 @@
 local_settings="$(pwd)/$settings"
 [ -r $local_settings ] && . $local_settings
 
-# Download locations for Ubuntu releases.
+# Download locations for Ubuntu releases.  When the cluster controller runs
+# the import scripts, it provides settings from the server side.
 MAIN_ARCHIVE=${MAIN_ARCHIVE:-http://archive.ubuntu.com/ubuntu/}
 PORTS_ARCHIVE=${PORTS_ARCHIVE:-http://ports.ubuntu.com/ubuntu-ports/}
 

=== modified file 'src/maasserver/models/nodegroup.py'
--- src/maasserver/models/nodegroup.py	2012-11-08 06:34:48 +0000
+++ src/maasserver/models/nodegroup.py	2012-11-08 16:01:46 +0000
@@ -239,7 +239,19 @@
         """
         # Avoid circular imports.
         from maasserver.models import Config
-        task_kwargs = dict(http_proxy=Config.objects.get_config('http_proxy'))
+        config_parameters = {
+            'http_proxy',
+            'main_archive',
+            'ports_archive',
+            'cloud_images_archive',
+        }
+        config = {
+            name: Config.objects.get_config(name)
+            for name in config_parameters}
+        task_kwargs = {
+            name: value
+            for name, value in config.items()
+                if value is not None}
         import_boot_images.apply_async(queue=self.uuid, kwargs=task_kwargs)
 
     def add_dhcp_host_maps(self, new_leases):

=== modified file 'src/maasserver/tests/test_nodegroup.py'
--- src/maasserver/tests/test_nodegroup.py	2012-11-07 10:39:19 +0000
+++ src/maasserver/tests/test_nodegroup.py	2012-11-08 16:01:46 +0000
@@ -265,6 +265,14 @@
         self.assertItemsEqual(calls, recorder.apply_async.call_args_list)
 
 
+def make_archive_url(name):
+    """Create a fake archive URL."""
+    return "http://%s.example.com/%s/"; % (
+        factory.make_name(name),
+        factory.make_name('path'),
+        )
+
+
 class TestNodeGroup(TestCase):
 
     resources = (
@@ -373,6 +381,27 @@
         recorder.assert_called_once_with(
             ['sudo', '-n', 'maas-import-pxe-files'], env=expected_env)
 
+    def test_import_boot_images_selects_archive_locations_from_config(self):
+        recorder = self.patch(nodegroup_module, 'import_boot_images')
+        nodegroup = factory.make_node_group(status=NODEGROUP_STATUS.ACCEPTED)
+
+        archives = {
+            'main_archive': make_archive_url('main'),
+            'ports_archive': make_archive_url('ports'),
+            'cloud_images_archive': make_archive_url('cloud_images'),
+        }
+        for key, value in archives.items():
+            Config.objects.set_config(key, value)
+
+        nodegroup.import_boot_images()
+
+        kwargs = recorder.apply_async.call_args[1]['kwargs']
+        archive_options = {
+            arg: value
+            for arg, value in kwargs.items()
+                if arg.endswith('_archive')}
+        self.assertEqual(archives, archive_options)
+
     def test_import_boot_images_sent_to_nodegroup_queue(self):
         recorder = self.patch(nodegroup_module, 'import_boot_images', Mock())
         nodegroup = factory.make_node_group()

=== modified file 'src/maasserver/tests/test_views_settings.py'
--- src/maasserver/tests/test_views_settings.py	2012-11-08 10:40:48 +0000
+++ src/maasserver/tests/test_views_settings.py	2012-11-08 16:01:46 +0000
@@ -38,7 +38,10 @@
     AdminLoggedInTestCase,
     LoggedInTestCase,
     )
-from mock import call
+from mock import (
+    ANY,
+    call,
+    )
 
 
 class SettingsTest(AdminLoggedInTestCase):
@@ -248,7 +251,7 @@
             reverse('settings'), {'import_all_boot_images': 1})
         self.assertEqual(httplib.FOUND, response.status_code)
         calls = [
-           call(queue=nodegroup.work_queue, kwargs={'http_proxy': None})
+           call(queue=nodegroup.work_queue, kwargs=ANY)
            for nodegroup in accepted_nodegroups
         ]
         self.assertItemsEqual(calls, recorder.apply_async.call_args_list)

=== modified file 'src/provisioningserver/tasks.py'
--- src/provisioningserver/tasks.py	2012-11-08 06:34:48 +0000
+++ src/provisioningserver/tasks.py	2012-11-08 16:01:46 +0000
@@ -376,9 +376,16 @@
 # =====================================================================
 
 @task
-def import_boot_images(http_proxy=None):
+def import_boot_images(http_proxy=None, main_archive=None, ports_archive=None,
+                       cloud_images_archive=None):
     env = dict(os.environ)
     if http_proxy is not None:
         env['http_proxy'] = http_proxy
         env['https_proxy'] = http_proxy
+    if main_archive is not None:
+        env['MAIN_ARCHIVE'] = main_archive
+    if ports_archive is not None:
+        env['PORTS_ARCHIVE'] = ports_archive
+    if cloud_images_archive is not None:
+        env['CLOUD_IMAGES_ARCHIVE'] = cloud_images_archive
     check_call(['sudo', '-n', 'maas-import-pxe-files'], env=env)

=== modified file 'src/provisioningserver/tests/test_tasks.py'
--- src/provisioningserver/tests/test_tasks.py	2012-11-08 06:34:48 +0000
+++ src/provisioningserver/tests/test_tasks.py	2012-11-08 16:01:46 +0000
@@ -540,6 +540,11 @@
 
 class TestImportPxeFiles(PservTestCase):
 
+    def make_archive_url(self, name=None):
+        if name is None:
+            name = factory.make_name('archive')
+        return 'http://%s.example.com/%s' % (name, factory.make_name('path'))
+
     def test_import_boot_images(self):
         recorder = self.patch(tasks, 'check_call', Mock())
         import_boot_images()
@@ -560,3 +565,21 @@
         expected_env = dict(os.environ, http_proxy=proxy, https_proxy=proxy)
         recorder.assert_called_once_with(
             ['sudo', '-n', 'maas-import-pxe-files'], env=expected_env)
+
+    def test_import_boot_images_sets_archive_locations(self):
+        self.patch(tasks, 'check_call')
+        archives = {
+            'main_archive': self.make_archive_url('main'),
+            'ports_archive': self.make_archive_url('ports'),
+            'cloud_images_archive': self.make_archive_url('cloud-images'),
+        }
+        expected_settings = {
+            parameter.upper(): value
+            for parameter, value in archives.items()}
+        import_boot_images(**archives)
+        env = tasks.check_call.call_args[1]['env']
+        archive_settings = {
+            variable: value
+            for variable, value in env.iteritems()
+                if variable.endswith('_ARCHIVE')}
+        self.assertEqual(expected_settings, archive_settings)