
cloud-init-dev team mailing list archive

Re: [Merge] ~powersj/cloud-init:cii-enable-ec2 into cloud-init:master

 

Thank you for good commit message text; it makes reading things easier.

* t2.micro for default instance type
  The default should be the smallest type that will work for both ebs(ssd)
  and instance-store.  From a quick look, m4.large (2 cpu / 8G) at $0.10/hr
  is the cheapest thing you can find that runs on instance-store and exists
  in all regions (many do not have m3.medium).

  I guess at least just comment somewhere about ebs affecting instance types.

* boot_clean script: at some point it makes sense to use 'cloud-init clean'
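
  e.g. something along these lines in the clean script (assuming the
  clean subcommand and its --logs flag are available in the image):

    sudo cloud-init clean --logs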

* can you mention in your commit message that you removed trailing
  whitespace?  Just looking, it's hard to see the difference, and I
  assumed that I was just missing it with my eyes. ie:
  -The integration testing suite can automatically build a deb based on the 
  +The integration testing suite can automatically build a deb based on the

* vpc creation. You create a single vpc tagged 'cloud-init-testing', but
  you create ephemeral ssh keys and properly clean them up afterwards.
  Is there any reason not to create an ephemeral vpc and clean it up too?

  After typing that I googled.  There is a default limit of 5 VPCs per region.
    http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html

* What will happen with vpc (or other) resources when multiple instances of
  the test harness are running at the same time?  Will it all be ok?
  Particularly since we're not re-creating the vpc, gateway and such, it
  could be problematic.




Diff comments:

> diff --git a/doc/rtd/topics/tests.rst b/doc/rtd/topics/tests.rst
> index d668e3f..730df29 100644
> --- a/doc/rtd/topics/tests.rst
> +++ b/doc/rtd/topics/tests.rst
> @@ -392,6 +392,32 @@ Development Checklist
>             --test modules/your_test.yaml \
>             [--deb <build of cloud-init>]
>  
> +
> +Platforms
> +=========
> +
> +EC2
> +---
> +To run on the EC2 platform it is required that the user has an AWS credentials
> +configuration file specifying his or her access keys and a default region.
> +These configuration files are the standard that the AWS cli and other AWS
> +tools utilize for interacting directly with AWS itself and are normally
> +generated when running ``aws configure``:
> +
> +.. code-block:: bash
> +
> +    $ cat $HOME/.aws/config
> +    [default]
> +    aws_access_key_id = <KEY HERE>
> +    aws_secret_access_key = <KEY HERE>
> +

Ultimately I think we should probably support having config in the test
harness that provides these creds, rather than requiring the user to spread
creds across their system in seemingly random files based on wherever the
client we selected happens to look.
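
For example, boto3 will accept creds passed in explicitly, so the harness
could read them from its own config.  A sketch (key_id, secret and region
here stand in for values the harness config would provide):

    session = boto3.Session(
        aws_access_key_id=key_id,
        aws_secret_access_key=secret,
        region_name=region)
    ec2_client = session.client('ec2')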

> +.. code-block:: bash
> +
> +    $ cat $HOME/.aws/config
> +    [default]
> +    region = us-west-2
> +
> +
>  Architecture
>  ============
>  
> diff --git a/tests/cloud_tests/platforms/ec2/image.py b/tests/cloud_tests/platforms/ec2/image.py
> new file mode 100644
> index 0000000..456dda2
> --- /dev/null
> +++ b/tests/cloud_tests/platforms/ec2/image.py
> @@ -0,0 +1,105 @@
> +# This file is part of cloud-init. See LICENSE file for license information.
> +
> +"""EC2 Image Base Class."""
> +
> +from ..images import Image
> +from .snapshot import EC2Snapshot
> +
> +
> +class EC2Image(Image):
> +    """EC2 backed image."""
> +
> +    platform_name = 'ec2'
> +
> +    def __init__(self, platform, config, image_ami):
> +        """Set up image.
> +
> +        @param platform: platform object
> +        @param config: image configuration
> +        @param image_ami: string of image ami ID
> +        """
> +        super(EC2Image, self).__init__(platform, config)
> +        self._img_instance = None
> +        self.image_ami = image_ami
> +        self.image_ami_edited = False
> +
> +    @property
> +    def _instance(self):
> +        """Internal use only, returns a running instance"""
> +        if not self._img_instance:
> +            self._img_instance = self.platform.create_instance(
> +                self.properties, self.config, self.features,
> +                self.image_ami, b'')
> +            self._img_instance.start(wait=True, wait_for_cloud_init=True)
> +            self.modified = True
> +        return self._img_instance
> +
> +    @property
> +    def properties(self):
> +        """Dictionary containing: 'arch', 'os', 'version', 'release'."""

Why do we change 'family' to 'os'?
Won't that just be confusing?

We really need to fix this, it's very confusing.
Here we take the word 'family', which seems pretty strongly defined in
 tests/cloud_tests/util.py:OS_FAMILY_MAPPING,
and name it 'os'.

And then 2 of 3 uses are identical (nocloudkvm and ec2), so maybe put that
in the default?
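
A sketch of what that shared default could look like on the base Image
class (same dict the two platforms already build):

    @property
    def properties(self):
        """Dictionary containing: 'arch', 'os', 'version', 'release'."""
        return {
            'arch': self.config['arch'],
            'os': self.config['family'],
            'release': self.config['release'],
            'version': self.config['version'],
        }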

> +        return {
> +            'arch': self.config['arch'],
> +            'os': self.config['family'],
> +            'release': self.config['release'],
> +            'version': self.config['version'],
> +        }
> +
> +    def destroy(self):
> +        """Unset path to signal image is no longer used.
> +
> +        The removal of the images and all other items is handled by the
> +        framework. In some cases we want to keep the images, so let the
> +        framework decide whether to keep or destroy everything.
> +        """
> +        if self.image_ami_edited:
> +            self.platform.ec2_client.deregister_image(
> +                ImageId=self.image_ami_edited
> +            )

Destroying an image should not destroy snapshots created from it.
We explicitly *fixed* that behavior in NoCloudKVM, didn't we?

> +
> +        super(EC2Image, self).destroy()
> +
> +    def _execute(self, *args, **kwargs):
> +        """Execute command in image, modifying image."""
> +        return self._instance._execute(*args, **kwargs)
> +
> +    def push_file(self, local_path, remote_path):
> +        """Copy file at 'local_path' to instance at 'remote_path'."""
> +        return self._instance.push_file(local_path, remote_path)
> +
> +    def run_script(self, *args, **kwargs):
> +        """Run script in image, modifying image.
> +
> +        @return_value: script output
> +        """
> +        return self._instance.run_script(*args, **kwargs)
> +
> +    def snapshot(self):
> +        """Create snapshot of image, block until done.
> +
> +        Will return base image_ami if no instance has been booted, otherwise
> +        will run the clean script, shutdown the instance, create a custom

You either need to start the instance back up here, or track that you shut
it down, so that subsequent 'execute' calls can actually work.

ie:
  my_image.execute('apt-get install some-package')
  my_snapshot = my_image.snapshot()
  my_image.execute('apt-get install more-packages')
  more_pkg_snap = my_image.snapshot()
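
One possible sketch; the _shut_down flag here is hypothetical, set by
snapshot() right after the shutdown(wait=True) call:

  def _execute(self, *args, **kwargs):
      """Execute command in image, restarting the instance if needed."""
      if self._shut_down:
          self._img_instance.start(wait=True)
          self._shut_down = False
      return self._instance._execute(*args, **kwargs)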

> +        AMI, and use that AMI once available.
> +        """
> +        if not self._img_instance:
> +            return EC2Snapshot(self.platform, self.properties, self.config,
> +                               self.features, self.image_ami)
> +
> +        if self.config.get('boot_clean_script'):
> +            self._img_instance.run_script(self.config.get('boot_clean_script'))
> +
> +        self._img_instance.shutdown(wait=True)
> +
> +        name = '%s-%s-%s' % (
> +            self.platform.tag, self.image_ami, self.platform.uuid
> +        )
> +        response = self.platform.ec2_client.create_image(
> +            Name=name, InstanceId=self._img_instance.instance.instance_id
> +        )
> +        self.image_ami_edited = response['ImageId']
> +        image = self.platform.ec2_resource.Image(self.image_ami_edited)
> +        self.platform.wait_for_state(image, 'available')
> +
> +        return EC2Snapshot(self.platform, self.properties, self.config,
> +                           self.features, self.image_ami_edited)
> +
> +# vi: ts=4 expandtab
> diff --git a/tests/cloud_tests/platforms/ec2/platform.py b/tests/cloud_tests/platforms/ec2/platform.py
> new file mode 100644
> index 0000000..55dfd0d
> --- /dev/null
> +++ b/tests/cloud_tests/platforms/ec2/platform.py
> @@ -0,0 +1,260 @@
> +# This file is part of cloud-init. See LICENSE file for license information.
> +
> +"""Base EC2 platform."""
> +import os
> +import time
> +import uuid
> +
> +import boto3
> +import botocore
> +
> +from ..platforms import Platform
> +from .image import EC2Image
> +from .instance import EC2Instance
> +from cloudinit import util as c_util
> +from tests.cloud_tests import util
> +
> +
> +class EC2Platform(Platform):
> +    """EC2 test platform."""
> +
> +    platform_name = 'ec2'
> +    ipv4_cidr = '192.168.1.0/20'
> +
> +    def __init__(self, config):
> +        """Set up platform."""
> +        super(EC2Platform, self).__init__(config)
> +        # Used with SSH key and custom AMI generation naming with EC2
> +        self.uuid = str(uuid.uuid1())[0:8]
> +        self.tag = config['tag']
> +
> +        self.ec2_client = boto3.client('ec2')
> +        self.ec2_resource = boto3.resource('ec2')
> +        self.ec2_region = boto3.Session().region_name
> +        self.instance_type = config['instance-type']
> +        self.key_name = self._upload_public_key(config)
> +        self.vpc = self._setup_vpc()
> +        self.vpc_security_group = self._get_security_group()
> +        self.vpc_subnet = self._get_subnet()
> +
> +    def create_instance(self, properties, config, features,
> +                        image_ami, user_data):
> +        """Create an instance
> +
> +        @param src_img_path: image path to launch from
> +        @param properties: image properties
> +        @param config: image configuration
> +        @param features: image features
> +        @param image_ami: string of image ami ID
> +        @param user_data: test user-data to pass to instance
> +        @return_value: cloud_tests.instances instance
> +        """
> +        return EC2Instance(self, properties, config, features,
> +                           image_ami, user_data)
> +
> +    def destroy(self):
> +        """Delete platform."""
> +        if self.key_name:
> +            self.ec2_client.delete_key_pair(KeyName=self.key_name)
> +
> +    def get_image(self, img_conf):
> +        """Get image using specified image configuration.
> +
> +        @param img_conf: configuration for image
> +        @return_value: cloud_tests.images instance
> +        """
> +        if img_conf['root-store'] == 'ebs':
> +            root_store = 'ssd'
> +        elif img_conf['root-store'] == 'instance-store':
> +            root_store = 'instance'
> +        else:
> +            raise RuntimeError('Unknown root-store type: %s' %
> +                               (img_conf['root-store']))
> +
> +        filters = [
> +            'arch=%s' % c_util.get_architecture(),
> +            'endpoint=https://ec2.%s.amazonaws.com' % self.ec2_region,
> +            'region=%s' % self.ec2_region,
> +            'release=%s' % img_conf['release'],
> +            'root_store=%s' % root_store,
> +            'virt=hvm',
> +        ]
> +
> +        image = self._query_streams(img_conf, filters)
> +
> +        try:
> +            image_ami = image['id']
> +        except KeyError:
> +            raise RuntimeError('No images found for %s!' % img_conf['release'])
> +
> +        image = EC2Image(self, img_conf, image_ami)
> +        return image
> +
> +    @staticmethod
> +    def wait_for_state(resource, goal_state):

can we use an enum or something for 'goal_state'?

perhaps boto has something.  or at least we could just define 'available'
and such:

  from enum import Enum

  class State(Enum):
      AVAILABLE = 'available'

> +        """Wait up to 5 minutes (5 * 60) for a specific state.
> +
> +        @param: resource to check with resource.state
> +        @param: string goal state to be in

http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#waiters

Can we use a waiter and avoid our loop?
https://stackoverflow.com/questions/40969984/how-to-use-boto3-waiters-to-take-snapshot-from-big-rds-instances

I'm not bent on that, but I stumbled upon it when looking for 'state' names.
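
e.g. a sketch using the built-in client waiter (image_ami being the AMI id
to wait on):

  waiter = self.ec2_client.get_waiter('image_available')
  waiter.wait(ImageIds=[image_ami])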

> +        """
> +        attempts = 0
> +
> +        while attempts < 60:
> +            try:
> +                resource.reload()
> +                current_state = resource.state
> +                # Sometimes get {'Code': 0, 'Name': 'pending'}
> +                if isinstance(current_state, dict):
> +                    current_state = current_state['Name']
> +
> +                if current_state == goal_state:
> +                    return
> +            # will occur if instance is not up yet
> +            except botocore.exceptions.ClientError:
> +                pass
> +
> +            time.sleep(5)
> +            attempts += 1
> +
> +        raise util.InTargetExecuteError(

an InTargetExecuteError seems an odd error to raise for
  wait_for_state(my_snapshot, 'available')
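
e.g. (just a sketch) a plain RuntimeError seems more accurate here:

  raise RuntimeError('%s: took too long to get to %s' %
                     (resource, goal_state))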

> +            b'', b'', 1, goal_state, 'wait_for_state',
> +            reason='%s: took too long to get to %s' % (resource, goal_state)
> +        )
> +
> +    def _create_internet_gateway(self, vpc):
> +        """Create Internet Gateway and assign to VPC.
> +
> +        @param vpc: VPC to create internet gateway on
> +        """
> +        internet_gateway = self.ec2_resource.create_internet_gateway()
> +        internet_gateway.attach_to_vpc(VpcId=vpc.vpc_id)
> +        self._tag_resource(internet_gateway)
> +        return internet_gateway
> +
> +    def _create_subnet(self, vpc):
> +        """Generate IPv4 and IPv6 subnets for use.
> +
> +        @param vpc: VPC to create subnets on
> +        """
> +        ipv6_block = vpc.ipv6_cidr_block_association_set[0]['Ipv6CidrBlock']
> +        ipv6_cidr = ipv6_block[:-2] + '64'
> +
> +        subnet = vpc.create_subnet(CidrBlock=self.ipv4_cidr,
> +                                   Ipv6CidrBlock=ipv6_cidr)
> +        modify_subnet = subnet.meta.client.modify_subnet_attribute
> +        modify_subnet(SubnetId=subnet.id,
> +                      MapPublicIpOnLaunch={'Value': True})
> +        self._tag_resource(subnet)
> +
> +    def _get_security_group(self):
> +        """Return default security group from VPC.
> +
> +        For now there should only be one.
> +
> +        @return_value: Return first security group object
> +        """
> +        return list(self.vpc.security_groups.all())[0]
> +
> +    def _get_subnet(self):
> +        """Return first subnet from VPC.
> +
> +        For now there should only be one.
> +
> +        @return_value: Return subnet object
> +        """
> +        return list(self.vpc.subnets.all())[0]
> +
> +    def _setup_vpc(self):
> +        """Setup AWS EC2 VPC or return existing VPC."""
> +        for vpc in self.ec2_resource.vpcs.all():
> +            if not vpc.tags:
> +                continue
> +            for tag in vpc.tags:
> +                if tag['Value'] == self.tag:
> +                    return vpc
> +
> +        vpc = self.ec2_resource.create_vpc(
> +            CidrBlock=self.ipv4_cidr,
> +            AmazonProvidedIpv6CidrBlock=True
> +        )
> +        vpc.wait_until_available()
> +        self._tag_resource(vpc)
> +
> +        internet_gateway = self._create_internet_gateway(vpc)
> +        self._create_subnet(vpc)
> +        self._update_routing_table(vpc, internet_gateway.id)
> +        self._update_security_group(vpc)
> +
> +        return vpc
> +
> +    def _tag_resource(self, resource):
> +        """Tag a resouce with the specified tag.
> +
> +        This makes finding and deleting resources specific to this testing
> +        much easier to find.
> +
> +        @param resource: resource to tag"""
> +        tag = {
> +            'Key': 'Name',
> +            'Value': self.tag
> +        }
> +        resource.create_tags(Tags=[tag])
> +
> +    def _update_routing_table(self, vpc, internet_gateway_id):
> +        """Update default routing table with internet gateway.
> +
> +        This sets up internet access between the VPC via the internet gateway
> +        by configuring routing tables for IPv4 and IPv6.
> +
> +        @param vpc: VPC containing routing table to update
> +        @param internet_gateway_id: gateway ID to use for routing
> +        """
> +        route_table = list(vpc.route_tables.all())[0]
> +        route_table.create_route(DestinationCidrBlock='0.0.0.0/0',
> +                                 GatewayId=internet_gateway_id)
> +        route_table.create_route(DestinationIpv6CidrBlock='::/0',
> +                                 GatewayId=internet_gateway_id)
> +        self._tag_resource(route_table)
> +
> +    def _update_security_group(self, vpc):
> +        """Allow only SSH inbound to default VPC security group.
> +
> +        This revokes the initial accept all permissions and only allows
> +        the SSH inbound.
> +
> +        @param vpc: VPC containing security group to update
> +        """

I kind of think I'd just leave it wide open.  Other platforms are.  I'm not
set on this, but if our image is somehow vulnerable and gets exploited as a
result, then it's probably relevant to *all* ubuntu images (or whatever
image we used as a reference).

Not a big deal here.

> +        ssh = {
> +            'IpProtocol': 'TCP',
> +            'FromPort': 22,
> +            'ToPort': 22,
> +            'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
> +            'Ipv6Ranges': [{'CidrIpv6': '::/0'}]
> +        }
> +
> +        security_group = list(vpc.security_groups.all())[0]
> +        security_group.revoke_ingress(
> +            IpPermissions=security_group.ip_permissions
> +        )
> +        security_group.authorize_ingress(
> +            IpPermissions=[ssh]
> +        )
> +        self._tag_resource(security_group)
> +
> +    def _upload_public_key(self, config):
> +        """Generate random name and upload SSH key with that name.
> +
> +        @param config: platform config
> +        @return: string of ssh key name
> +        """
> +        key_file = os.path.join(config['data_dir'], config['public_key'])
> +        with open(key_file, 'r') as file:
> +            public_key = file.read().strip('\n')
> +
> +        name = '%s-%s' % (self.tag, self.uuid)

So 'tag' is just 'cloud-init-testing' and uuid is just a uuid, so if the
user is looking at a bunch of 'cloud-init-testing' keys, they'll have no
idea which is which (assuming the keys got lost / not cleaned up).

I think some part of the harness generates a name that has a timestamp in
it or something; ie, if it were cloud-init-testing-20171211-120111, then
if I were looking at keys I could at least know the old things are trash.
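
e.g. a sketch using the stdlib datetime module:

  import datetime

  name = '%s-%s' % (
      self.tag, datetime.datetime.utcnow().strftime('%Y%m%d-%H%M%S'))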

> +        self.ec2_client.import_key_pair(KeyName=name,
> +                                        PublicKeyMaterial=public_key)
> +
> +        return name
> +
> +# vi: ts=4 expandtab
> diff --git a/tests/cloud_tests/platforms/ec2/snapshot.py b/tests/cloud_tests/platforms/ec2/snapshot.py
> new file mode 100644
> index 0000000..a51c951
> --- /dev/null
> +++ b/tests/cloud_tests/platforms/ec2/snapshot.py
> @@ -0,0 +1,51 @@
> +# This file is part of cloud-init. See LICENSE file for license information.
> +
> +"""Base EC2 snapshot."""
> +
> +from ..snapshots import Snapshot
> +
> +
> +class EC2Snapshot(Snapshot):
> +    """EC2 image copy backed snapshot."""
> +
> +    platform_name = 'ec2'
> +
> +    def __init__(self, platform, properties, config, features, image_ami):
> +        """Set up snapshot.
> +
> +        @param platform: platform object
> +        @param properties: image properties
> +        @param config: image config
> +        @param features: supported feature flags
> +        @param image_ami: string of image ami ID
> +        """
> +        super(EC2Snapshot, self).__init__(
> +            platform, properties, config, features)
> +
> +        self.image_ami = image_ami
> +
> +    def destroy(self):
> +        """Clean up snapshot data."""
> +        super(EC2Snapshot, self).destroy()
> +
> +    def launch(self, user_data, meta_data=None, block=True, start=True,
> +               use_desc=None):
> +        """Launch instance.
> +
> +        @param user_data: user-data for the instance
> +        @param instance_id: instance-id for the instance

remove 'instance-id'
add 'meta_data' to docstring.

> +        @param block: wait until instance is created
> +        @param start: start instance and wait until fully started
> +        @param use_desc: string of test name
> +        @return_value: an Instance

can we just raise an exception on metadata that is not None?
if meta_data is not None:
    raise ValueError("metadata not supported on Ec2")

> +        """
> +        instance = self.platform.create_instance(
> +            self.properties, self.config, self.features,
> +            self.image_ami, user_data)
> +
> +        if start:
> +            instance.start()
> +
> +        return instance
> +
> +# vi: ts=4 expandtab
> diff --git a/tests/cloud_tests/platforms/instances.py b/tests/cloud_tests/platforms/instances.py
> index 8c59d62..6918003 100644
> --- a/tests/cloud_tests/platforms/instances.py
> +++ b/tests/cloud_tests/platforms/instances.py
> @@ -49,6 +61,58 @@ class Instance(TargetBase):
>          """Clean up instance."""
>          pass
>  
> +    def _ssh(self, command, stdin=None):
> +        """Run a command via SSH."""
> +        client = self._ssh_connect()
> +
> +        cmd = util.shell_pack(command)
> +        try:
> +            fp_in, fp_out, fp_err = client.exec_command(cmd)
> +            channel = fp_in.channel
> +
> +            if stdin is not None:
> +                fp_in.write(stdin)
> +                fp_in.close()
> +
> +            channel.shutdown_write()
> +            rc = channel.recv_exit_status()
> +        except SSHException as e:
> +            raise util.InTargetExecuteError(b'', b'', 1, command, 'ssh',
> +                                            reason=e)
> +
> +        return (fp_out.read(), fp_err.read(), rc)
> +
> +    def _ssh_connect(self):
> +        """Connect via SSH."""
> +        if self._ssh_client:
> +            return self._ssh_client
> +
> +        if not self.ssh_ip or not self.ssh_port:
> +            raise ValueError
> +
> +        client = paramiko.SSHClient()
> +        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
> +        private_key = paramiko.RSAKey.from_private_key_file(self.ssh_key_file)
> +
> +        retries = 30
> +        while retries:
> +            try:
> +                client.connect(username='ubuntu', hostname=self.ssh_ip,
> +                               port=self.ssh_port, pkey=private_key,
> +                               banner_timeout=30)
> +                self._ssh_client = client
> +                return client
> +            except (ConnectionRefusedError, AuthenticationException,
> +                    BadHostKeyException, ConnectionResetError, SSHException,
> +                    OSError) as e:
> +                retries -= 1
> +                time.sleep(10)
> +
> +        ssh_cmd = 'Failed command to: ubuntu@%s:%s after 300 seconds' % (
> +            self.ssh_ip, self.ssh_port
> +        )

- Failed command to
+ Failed ssh connection to

> +        raise util.InTargetExecuteError(b'', b'', 1, ssh_cmd, 'ssh')

Is there a reason to not just raise 'e' here?
ie, the last exception raised in the loop above?

oh... code motion only.  You just moved this around.
Not sure what to do, but as you made _ssh_connect more generic, raising
the specific InTargetExecuteError became less valid.

Can you think of a reason to not raise the exception that got caught?
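
e.g. a sketch that keeps the last exception around and re-raises it:

  last_error = None
  while retries:
      try:
          client.connect(username='ubuntu', hostname=self.ssh_ip,
                         port=self.ssh_port, pkey=private_key,
                         banner_timeout=30)
          self._ssh_client = client
          return client
      except (ConnectionRefusedError, AuthenticationException,
              BadHostKeyException, ConnectionResetError, SSHException,
              OSError) as e:
          last_error = e
          retries -= 1
          time.sleep(10)
  raise last_error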

> +
>      def _wait_for_system(self, wait_for_cloud_init):
>          """Wait until system has fully booted and cloud-init has finished.
>  
> diff --git a/tests/cloud_tests/platforms/platforms.py b/tests/cloud_tests/platforms/platforms.py
> index 2897536..aa88d53 100644
> --- a/tests/cloud_tests/platforms/platforms.py
> +++ b/tests/cloud_tests/platforms/platforms.py
> @@ -24,4 +31,63 @@ class Platform(object):
>          """Clean up platform data."""
>          pass
>  
> +    def _generate_ssh_keys(self, data_dir):
> +        """Generate SSH keys to be used with image."""
> +        filename = os.path.join(data_dir, 'id_rsa')
> +
> +        if os.path.exists(filename):
> +            c_util.del_file(filename)
> +
> +        c_util.subp(['ssh-keygen', '-t', 'rsa', '-b', '4096',
> +                     '-f', filename, '-P', '',
> +                     '-C', 'ubuntu@cloud_test'],
> +                    capture=True)
> +
> +    @staticmethod
> +    def _query_streams(img_conf, img_filter):
> +        """Query streams for latest image given a specific filter.
> +
> +        @param img_conf: configuration for image
> +        @param filters: array of filters as strings format 'key=value'
> +        @return: dictionary with latest image information or empty
> +        """
> +        def policy(content, path):
> +            return s_util.read_signed(content, keyring=img_conf['keyring'])
> +
> +        (url, path) = s_util.path_from_mirror_url(img_conf['mirror_url'], None)
> +        smirror = mirrors.UrlMirrorReader(url, policy=policy)
> +
> +        config = {'max_items': 1, 'filters': filters.get_filters(img_filter)}
> +        tmirror = FilterMirror(config)
> +        tmirror.sync(smirror, path)

Good job here... we really do need to expose a simple "query" interface in
simplestreams, as opposed to the mirror and filter stuff.

> +
> +        return tmirror.json_entries[0]
> +
> +
> +class FilterMirror(mirrors.BasicMirrorWriter):
> +    """Taken from sstream-query to return query result as json array."""
> +
> +    def __init__(self, config=None):
> +        super(FilterMirror, self).__init__(config=config)
> +        if config is None:
> +            config = {}
> +        self.config = config
> +        self.filters = config.get('filters', [])
> +        self.json_entries = []
> +
> +    def load_products(self, path=None, content_id=None):
> +        return {'content_id': content_id, 'products': {}}
> +
> +    def filter_item(self, data, src, target, pedigree):
> +        return filters.filter_item(self.filters, data, src, pedigree)
> +
> +    def insert_item(self, data, src, target, pedigree, contentsource):
> +        # src and target are top level products:1.0
> +        # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]]
> +        # contentsource is a ContentSource if 'path' exists in data or None
> +        data = s_util.products_exdata(src, pedigree)
> +        if 'path' in data:
> +            data.update({'item_url': contentsource.url})
> +        self.json_entries.append(data)
> +
>  # vi: ts=4 expandtab
> diff --git a/tests/cloud_tests/testcases/modules/apt_configure_sources_list.py b/tests/cloud_tests/testcases/modules/apt_configure_sources_list.py
> index 129d226..1c94aee 100644
> --- a/tests/cloud_tests/testcases/modules/apt_configure_sources_list.py
> +++ b/tests/cloud_tests/testcases/modules/apt_configure_sources_list.py
> @@ -9,18 +9,15 @@ class TestAptconfigureSourcesList(base.CloudTestCase):
>  
>      def test_sources_list(self):
>          """Test sources.list includes sources."""
> +        url_archive = r'http:\/\/.*archive\.ubuntu\.com\/ubuntu\/? [a-z].*'
> +        url_security = r'http:\/\/.*security\.ubuntu\.com\/ubuntu\/? [a-z].*'
> +

This isn't great, because in no way do we actually test that the mirror is
the right mirror.

We could change the config to define the mirror; that would override the
auto-selection of ec2, and then we could just assert that the mirror is
what we expect.

This test is pretty bad really.  We get the sources.list file, and then go
through and assert that certain things are in it based on what we put in,
but we don't check that nothing else was added.

again, not your fault.

If we're not going to check that we got the right mirror on the right
platform, then we might as well just declare the mirrors; that way we can
test that we *can* declare them.

it is supposed to look something like this:

apt:
  primary:
    - arches: [default]
      uri: http://us.archive.ubuntu.com/ubuntu
  security:
    - arches: [default]
      uri: http://security.ubuntu.com/ubuntu

>          out = self.get_data_file('sources.list')
> -        self.assertRegex(out, r'deb http:\/\/archive.ubuntu.com\/ubuntu '
> -                         '[a-z].* main restricted')
> -        self.assertRegex(out, r'deb-src http:\/\/archive.ubuntu.com\/ubuntu '
> -                         '[a-z].* main restricted')
> -        self.assertRegex(out, r'deb http:\/\/archive.ubuntu.com\/ubuntu '
> -                         '[a-z].* universe restricted')
> -        self.assertRegex(out, r'deb-src http:\/\/archive.ubuntu.com\/ubuntu '
> -                         '[a-z].* universe restricted')
> -        self.assertRegex(out, r'deb http:\/\/security.ubuntu.com\/ubuntu '
> -                         '[a-z].*security multiverse')
> -        self.assertRegex(out, r'deb-src http:\/\/security.ubuntu.com\/ubuntu '
> -                         '[a-z].*security multiverse')
> +        self.assertRegex(out, r'deb %s main restricted' % url_archive)
> +        self.assertRegex(out, r'deb-src %s main restricted' % url_archive)
> +        self.assertRegex(out, r'deb %s universe restricted' % url_archive)
> +        self.assertRegex(out, r'deb-src %s universe restricted' % url_archive)
> +        self.assertRegex(out, r'deb %ssecurity multiverse' % url_security)
> +        self.assertRegex(out, r'deb-src %ssecurity multiverse' % url_security)
>  
>  # vi: ts=4 expandtab


-- 
https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/335186
Your team cloud-init commiters is requested to review the proposed merge of ~powersj/cloud-init:cii-enable-ec2 into cloud-init:master.

