Re: [nova][ec2] EC2 CreateImage API and nova boot-from-volume
On Mon, 25 Jun 2012, Vishvananda Ishaya wrote:
>
> On Jun 25, 2012, at 9:25 AM, Eoghan Glynn wrote:
>
> >
> > Hi Folks,
> >
> > I've been looking into the (currently broken) EC2 CreateImage API support
> > and just wanted to get a sanity check on the following line of reasoning:
> >
> > - EC2 CreateImage should *only* apply to booted-from-volume nova servers,
> > for fidelity with the EC2 limitation to EBS-based instances (i.e. should
> > raise InvalidParameterValue when the API is applied to an instance-store
> > style server).
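Just to make that restriction concrete, the guard itself would presumably be
something like the sketch below; is_volume_backed() and the exception class
here are stand-ins of my own, not existing nova code:

    # Sketch only: is_volume_backed() and InvalidParameterValue are
    # stand-ins for whatever nova actually provides here.
    class InvalidParameterValue(Exception):
        pass

    def is_volume_backed(instance):
        # treat the instance as boot-from-volume if a block device
        # mapping backs its root device with a volume
        root = instance.get('root_device_name', '/dev/vda')
        return any(bdm.get('device_name') == root and bdm.get('volume_id')
                   for bdm in instance.get('block_device_mapping', []))

    def check_create_image_allowed(instance):
        if not is_volume_backed(instance):
            raise InvalidParameterValue(
                'CreateImage only applies to EBS-backed '
                '(boot-from-volume) instances')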
> >
> > So my previous crack at this https://review.openstack.org/8532
> > was going completely in the wrong direction.
> >
> > - Normally, a snapshot of a bootable-volume is booted via the native API
> > tooling with something like:
> >
> > nova boot --image IMAGE_ID --block-device-mapping vd[a-z]=SNAP_ID:snap::0 ...
> >
> > where AFAICS the IMAGE_ID is *only* used to determine the kernel and
> > ramdisk IDs and is otherwise irrelevant.
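For reference, my reading of that --block-device-mapping value is
dev=id:type:size:delete-on-terminate; a quick sketch of the breakdown
(the dict keys below are just my own labels):

    # Quick breakdown of the legacy dev=id:type:size:delete-on-terminate
    # syntax; the dict keys are my own labels.
    def parse_bdm(arg):
        dev, spec = arg.split('=', 1)
        source_id, source_type, size, delete_on_term = \
            (spec.split(':') + [''] * 4)[:4]
        return {'device': dev,                      # e.g. 'vda'
                'id': source_id,                    # snapshot (or volume) id
                'type': source_type or 'volume',    # 'snap' means snapshot
                'size_gb': int(size) if size else None,  # blank: use source size
                'delete_on_terminate': delete_on_term in ('1', 'True')}

    print(parse_bdm('vda=SNAP_ID:snap::0'))
    # {'device': 'vda', 'id': 'SNAP_ID', 'type': 'snap', 'size_gb': None,
    #  'delete_on_terminate': False}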
> >
> > - The EC2 CreateImage, on the other hand, requires that a usable image ID
> > be returned, not a set of volume snapshot IDs.
> >
> > - The resulting image should be bootable via EC2 RunInstances, but doesn't
> > necessarily need to be portable, as it depends on local snapshot ID(s).
> >
> > Here are a few different potential approaches to the creation of this image:
> >
> > 1. Create a "place-holder" image in glance with the image data being
> > effectively empty, and the following properties set:
> >
> > * the imaged instance's kernel and ramdisk IDs
> > * block device mapping containing the appropriate snapshot ID(s)
> >
> > so that we can boot from this image without providing additional
> > context (such as via the nova boot --block-device-mapping option)
> >
>
> This looks like the best option to me, since we are already storing bdm information in
> images.
This feels to me like how it is implemented in EC2. Essentially, when
you RegisterImage with a snapshot rather than a manifest, you
basically get an image-id that refers to a block-device mapping
referencing a snapshot.
I guess, more indirectly, the same is true of "instance store images".
I.e.:
instance-store ami:
* root-device: S3://some-bucket/some.manifest
* root-device-type: instance-store
* name
* kernel, ramdisk
...
ebs-root ami:
* root-device: snapshot:snap-abcdefg
* root-device-type: ebs
* name
* kernel, ramdisk
...
The run-instance code then just knows that if a block device mapping
references an S3 manifest, it needs to populate an "instance-store" root
disk with the content from the manifest.
Do we use the S3 manifest for anything other than the initial copy to
glance?
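To make option 1 a bit more concrete, I would imagine the registered
place-holder looks roughly like this (the property names are my guess at
what nova/glance would want, not verified):

    # Rough guess at the "place-holder" glance image from option 1; the
    # property names here are illustrative, not checked against nova.
    placeholder_image = {
        'name': 'image-of-my-bfv-server',
        'size': 0,                        # effectively no image data
        'disk_format': 'ami',
        'container_format': 'ami',
        'properties': {
            'kernel_id': 'KERNEL_ID',     # copied from the imaged instance
            'ramdisk_id': 'RAMDISK_ID',
            'root_device_name': '/dev/vda',
            'block_device_mapping': [
                {'device_name': '/dev/vda',
                 'snapshot_id': 'SNAP_ID',
                 'delete_on_termination': False},
            ],
        },
    }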
> > 2. Extend the s3_image mapping logic such that an ami-* style ID can be
> > mapped directly to a set of properties, snapshot IDs, etc. (so there
> > would be no image registration with glance).
>
> Rather not have to keep track of this stuff in the mapping layer, but we may end up
> having to keep more data there eventually.
Or, turn the s3_image mapping into something that fits with the
volume-based root.
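That is, the per-ami entry would carry the root description itself,
something like the sketch below (field names are only for illustration):

    # Sketch of an extended s3_image mapping entry for option 2; field
    # names are illustrative only, not the actual table layout.
    s3_image_entry = {
        'id': 42,                       # the integer behind an ami-* style id
        'image_uuid': None,             # nothing registered in glance
        'kernel_id': 'KERNEL_ID',
        'ramdisk_id': 'RAMDISK_ID',
        'root_device_name': '/dev/vda',
        'snapshot_ids': ['SNAP_ID'],    # volume snapshot(s) backing the root
    }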
> > - How should the lifecycle of the image and the corresponding
> > snapshot(s) be intertwined? (e.g. should a deletion of the snapshot
> > cause the corresponding image to be also deleted or marked
> > unusable?)
>
> That sounds a bit complicated. We may just have to fail when the image is launched.
Just for reference, against EC2:
$ euca-register --snapshot snap-3324e74d --name "smoser.foo"
IMAGE ami-c0c865a9
$ euca-delete-snapshot snap-3324e74d
InvalidSnapshot.InUse: The snapshot snap-3324e74d is currently in use by ami-c0c865a9
So what happens on EC2 (which would also seem reasonably simple to
implement) is that the request to delete the snapshot fails if there is a
registered image that references that snapshot.
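Implementation-wise I would picture the delete path doing roughly the check
below before removing the snapshot (the helper and the property layout are
assumptions on my part):

    # Sketch of a "snapshot in use" guard; the property layout assumed here
    # matches the place-holder image idea above, not existing nova code.
    class SnapshotInUse(Exception):
        pass

    def check_snapshot_not_in_use(images, snapshot_id):
        """Refuse deletion if any registered image references snapshot_id."""
        for image in images:
            bdms = image.get('properties', {}).get('block_device_mapping', [])
            if any(bdm.get('snapshot_id') == snapshot_id for bdm in bdms):
                raise SnapshotInUse('snapshot %s is in use by image %s'
                                    % (snapshot_id, image['id']))

    # called with the full image list before the volume layer deletes the
    # snapshot; if it raises, the delete request fails like EC2's does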
> > - Would a corresponding feature for the native API make sense?
> > i.e. an equivalent of the nova createImage action that does not
> > depend on the qemu-img command, but instead works for booted-from-vol
> > servers. This would avoid the counter-intuitive use of an imageRef
> > that's only used to grab the kernel and ramdisk IDs.
>
> I think this makes sense, but I think we should remove the requirement for imageRef
> regardless.
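For what it's worth, the native flow as I understand the suggestion would
be: snapshot each attached volume, then register an option-1 style
place-holder image. Roughly (the volume_api/image_api calls below are
placeholders, not real signatures):

    # Very rough sketch of a native createImage for a boot-from-volume
    # server; volume_api and image_api here are placeholders, not the
    # actual nova interfaces.
    def create_image_from_bfv_server(instance, volume_api, image_api):
        bdm_props = []
        for bdm in instance['block_device_mapping']:
            snap = volume_api.create_snapshot(bdm['volume_id'])
            bdm_props.append({'device_name': bdm['device_name'],
                              'snapshot_id': snap['id']})
        # register an (empty) image carrying the instance's kernel, ramdisk
        # and the snapshot-backed block device mapping, as in option 1
        return image_api.create(
            name='image of %s' % instance['id'],
            properties={'kernel_id': instance.get('kernel_id'),
                        'ramdisk_id': instance.get('ramdisk_id'),
                        'block_device_mapping': bdm_props})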
>
> This whole cluster of issues is due to the organic nature of aws moving from regular
> images to ebs-based images and gradually adding options. We are essentially having
> to map to a badly designed api, but I think your approach is reasonable.