openstack team mailing list archive

Glance/Nova Snapshot Changes


In developing Nova's instance snapshots, I've run into a snag involving some
design decisions in both Nova and Glance. I have a plan that I'd like to move
forward with ASAP, so please, if you see any problems or have any objections,
raise them now.


OpenStack's API divides calls into a "top-half" which returns quickly
(compute/api.py) and a fire-and-forget "bottom-half" (compute/manager.py)
which is long-running.
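To make the split concrete, here is a rough sketch of the pattern. The names
are hypothetical, not Nova's actual code; a thread stands in for Nova's RPC
cast to the compute worker.

```python
# Illustrative sketch of the "top-half"/"bottom-half" split: the
# top-half validates and returns quickly, while the bottom-half does
# the long-running work in a fire-and-forget fashion.

import threading

def snapshot_bottom_half(instance_id, results):
    # Long-running work (e.g. uploading the snapshot) happens here.
    results[instance_id] = 'uploaded'

def snapshot_top_half(instance_id, results):
    # Returns immediately; the slow work is fired off and forgotten.
    worker = threading.Thread(
        target=snapshot_bottom_half, args=(instance_id, results))
    worker.start()
    return {'instance_id': instance_id, 'status': 'accepted'}, worker

results = {}
response, worker = snapshot_top_half('inst-1', results)
worker.join()  # only so the example finishes deterministically
```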

The problem is that the image-create OpenStack API call (which maps to
instance-snapshot) requires the image metadata (id, name, status, etc.) be
returned up front. The current implementation, however, doesn't create the
image in Glance until the snapshot is actually being uploaded which happens
well after the OpenStack API "top-half" has returned.

(* Aside: To facilitate caching, Glance treats image data as immutable; that
is one reason we wanted to hold off on creating the Image record in Glance
until the data was actually present. *)

Since we cannot change the OpenStack API (yet), we'll need to modify both Nova and
Glance to allow Images to exist *before* their data is available.

Proposed Solution

Here is my suggestion:

  * Glance images are given a status of 'queued', 'saving', 'active', or
    'killed'. This field already exists -- I'm noting the statuses here because
    we're going to start using them to enforce state. (Note: CloudServers has
    a 'preparing' state which is no longer needed in the Nova-Glance model.)

  * We modify Glance to allow the client to omit the image data on
    image-create (aka POST).  When Glance detects this, it creates a record
    for the image in the registry, puts it into the 'queued' state, and then
    returns the image metadata.

  * We modify Glance to allow data to be uploaded in a subsequent PUT request.
    While the data is being uploaded, the image will be in the 'saving' state;
    if the operation completes successfully, it will go to 'active'; otherwise,
    it will go to 'killed'. Note, since we have an immutability constraint on
    images, we should not allow image data to be updated once it exists, so,
    once the image goes 'active', subsequent PUT requests should fail with a
    409 Conflict (or something similar). Also note, this is somewhat at odds
    with RESTful semantics, since a PUT in this case (when image data is
    present in the request body) is no longer idempotent. If we can think of
    a better way, I'm all ears; for now, I think the tradeoff is worth it.
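The state enforcement described above can be sketched like this (hypothetical
code, not Glance's implementation; an exception stands in for the HTTP 409
response):

```python
# PUT-side state machine for the proposal:
# 'queued' -> 'saving' -> 'active' on success, 'killed' on failure,
# and a 409-style rejection once the image is already 'active'.

class Conflict(Exception):
    """Stands in for an HTTP 409 Conflict response."""

def image_put_data(image, data_chunks):
    if image['status'] == 'active':
        raise Conflict('image data is immutable once active')
    image['status'] = 'saving'
    try:
        # Consuming the chunks stands in for streaming the upload.
        image['size'] = sum(len(chunk) for chunk in data_chunks)
    except Exception:
        image['status'] = 'killed'
        raise
    image['status'] = 'active'
    return image

img = {'id': 1, 'status': 'queued'}
image_put_data(img, [b'abc', b'def'])
```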

  * We modify OpenStack API image-create "top-half" to call
    ImageService.create which will POST to Glance without the image data. We
    will then return the image metadata (with the image in a 'queued' state)
    to the caller. The "top-half" will then pass the "image_id" to the
    "bottom-half", which can upload the data into the specified image.
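Putting the Nova-side wiring together, the handoff looks roughly like this.
FakeImageService and the function names are stand-ins for illustration, not
Nova's real interface:

```python
class FakeImageService:
    """Stand-in for the Glance-backed ImageService."""
    def __init__(self):
        self.images = {}
        self._next_id = 1

    def create(self, name):
        # POST to Glance with no image data: record starts 'queued'.
        image = {'id': self._next_id, 'name': name, 'status': 'queued'}
        self.images[image['id']] = image
        self._next_id += 1
        return image

    def upload(self, image_id, data):
        # Later PUT of the snapshot data flips the image to 'active'.
        image = self.images[image_id]
        image['status'] = 'active'
        image['size'] = len(data)

def image_create_top_half(service, name):
    # Returns quickly with 'queued' metadata; image_id is handed onward.
    return service.create(name)

def image_create_bottom_half(service, image_id, snapshot_data):
    # Runs later; uploads the data into the pre-created image.
    service.upload(image_id, snapshot_data)

service = FakeImageService()
meta = image_create_top_half(service, 'instance-42-snapshot')
image_create_bottom_half(service, meta['id'], b'snapshot-bytes')
```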

  * Modify Glance XenServer plugin to accept the image_id as an argument and
    to PUT to that resource.

This is the bare minimum that needs to change for now. Down the road,
we'll need to consider:

  * Creating a monitor task that times out images that linger in the 'queued'
    or 'saving' state for too long, and a task that deletes images in the
    'killed' state.
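Such a cleanup task might look something like this (entirely hypothetical,
including the timeout threshold; it is not part of the proposal's required
changes):

```python
# Expire images stuck in 'queued'/'saving' past a threshold, and
# collect images already marked 'killed' for deletion.

import time

QUEUED_TIMEOUT = 3600  # seconds; an assumed threshold

def reap(images, now=None):
    now = time.time() if now is None else now
    expired, deleted, kept = [], [], []
    for image in images:
        stuck = (image['status'] in ('queued', 'saving')
                 and now - image['updated_at'] > QUEUED_TIMEOUT)
        if stuck:
            image['status'] = 'killed'   # will be deleted on a later pass
            expired.append(image)
        elif image['status'] == 'killed':
            deleted.append(image)
        else:
            kept.append(image)
    return expired, deleted, kept

sample = [
    {'id': 1, 'status': 'queued', 'updated_at': 0},
    {'id': 2, 'status': 'killed', 'updated_at': 0},
    {'id': 3, 'status': 'active', 'updated_at': 0},
]
expired, deleted, kept = reap(sample, now=7200)
```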

Also, Glance now has Python bindings in the form of its 'client.py'. As part
of this effort, I'm planning on modifying ImageService::Glance to use the new
Glance client. This will mean that Nova will require that Glance be installed
on the same machine running nova-api in order to use the Glance service.



