Re: Share Glance between cells or regions
Hi Jay,
We are doing something similar. We have a single glance-registry which is backed by Galera DB replication.
Then we have multiple glance-apis around the place.
Currently they are all backed onto the same Swift, but I'd like to have it so each glance-api can talk to its own Swift; a config sketch of what I mean is below.
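For what it's worth, the per-region part would just be glance-api.conf settings along these lines (host names and credentials here are made up, and exact option names may vary a bit by release):

    [DEFAULT]
    default_store = swift
    # Point this glance-api at its local region's auth and Swift
    swift_store_auth_address = https://keystone.region-a.example.com:5000/v2.0/
    swift_store_user = services:glance
    swift_store_key = SECRET
    swift_store_container = glance
    swift_store_create_container_on_put = True
    # Which Swift region this glance-api writes new images into
    swift_store_region = region-a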
The issue I see is that the location of the image, as stored in the glance-registry, is a Keystone URL.
So yes, you could get a glance-api to store data in a specific Swift region (using swift_store_region), but it has no way of knowing which region to pull an image out of.
I think the location value stored when using Swift needs to be the Swift URL, or else the region needs to be stored in the DB too.
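To illustrate (hypothetical names, and the exact format varies by release), a Swift-backed location in the registry looks roughly like:

    swift+https://services%3Aglance:SECRET@keystone.example.com/v2.0/glance/<image-id>

i.e. it carries the auth endpoint, account and container/object, but nothing that says which Swift region the object actually lives in.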
Have you thought about this? Have a solution?
Cheers,
Sam
On 16/05/2013, at 6:49 AM, Jay Pipes <jaypipes@xxxxxxxxx> wrote:
> On 05/15/2013 02:46 PM, John Paul Walters wrote:
>> Hi,
>>
>> We're looking at setting up a geographically distributed OpenStack installation, and we're considering either cells or regions. We'd like to share a single Glance install between our regions (or cells), so the same images can be spawned anywhere. From here:
>>
>> http://docs.openstack.org/trunk/openstack-ops/content/segregate_cloud.html
>>
>> it's not clear whether that's possible. Can anyone shed some light on this? Is it possible in regions OR cells (or both)? Is there a better solution that I'm not thinking of?
>
> We will be sharing both the Keystone identity (note: not token/catalog)
> and Glance registry databases in a synchronously replicated Galera MySQL
> cluster. Databases like the above, which have extremely low write-to-read
> ratios, are ideal for this kind of replication. We are replicating
> working sets over the WAN using rsync replication in the WSREP
> clustering software.
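>
> For reference, the wsrep side of that comes down to a few lines of MySQL
> config, something like the below (paths and addresses are examples, not
> necessarily what we run):
>
>   wsrep_provider        = /usr/lib/galera/libgalera_smm.so
>   # all replica nodes across the zones
>   wsrep_cluster_address = gcomm://db1.zone-a,db2.zone-b,db3.zone-c
>   wsrep_cluster_name    = glance_keystone_cluster
>   # state snapshot transfer over rsync
>   wsrep_sst_method      = rsync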
>
> What this enables us to do is have a single set of account records and a
> single set of image (base and snapshot) records. Note that we back
> Glance in each zone with a zone-local Swift cluster. The result is that
> a user in zone A can make a snapshot and then, as soon as the snapshot
> goes from the SAVING state to ACTIVE, launch that snapshot in zone B.
> The Glance registry
> database has the location of the snapshot in zone A's Swift cluster and
> when Nova in zone B launches the image, the Glance API server in zone B
> simply pulls the image bits from Swift in zone A.
>
> Best,
> -jay
>
> p.s. I say "will be sharing" because we are currently updating our
> deployment to use this single Glance registry database. Originally we
> went down the route of each zone having its own Glance registry database
> but realized that, since the pattern of write activity to the Glance
> registry is so low, it made sense to replicate it across our zones and
> give the users the ability to launch instances from snapshots in any
> zone. The single identity database is already in use across our zones.