
Re: CF-Sprint Plans and Directions

 

The first email gave some additional information, but the gist is that this
is what we are working on now :) It's a new part, as we didn't previously
have a generator. The change means more work today and, we hope, (much) less
work later.


On Tue, Jun 17, 2014 at 4:39 PM, Antonio Rosales <
antonio.rosales@xxxxxxxxxxxxx> wrote:

> Thanks for providing this info and keeping us updated on the progress.
> One question in-line below.
>
> On Tue, Jun 17, 2014 at 5:23 PM, Benjamin Saller
> <benjamin.saller@xxxxxxxxxxxxx> wrote:
> > Some additional info on the plan to generate charms from metadata in the
> > bundle. What follows is pseudo-code that will capture an example service
> > (akin to the existing hooks/config.py service block) and a new structure
> > that attempts to model a series of releases and the transitions between
> > them.
>
> What is the mechanism to parse the below service block example into a
> Juju bundle deployment?
>
> -thanks,
> Antonio
>
> >
> > Example Service Block (pseudo-code)
> >
> > cloud_controller_v1 = Service({
> >     'jobs': [
> >         Job({
> >             'name': 'optional',
> >             'service': 'cf-cloudcontroller-ng',
> >             'provided_data': [contexts.CloudController],
> >             'required_data': [contexts.NatsRelation,
> >                               contexts.RouterRelation,
> >                               contexts.MysqlRelation,
> >                               contexts.BundleConfig,
> >                               ],
> >             'data_ready': [job_templates, db_migrate],
> >         }),
> >     ],
> > })
> >
> > This is the higher-level list of releases: a series of release ranges,
> > each of which shares a particular topology. We use this to generate
> > charms for any given release within a range. If we are migrating to a
> > newer release (by setting the target-revision config setting of the CF
> > brain), we will run any upgrade hooks (the ones below are just
> > examples). These hooks handle topology changes, leader-elected
> > migrations, and so on. A rough sketch of how the lookup could work
> > follows the structure below.
> >
> >
> > RELEASES = [
> >     {
> >         "releases": (171, 172),
> >         "topology": {
> >             "services": [cloud_controller_v1, cc_clock_v1,
> >                          "cs:trusty/mysql"],
> >             "expose": [router_v1],
> >             "relations": [
> >                 ((cloud_controller_v1, 'nats'), (nats_v1, 'nats')),
> >                 ((cloud_controller_v1, 'clock'), (cc_clock_v1, 'clock')),
> >             ],
> >         },
> >         "upgrades": [
> >             leader_elected_cc_migration,
> >             leader_elected_uaa_migration,
> >             deploy_cc_clock,
> >         ],
> >     }
> > ]
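> >
> > To make that a bit more concrete, here is a rough, hypothetical sketch
> > of how a target release could be resolved against this structure; the
> > function names and the inclusive-range check are assumptions for
> > illustration, not the real code:
> >
> > def find_release_block(releases, target):
> >     # Return the block whose inclusive (low, high) range contains the
> >     # target cf-release number; that block's topology is what we
> >     # generate charms for.
> >     for block in releases:
> >         low, high = block["releases"]
> >         if low <= target <= high:
> >             return block
> >     raise KeyError("no topology defined for release %s" % target)
> >
> > def plan_upgrade(releases, current, target):
> >     # When the target revision set on the CF brain falls in a newer
> >     # range than the one currently deployed, collect the upgrade hooks
> >     # of every range boundary we cross.
> >     steps = []
> >     for block in releases:
> >         low, _high = block["releases"]
> >         if current < low <= target:
> >             steps.extend(block["upgrades"])
> >     return steps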
> >
> > More updates to follow.
> > -Ben
> >
> >
> > On Mon, Jun 16, 2014 at 5:10 PM, Benjamin Saller
> > <benjamin.saller@xxxxxxxxxxxxx> wrote:
> >>
> >> Hi All,
> >>
> >> [TL;DR we are going to start generating the charms from metadata in the
> >> bundle]
> >>
> >> We learned some interesting things during the first day of the sprint
> >> here. The most salient point is that the volatility of the Cloud
> >> Foundry topology is higher and more constant than we originally knew.
> >> Going forward, they intend to add/remove services from the runtime in
> >> ways that don't encourage encapsulation into our existing charms. An
> >> example is the recently added -clock services, which are singletons
> >> associated with services like cloud-controller and which manage a
> >> cron-like service. Upgrading from one version of Cloud Foundry to
> >> another will therefore quite often involve mutating not just the charms
> >> and their runtime code, but the topology itself.
> >>
> >> We've been in the process of converting the existing charms to be
> >> driven by what we call 'service configuration blocks': semi-declarative
> >> blocks that, by convention, live in hooks/config.py. All of each
> >> charm's important behaviour lives in these blocks, with the exception
> >> of the install hook, which we're currently redesigning.
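> >>
> >> For reference, here is a minimal sketch of what such a block can look
> >> like, written against charmhelpers' services framework; the service
> >> name, context class, and callback below are placeholders rather than
> >> the actual charm code:
> >>
> >> # hooks/config.py (sketch)
> >> from charmhelpers.core.services.base import ServiceManager
> >>
> >> class BundleConfig(dict):
> >>     # Placeholder context: a real context gathers relation/config data
> >>     # and is truthy only once everything it needs is available.
> >>     def __init__(self):
> >>         super(BundleConfig, self).__init__(nats_address='127.0.0.1')
> >>
> >> def write_job_templates(service_name):
> >>     # Runs once all required contexts report ready.
> >>     print('rendering job templates for %s' % service_name)
> >>
> >> def manage():
> >>     ServiceManager([{
> >>         'service': 'cf-nats',                 # system service to manage
> >>         'required_data': [BundleConfig()],    # gate on ready contexts
> >>         'data_ready': [write_job_templates],  # run when data is ready
> >>     }]).manage()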
> >>
> >> The proposal for dealing with the additional volatility is composed of
> >> a few parts:
> >>
> >> - Roll charm metadata up into the bundle for central management
> >>     - Charm Definitions
> >>         - charm metadata
> >>         - charm service block
> >> - Add information to the bundle mapping a range of CF releases (e.g.
> >>   153-172) to:
> >>     - CF release jobs (templates/spec/monit files)
> >>     - the list of Charm Definitions mentioned above
> >>     - a topology/bundle used for all release versions in the range
> >>     - a migration from the previous topology range
> >>       (orchestrated adds/removes of new services, for example)
> >> - The bundle itself is a charm, which we deploy into the environment
> >>     - It has a service config setting selecting which cf-release to run
> >>     - It generates a local (inside the environment) charm repo of
> >>       charms from the above definitions (they are all data-driven
> >>       boilerplate at this point); a rough sketch of this generation
> >>       step follows below
> >>     - If an existing version is/was deployed, the bundle's charm
> >>       instance arranges whatever migration scripts are needed (often
> >>       via juju run, but also as (de)provisioning decisions executed in
> >>       the running bundle)
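> >>
> >> To give a feel for the generation step referenced above, here is a
> >> rough sketch of how the bundle charm might write one boilerplate charm
> >> into a local repo; the definition fields and the hook shim are
> >> assumptions, not the final design:
> >>
> >> import os
> >> import yaml
> >>
> >> def generate_charm(definition, repo_dir, series='trusty'):
> >>     # Write a minimal charm directory (metadata.yaml plus thin hook
> >>     # shims) so it can be deployed from the local repository, e.g.
> >>     #   juju deploy --repository <repo_dir> local:trusty/<name>
> >>     charm_dir = os.path.join(repo_dir, series, definition['name'])
> >>     hooks_dir = os.path.join(charm_dir, 'hooks')
> >>     if not os.path.isdir(hooks_dir):
> >>         os.makedirs(hooks_dir)
> >>     with open(os.path.join(charm_dir, 'metadata.yaml'), 'w') as fp:
> >>         yaml.safe_dump({
> >>             'name': definition['name'],
> >>             'summary': definition.get('summary', ''),
> >>             'description': definition.get('description', ''),
> >>             'provides': definition.get('provides', {}),
> >>             'requires': definition.get('requires', {}),
> >>         }, fp, default_flow_style=False)
> >>     # Every generated hook is the same shim that hands control to the
> >>     # shared service block in config.py.
> >>     for hook in ('install', 'config-changed', 'start', 'stop'):
> >>         path = os.path.join(hooks_dir, hook)
> >>         with open(path, 'w') as fp:
> >>             fp.write('#!/usr/bin/env python\n'
> >>                      'import config\n'
> >>                      'config.manage()\n')
> >>         os.chmod(path, 0o755)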
> >>
> >> This should solve a slew of issues:
> >>
> >>   - Generate rather than write boilerplate
> >>   - A place to include topology-altering code (a small sketch of a
> >>     topology diff follows below)
> >>   - The ability to generate charms with compiled assets, or with
> >>     direct asset acquisition from a central mirror
> >>   - A way to leverage those parts of our charm-level orchestration
> >>     which persist from topology range to range
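> >>
> >> On the topology-altering point, here is a small hypothetical sketch of
> >> how a migration between two ranges could compute which services to add
> >> or remove; it uses plain service names for simplicity, and everything
> >> else is assumed:
> >>
> >> def plan_topology_change(old_topology, new_topology):
> >>     # Compare the service sets of two release ranges and report which
> >>     # services the running bundle should deploy and which to destroy.
> >>     old = set(old_topology['services'])
> >>     new = set(new_topology['services'])
> >>     return {'add': new - old, 'remove': old - new}
> >>
> >> # e.g. moving into the range that introduced the clock singleton:
> >> print(plan_topology_change(
> >>     {'services': ['cloud-controller', 'nats', 'router']},
> >>     {'services': ['cloud-controller', 'cc-clock', 'nats', 'router']}))
> >> # -> {'add': {'cc-clock'}, 'remove': set()}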
> >>
> >> I hope to add more follow-ups to this as the sprint progresses, but
> >> it's already looking pretty good.
> >>
> >> Thanks,
> >> -Ben
> >
> >
> >
> >
>
