Starting a working buildbot juju cluster
I got it working, with some hiccups. This documents what I did in case
someone comes along to try it themselves.
1) I use an 8-core setup on EC2. In my ~/.juju/environments.yaml I
have "default: big-ec2", and big-ec2 looks something like this:
big-ec2:
  type: ec2
  control-bucket: juju-[UUID goes here]
  admin-secret: [secret goes here]
  access-key: [key goes here]
  secret-key: [secret key goes here]
  default-series: precise
  juju-origin: ppa
  default-instance-type: m2.4xlarge
  default-image-id: [64-bit EBS image id from http://uec-images.ubuntu.com/releases/precise/ goes here]
I personally use "python -c 'import uuid; print uuid.uuid4()'" to
generate those uuids, fwiw.
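For completeness, the rest of step 1 was the usual bootstrap-and-deploy
dance. The sketch below is from memory; the slave charm name and the
repository path are assumptions, so adjust them for your checkout:

# sketch only: charm names and paths are assumptions, not gospel
juju bootstrap
juju deploy --repository=. local:buildbot-master
juju deploy --repository=. local:buildbot-slave
juju add-relation buildbot-master buildbot-slave
juju expose buildbot-master
juju status   # wait here until the units report "started"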
2) The image I had, and the apt sources it came configured with, only
had lxc up to release 45; we need 47 or higher. It turns out I had used
a beta 1 image; maybe if I had used a beta 2 image
(http://uec-images.ubuntu.com/releases/precise/beta-2/) this would
already have been fixed. I manually changed my apt sources to the ones
I use on my own machine (the official Ubuntu sources) rather than the
EC2 mirror, and then did an update/upgrade. This gave me lxc release
48. I did this before setuplxc had a chance to create an lxc container,
so the slave then started up fine.
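Concretely, the change looked something like this (a sketch; the exact
EC2 mirror hostname in the image's sources.list varies by region):

# sketch: point sources.list at the main archive instead of the regional EC2 mirror
sudo sed -i 's|[a-z0-9-]*\.ec2\.archive\.ubuntu\.com|archive.ubuntu.com|g' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install lxc
apt-cache policy lxc   # should now show release 48 or later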
3) "juju expose buildbot-master" didn't work for some reason for me. It
had before. It said it performed the right thing, but then I couldn't
see the web page on 8010. I ended up manually making a change in the
AWS console to the appropriate security group. I didn't know if this
was maybe because of some idiosyncracy of what I had done (the master
had an earlier problem in lpbuildbot--a SyntaxError in the
master.cfg--that I fixed and I'm not mentioning it here because it
shouldn't affect the next person). If the broken expose happens again,
we should investigate.
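For anyone who hits the same thing: the manual console change amounts
to opening TCP port 8010 in the security group juju created for the
master's machine. The equivalent with the ec2-api-tools would be
roughly the following; "juju-big-ec2-0" is only a guess at the group
name, so check the console for the real one:

# sketch: open the buildbot web port by hand; group name is a guess
ec2-authorize juju-big-ec2-0 -P tcp -p 8010 -s 0.0.0.0/0
curl -I http://[master hostname goes here]:8010/   # check the waterfall answers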
Tests are running now (with --shuffle). I'll report back with the
results when I have them (in an hour or so, hopefully!).
Gary