
launchpad-dev team mailing list archive

Re: Immediate plan for Build Farm generic jobs

 

Julian Edwards wrote:
> Michael Hudson wrote:
>> Er.  Hm.  I guess I don't know enough to be certain about this, but I
>> think when we build a source package we're always going to want to then
>> build it for an archive?  I would guess that you can handle
>> republication by using the 'copy package' feature that exists already?
> 
> Possibly.  I am just thinking of the existing workflow where it's
> possible to re-upload the same package to lots of places.

I guess at the worst you'll be able to download the source package (and
resign it?) and then reupload it...

>  I'd like to
> keep that behaviour if we can, in case "copy package" fails (it's been
> known :( )

This sounds like one of those things that isn't required for the first
cut, but that we should understand a bit more so that we don't set
ourselves up for trouble down the track.  Can you explain the use cases
you have in mind?  Is it worth designing the schema so that you can
associate a buildrecipejob with multiple archives, or would that be YAGNI?
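For concreteness, the multiple-archives option would just mean a join
table instead of a single archive foreign key on the job.  A minimal
sketch in SQLite (all table and column names here are illustrative,
not Launchpad's actual schema):

```python
import sqlite3

# Hypothetical schema sketch: a join table lets one recipe build job
# target several archives.  A plain FK column on buildrecipejob would
# be the YAGNI alternative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE buildrecipejob (id INTEGER PRIMARY KEY, recipe TEXT);
    CREATE TABLE archive (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE buildrecipejob_archive (
        job INTEGER REFERENCES buildrecipejob(id),
        archive INTEGER REFERENCES archive(id),
        PRIMARY KEY (job, archive)
    );
""")
conn.execute("INSERT INTO buildrecipejob VALUES (1, 'my-recipe')")
conn.executemany("INSERT INTO archive VALUES (?, ?)",
                 [(1, 'primary'), (2, 'my-ppa')])
conn.executemany("INSERT INTO buildrecipejob_archive VALUES (1, ?)",
                 [(1,), (2,)])

# One job, two destination archives:
rows = conn.execute("""
    SELECT archive.name FROM buildrecipejob_archive
    JOIN archive ON archive.id = buildrecipejob_archive.archive
    WHERE buildrecipejob_archive.job = 1 ORDER BY archive.name
""").fetchall()
print(rows)  # [('my-ppa',), ('primary',)]
```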

>> I'm not sure what the difference would be, indeed.  What is the
>> conceptual difference between a Build and a BuildPackageJob?  I was
> 
> The latter is more akin to a BuildQueue, i.e. it encapsulates a
> *request* to build.
> 
> Build stores information about the build itself, like the log, the
> sourcepackagerelease, the distroarchseries, etc.

I thought that Job modelled the job itself including the result of it,
not just the request.  But clearly it doesn't have to...

>> under the impression that there was still a split mostly to avoid
>> rewriting all of Soyuz -- if not, I'd love to know (and try to remember)
>> what it is.
> 
> Correct reasoning, wrong tables :)  I didn't want to remove BuildQueue.

Ah.

>> Cool. 'exactly the same thing' even goes as far as handing them to the
>> process-upload.py script?
> 
> Yep, it just dumps the files in the incoming directory and tells the
> upload processor to deal with it.
> 
> The only minor difference is that we can't do that asynchronously
> without some code changes to process-upload, which depends on the
> changes file being signed to report problems.  We need a way of
> identifying the person who "uploaded" it (ie. requested the recipe build).

Ah right.

>>> One thing I need to change though is to stop this use of Popen since it
>>> blocks everything else on the buildd-manager.  There's a spec for this
>>> at
>>> https://blueprints.edge.launchpad.net/soyuz/+spec/buildd-manager-upload-decoupling
>> If I read the above right, this isn't actually strictly speaking
>> required to have build from recipe working?
>>
>> I can certainly see how it would be a good idea though.
> 
> It's not initially required, no.  However, it's a major scaling blocker
> as it prevents anything else happening while we wait for the upload to
> get processed - if it's a big package this can be 30-60 seconds.  A
> quick intermediate fix would be to replace the use of Popen with a hook
> into the Python module itself.  That avoids re-running initZopeless and
> friends on every script invocation, which takes 5 seconds or so, and
> would be a significant improvement for small packages that only take a
> few seconds to process.

Makes sense.  I just find it helpful to have "absolutely required to
work at all" and "required to have even slightly acceptable performance"
separated in my head.
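The intermediate fix amounts to trading a per-upload subprocess
(interpreter plus initZopeless startup) for an in-process function
call.  A toy illustration of why that matters --
process_upload_inproc is hypothetical, standing in for the real
process-upload entry point:

```python
import subprocess
import sys
import time

def process_upload_inproc(changes_path):
    # Hypothetical in-process hook, standing in for calling the
    # process-upload logic as an ordinary Python function.
    return "processed %s" % changes_path

# Current approach (sketch): spawn a fresh interpreter per upload,
# paying startup cost every time.  In Launchpad's case initZopeless
# alone is ~5 seconds; even a bare interpreter shows the effect.
t0 = time.time()
subprocess.run([sys.executable, "-c", "pass"], check=True)
spawn_cost = time.time() - t0

# Proposed fix (sketch): call into the already-initialised module.
t0 = time.time()
result = process_upload_inproc("incoming/foo_1.0_source.changes")
call_cost = time.time() - t0

print(spawn_cost > call_cost)  # startup cost dominates small uploads
```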

>> I think we want to key off recipe, not source package here.  But it
>> sounds easy enough (select job.date_finished - job.date_started from
>> buildsourcepackagejob, job where buildsourcepackagejob.job = job.id and
>> job.status = completed and buildsourcepackagejob.recipe = $RECIPE order
>> by job.date_started desc limit 1 or similar).
> 
> Yep, now I understand that there's more than one recipe for each package
> that makes perfect sense.

Good!
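Translating that per-recipe duration sketch into something runnable,
against a toy SQLite schema (the real Launchpad tables and a proper
status enum are assumed; names are only meant to mirror the sketch):

```python
import sqlite3

# Toy version of the "most recent completed build duration, keyed on
# recipe" query.  Not Launchpad's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE job (id INTEGER PRIMARY KEY, status TEXT,
                      date_started REAL, date_finished REAL);
    CREATE TABLE buildsourcepackagejob (
        job INTEGER REFERENCES job(id), recipe TEXT);
""")
conn.executemany("INSERT INTO job VALUES (?, ?, ?, ?)", [
    (1, 'completed', 100.0, 160.0),   # older build: 60s
    (2, 'completed', 200.0, 245.0),   # most recent completed: 45s
    (3, 'failed',    300.0, 310.0),   # ignored: not completed
])
conn.executemany("INSERT INTO buildsourcepackagejob VALUES (?, ?)",
                 [(1, 'my-recipe'), (2, 'my-recipe'), (3, 'my-recipe')])

(duration,) = conn.execute("""
    SELECT job.date_finished - job.date_started
    FROM buildsourcepackagejob, job
    WHERE buildsourcepackagejob.job = job.id
      AND job.status = 'completed'
      AND buildsourcepackagejob.recipe = 'my-recipe'
    ORDER BY job.date_started DESC LIMIT 1
""").fetchone()
print(duration)  # 45.0 -- duration of the most recent completed build
```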

>> I think that probably makes sense.  Don't know where to do the signing
>> though -- maybe the buildd-master could do it?
> 
> It can't, it would need to happen on the builders as it's a dsc file
> signature, which in turn affects the changes file.

Ergh.  Maybe it's overly paranoid, but if it has to happen on the slave,
there's not much meaning to signing it with any system-wide key -- I
guess you could sign it with the destination archive key though.

> If it turns out to be too hard, we can live without it though.

Good to know :)

Cheers,
mwh


