
openstack team mailing list archive

Re: Issues with Packaging and a Proposal

 

2011/8/24 Monty Taylor <mordred@xxxxxxxxxxxx>:
> - Lack of Debian support. One of the largest current deployments of
> OpenStack is Rackspace, and they deploy on Debian. (disclaimer, they
> also pay my salary) This means they are internally maintaining their own
> set of packages, and not interacting with ours at all. That sort of sucks.

This is false. The packages are perfectly capable of building and
running on Debian (barring currently unknown bugs). Whether *the rest
of Debian* is ready for it is a different story. We chose Ubuntu to
begin with not just because we have Ubuntu core devs on the team, but
because Ubuntu was the only distro that met our needs:
 a) A release cycle that lined up with what we wanted.
 b) A fresh set of packages in its current release.
 c) A sane, dependable way to get required changes into development releases.

Debian releases are rare and unpredictable. If you want a fresh set of
libraries on top of which to build your stuff, any Debian release is
out of the question, and I don't think having a moving target (like
unstable or testing) as a deployment target is a very sound idea.
Imagine Cactus had been targeted at testing: we test it, it all works
on release day, and then the next day someone changes something in
testing, while Cactus is already out the door, frozen. That'd suck.

Alternatively, you have to maintain a set of backports, but that's a
major pain (and it only gets worse over time).

> - Lack of collaboration. Related to the above, but slightly different.
> Since the Rackspace internal deployment team have to maintain their own
> packages, it means they are not using the published packages and we are
> not benefiting from their real-world deployment experience.
> Additionally, the Cloudbuilders consulting team doesn't use our
> published packages, and similarly we aren't really benefiting from them
> directly working with us on packaging.

Rackspace isn't doing their own packaging because of (lack of) Debian
support. If that were the reason, they'd have realised almost
immediately that the packages actually build on Debian. They're doing
it because there will supposedly be differences between the behaviour
Ubuntu (or any other distro, for that matter) will want from the
packages and what Rackspace will want. I've decried this divergence
countless times, but to no avail.

As for Cloudbuilders not using the packages either... I don't really
know what to say about that.

> - PPAs are Async. This makes integration testing harder.

PPAs were chosen because
a) they are almost exactly identical to the builders for Ubuntu
proper, so the on-commit-to-trunk builds serve as a way to know
immediately if we introduce a change that will break when uploaded to
Ubuntu proper. So they're the closest we'll get to integration tests
for real Ubuntu builds,
b) we don't have to maintain them, and
c) adding a PPA as a source to an Ubuntu system is incredibly
straightforward and a very well-supported operation.
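
For what it's worth, c) amounts to something like this on an Ubuntu
box (the PPA and package names here are purely illustrative, not
necessarily what we publish):

  $ sudo add-apt-repository ppa:nova-core/trunk  # adds the PPA and imports its signing key
  $ sudo apt-get update
  $ sudo apt-get install nova-compute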

> If it's building them locally, and that's what's being tested, why
> wouldn't that be what's published?

Because of c) above and because PPAs are trustworthy. There's no way
you can sneak extra packages into the chroot or otherwise compromise
its integrity, there's no way you can reach the internet from the
builders (and vice versa), and they're built in a trusted environment
(run by people who are used to managing this exact sort of
infrastructure).

> - Divergent dev process from the rest of OpenStack. The PPB just
> recently voted that all OpenStack projects must use the same set of
> tooling. Now granted, packaging is not a core project, so we could get
> legalistic and claim that we're not required to comply - but that seems
> a little disingenuous - especially since I was a strong proponent of the
> one-set-of-tooling model. Additionally, since we have a strong code
> review/testing system that's integrated in to our build and deployment
> system, it seems very odd not to take advantage of it. (having new
> packages created triggered by both approval of packaging branch changes
> AND new upstream tarballs seems kindof ideal, no?)

The packaging effort is far, far from an OpenStack-only operation.
It's a shared effort with the Ubuntu team. We chose packaging tools
and processes that line up with what is customary in Ubuntu, precisely
to build confidence in the tooling and automation we built; it maps
very, very well onto how things are usually done in Ubuntu.

> - Not integrated with CI. PPA build errors are not reported inside of
> our CI system where all of the rest of our errors are reported.
> Additionally, those build errors are not reproducible by devs locally,
> as they cannot spin up a launchpad PPA builder.

I don't understand.
a) Building a chroot and setting up sbuild is a matter of installing
one package and running one command (see the sketch after this list),
b) further up you argued that building packages ourselves is pretty
much identical to building in a PPA, and further down you say you have
Puppet magic to create a build environment for devs. Those two things
put together sure sound to me like they can get infinitesimally close
to the same sort of environment as you see in Launchpad,
c) it's going to be exactly as difficult/easy to create a build
environment that mimics what you're doing in Jenkins, isn't it?
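
Roughly, with the natty series and a made-up .dsc purely as examples
(the first run of mk-sbuild walks you through a bit of one-time setup):

  $ sudo apt-get install ubuntu-dev-tools sbuild  # mk-sbuild comes from ubuntu-dev-tools
  $ mk-sbuild natty                               # create a natty build chroot
  $ sbuild -d natty some-package_1.0-1.dsc        # build a source package inside it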

> - Lack of RPM support. Turns out RPM distros are still kind of popular,
> and at least one of our major contributors (hi, NTT) rolls out on
> RHEL/CentOS, and one of our partners who is going to be contributing
> testing hardware (MSFT/Novell) is associated with a distro (SuSE) that
> is RPM based. We're a few steps away from this, but it warrants mentioning.

This probably warrants a separate discussion. As pointed out further
up, Ubuntu was chosen in part because it's very up-to-date. With a
release every 6 months, we're never far behind in terms of access to
dependencies in the most recent release, and since our release cycles
line up, we can *ensure* availability of the dependencies we require
in the upcoming release.  If
RHEL/CentOS/ScientificLinux/WhatEverOtherFlavourOfRedHatYouPrefer
becomes a supported target, will this block stuff from landing in Nova
until RHEL catches up with the rest of the world in terms of libraries
upon which we depend?

> - Murky backported deps. Some of the backported deps in the PPAs came
> from a dir on my laptop. Some are from Theirry's laptop. Some are from
> Soren's laptop. There is neither documentation on how they got there,
> how we decided to put them there, how we built them or how to continue
> to collaborate on them.

Sorry, what? They're all built in PPAs. That's about as clean-room as
it gets. Everyone is free to go and inspect them. The diff between the
version a backport is based on and what got uploaded is trivially easy
to extract (see the example below). Documenting how it's done sounds
pointless. Anyone who's familiar with Ubuntu/Debian packaging will
find no surprises. If people want to look at OpenStack code, it's not
our job to document what Python is all about. If people want to see
how I backported something, it's not my job to teach them packaging
first.
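
For example, with devscripts installed (the package name and URLs
below are made up):

  # grab the base version and the backport (dget and debdiff come from devscripts)
  $ dget http://archive.ubuntu.com/ubuntu/pool/main/f/foo/foo_1.2-3.dsc
  $ dget http://ppa.launchpad.net/OWNER/PPA/ubuntu/pool/main/f/foo/foo_1.2-3~ppa1.dsc
  # and diff them
  $ debdiff foo_1.2-3.dsc foo_1.2-3~ppa1.dsc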

> - Direct injection to Ubuntu not via Debian. The publically stated
> best-practice for things entering Ubuntu is to be added to/uploaded to
> Debian and then synced, unless there is some compelling reason not to do
> this. In this case, we had Ubuntu devs and no Debian devs in the core,
> so we keep improving packages and adding packages to Ubuntu but are not
> doing the same to Debian.

For the umpteenth time, this is not about OpenStack. It's about the
dependencies. Getting OpenStack itself into Debian was never the hard
part. Getting all the dependencies into Debian was. Not just libraries
that weren't in Debian at all (like gflags, which I got uploaded to
Debian and synced into Ubuntu afterwards), but existing libraries in
Debian that we needed either a newer version of or a patch applied to.
In Ubuntu we have the power to upload these newer versions or patches
or whatever, thereby implicitly accepting responsibility if they
should happen to cause problems. We have no way to even begin to
ensure that we can do this in Debian. Debian packages have
maintainers. The maintainers decide whether to accept a patch or not.
If they're not comfortable with a patch, rejecting it is entirely
their prerogative (and for good reason! They're the ones who
ultimately have to support it). In every Ubuntu release since we
started work on OpenStack, I've added at least one patch to libvirt to
support something we needed for OpenStack. I've submitted them
upstream (to libvirt) as well, but we can't stop everything while
waiting for libvirt to make a release, the Debian maintainer to pick
it up, and for Ubuntu to sync it.

Debian support is a massive undertaking and I'm thrilled to see it
happen, but I don't believe Debian is suitable as a reference platform
(if that means that failure there can block anything at all).

> - PPA Fragmentation. The PPB voted that we are one project, yet we have
> a separate PPA for each project.

For trunk builds, yes. Two reasons:
a) Avoid cascading failures: Nova depends on Glance, which depends on
Swift. It would be a shame if a bug in Swift brought everything else
to a screeching halt.
b) If someone wants to help test the bleeding edge of Swift, but isn't
really interested in whatever breakage we introduce in Nova, having
separate PPAs makes it straightforward to just test one of them.

For final releases, we put everything in the same PPA. I've written
tools to keep track of divergence between the PPAs and to bring them
in sync, so the final consolidation doesn't hold any surprises.
Putting everything in the same PPA is a trivial change, though.
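
Those tools aren't reproduced here, but there's no deep magic
involved. Even something as crude as the following (the PPA names and
the natty series are purely illustrative) lists the published source
versions from two PPAs side by side for comparison:

  $ for ppa in nova-core/trunk swift-core/trunk; do
  >   wget -qO- "http://ppa.launchpad.net/$ppa/ubuntu/dists/natty/main/source/Sources.gz" |
  >     zcat | awk -v p="$ppa" '/^Package:/ {pkg=$2} /^Version:/ {print p, pkg, $2}'
  > done | sort -k2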

-- 
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/

