Build system meeting
Participants
16:19 < jg> OK, I'm Jim Gettys, V.P. of Software, OLPC... infamous for X11, HTTP, handhelds.org
16:19 < gregdek> Greg DeKoenigsberg, community development, Red Hat.
16:19 < spevack> Max Spevack, Fedora Project
16:19 < m_stone> I'm Michael Stone. I do security stuff and work on all the infrastructure (build-system, testing, ...) required to make that go. (for OLPC)
16:20 < c_scott> I'm C. Scott Ananian, I fix things that are broken. (for OLPC)
16:20 < f13> Jesse Keating - Fedora Project Release Engineer and general Get Stuff Done kind of guy.
16:20 < notting> Bill Nottingham, Red Hat/Fedora generalist
16:21 < dgilmore> I'm Dennis Gilmore, I'm one of the community members of the Fedora Project who help maintain Fedora's build system
16:21 < _bernie> I'm Bernardo Innocenti, full time volunteer developer at OLPC
16:22 < mbonnet> Mike Bonnet, Red Hat release engineer and one of the developers/maintainers of the Koji build system used by Fedora
Agenda
How Fedora Works
(by Jesse Keating: f13)
The basic rundown is this. We have a public CVS tree. People gain write access to this tree by creating accounts in our Fedora Account System. We have a basic "sponsor" system by which somebody at "sponsor" level agrees to become responsible for you as a member. Our CVS system has ACLs on it such that a maintainer can grant write access to all authenticated users, to individual users, or to no users but themselves. Anybody who is sponsored in the account system has rights to build packages, regardless of commit access. We have a branching system where we branch the source control for each release, and now for subprojects like OLPC as needed, and the ACL system is fine-grained enough that there can be different rights on different branches.
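To make that ACL model concrete, here is a minimal Python sketch of per-branch commit policy as f13 describes it (any authenticated user, listed users only, or the maintainer alone). The package, branch, and user names are invented for illustration; this is not how Fedora's CVS ACLs are actually implemented.

    # Hypothetical sketch of the per-branch ACL model described above.
    # Branch names, users, and policy values are assumptions for
    # illustration, not Fedora's real ACL mechanism.

    OPEN, RESTRICTED, CLOSED = "open", "restricted", "closed"

    # Per-package, per-branch policy: "open" lets any authenticated user
    # commit, "restricted" allows only listed users, "closed" allows only
    # the maintainer.
    acls = {
        ("kernel", "devel"):  {"policy": RESTRICTED, "users": {"davej"}},
        ("kernel", "OLPC-2"): {"policy": OPEN,       "users": set()},
        ("kernel", "F-7"):    {"policy": CLOSED,     "users": set()},
    }

    maintainers = {"kernel": "davej"}

    def may_commit(user: str, package: str, branch: str) -> bool:
        entry = acls.get((package, branch))
        if entry is None:
            return False
        if user == maintainers.get(package):
            return True  # the maintainer can always commit
        if entry["policy"] == OPEN:
            return True  # any authenticated (sponsored) user
        if entry["policy"] == RESTRICTED:
            return user in entry["users"]
        return False     # CLOSED: nobody but the maintainer

    if __name__ == "__main__":
        print(may_commit("alice", "kernel", "OLPC-2"))  # True: branch is open
        print(may_commit("alice", "kernel", "F-7"))     # False: maintainer only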
The buildsystem itself could be a longer discussion, but the basic rundown is that it uses a database to "tag" builds of RPMs into collections (buckets, if you will). There is a concept of inheritance, so that you can easily bootstrap a new bucket without rebuilding everything.
mbonnet: The salient points of the build system are building directly from CVS tags, pristine buildroots for every build, tracking of build dependencies and buildroot contents, and tracking of build output
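A toy model of the "bucket" idea may help: resolving a package's build in a tag means walking up the inheritance chain until some ancestor tags a build of that package, which is why a new bucket can be bootstrapped without rebuilding everything. The tag and build names below are made up, and Koji's real schema (multiple parents, priorities, events) is considerably richer.

    # Toy model of Koji-style tag inheritance ("buckets"), as described
    # above. Tag and build names are invented; the real Koji database
    # tracks far more than this.

    tags = {
        # child tag -> (parent tag, {package: build})
        "dist-fc7":   (None,       {"glibc": "glibc-2.5-12", "bash": "bash-3.2-4"}),
        "dist-olpc2": ("dist-fc7", {"kernel": "kernel-2.6.22-olpc1"}),
        # Bootstrapping a new bucket is cheap: inherit everything,
        # override only what differs.
        "dist-olpc2-testing": ("dist-olpc2", {"bash": "bash-3.2-5.olpc"}),
    }

    def latest_build(tag, package):
        """Resolve a package's build by walking up the inheritance chain."""
        while tag is not None:
            parent, builds = tags[tag]
            if package in builds:
                return builds[package]
            tag = parent
        return None

    if __name__ == "__main__":
        print(latest_build("dist-olpc2-testing", "bash"))    # override in the child
        print(latest_build("dist-olpc2-testing", "kernel"))  # inherited from dist-olpc2
        print(latest_build("dist-olpc2-testing", "glibc"))   # inherited from dist-fc7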
f13: The buildsystem uses the contents of these buckets (and the buckets they inherit from) to populate "buildroots" in which to build packages. For OLPC we are using a convenient side effect of the buildsystem: the package repository the buildsystem creates to populate buildroots is also suitable as a package repository from which OLPC can create system images. These repos are created on demand, whenever an action happens that would lead to a change in the package set visible in the bucket. (However, if a repo creation is already in progress, another one will be delayed until the first finishes; only one at a time will run.) That's probably a good overview, and I could continue on to the pain points if nobody objects.
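A minimal sketch of that "only one repo creation at a time" policy, assuming a simple lock plus a pending flag: requests that arrive while a regeneration is running are coalesced into a single follow-up run. This is an illustration, not Koji's actual code.

    # Minimal sketch of the repo-regeneration policy described above:
    # whenever the visible package set of a bucket changes, a rebuild is
    # requested, but only one runs at a time; requests arriving mid-run
    # are coalesced into one follow-up rebuild. Illustrative only.

    import threading

    class RepoRegenerator:
        def __init__(self, tag):
            self.tag = tag
            self._lock = threading.Lock()
            self._running = False
            self._pending = False

        def request_regen(self):
            with self._lock:
                if self._running:
                    self._pending = True  # delay until the current run finishes
                    return
                self._running = True
            self._run()

        def _run(self):
            while True:
                self._create_repo()
                with self._lock:
                    if not self._pending:
                        self._running = False
                        return
                    self._pending = False  # one follow-up covers all queued requests

        def _create_repo(self):
            # Placeholder for the real work (e.g. running createrepo over
            # the tag's package set).
            print("regenerating repo for %s" % self.tag)

    if __name__ == "__main__":
        regen = RepoRegenerator("dist-olpc2")
        regen.request_regen()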
c_scott: I'm not sure I understand the 'side effect' bit
f13: Our buildsystem uses standard components to work from
c_scott: how is olpc's final build different from red hat's?
f13: yum, mock, rpm, etc. We create yum repos that mock uses to create a fakeroot build environment. Since these are standard tools, the OLPC image creation tool makes use of yum repos too. So the yum repo that the buildsystem uses can be the /same/ repo that the image creation tool uses.
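For reference, mock chroot configs are plain Python files that populate a config_opts dictionary, and the yum configuration embedded in one points at exactly this kind of repo. The option names below match common mock usage but vary between mock versions, and the repo URL is hypothetical.

    # Roughly what a mock chroot config looks like. Mock predefines
    # config_opts when it loads the file; it is initialized here only so
    # the snippet stands alone. The key names and repo URL are
    # illustrative assumptions.
    config_opts = {}

    config_opts['root'] = 'olpc-2-i386'
    config_opts['target_arch'] = 'i386'

    # The same buildsystem-generated repo can feed both mock buildroots
    # and the OLPC image-creation tool.
    config_opts['yum.conf'] = """
    [main]
    cachedir=/var/cache/yum
    debuglevel=1

    [buildsys]
    name=buildsystem repo for the OLPC bucket
    baseurl=http://buildsys.example.org/repos/dist-olpc2/i386/
    """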
c_scott: how does red hat create its public yum repos then?
f13: OLPC's build is typically the result of installing RPMs into a fake root, stripping things out, and turning the result into a file image. Fedora has to go through a few more steps, due to shipping on multiple arches, and because our end product is not only a disk image but also installable media that has the raw RPMs on it.
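A hedged sketch of that image path, assuming yum's --installroot and a JFFS2 image for the XO's NAND flash; the group name, paths, and stripping step are illustrative placeholders, not OLPC's actual build script.

    # Hedged sketch of the OLPC image path described above: install RPMs
    # into a fake root with yum, strip what isn't needed, and pack the
    # result into a filesystem image. All names here are illustrative.

    import subprocess

    ROOT = "/var/tmp/olpc-root"

    # Install the package set into an alternate root from the
    # buildsystem repo (group name is a hypothetical example).
    subprocess.run(["yum", "-y", "--installroot", ROOT,
                    "groupinstall", "olpc-base"], check=True)

    # Strip things out (docs, locales, ...) to fit the XO's flash.
    subprocess.run(["rm", "-rf", ROOT + "/usr/share/doc"], check=True)

    # Turn the tree into a file image (JFFS2 for NAND, in this sketch).
    subprocess.run(["mkfs.jffs2", "-r", ROOT, "-o", "olpc.img"], check=True)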
c_scott: could you elaborate? what additional steps are needed? why can't you run a disk image-creator based on an RPM repository, like we do?
f13: We have to pull packages out of the build system and set them up on a filesystem in a certain layout, do some post-processing to make the tree multilib for some arches (i.e. allow i386 content in an x86_64 tree), and then run anaconda tools (or wrappers around them) to make the tree "installable". And we can run a disk image creator; we do so for the disk images we create (i.e. our Live images).
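A rough sketch of the multilib post-processing step, under the simplifying assumption that "-libs" packages plus a small whitelist are what gets pulled from the i386 tree into x86_64; the real compose tooling applies considerably more elaborate policy.

    # Hedged sketch of the multilib step f13 mentions: after laying the
    # packages out on the filesystem, pull selected i386 packages into
    # the x86_64 tree so 32-bit userland still works. The selection rule
    # and whitelist are simplifications, not the real compose logic.

    import shutil
    from pathlib import Path

    MULTILIB_WHITELIST = {"wine", "valgrind"}  # hypothetical examples

    def is_multilib(package_name):
        """Crude stand-in for the real multilib policy."""
        return package_name.endswith("-libs") or package_name in MULTILIB_WHITELIST

    def add_multilib(i386_tree, x86_64_tree):
        if not i386_tree.is_dir() or not x86_64_tree.is_dir():
            return
        for rpm in i386_tree.glob("*.i386.rpm"):
            # Strip "-version-release.arch.rpm" to recover the package name.
            name = rpm.name.rsplit("-", 2)[0]
            if is_multilib(name):
                shutil.copy(rpm, x86_64_tree / rpm.name)

    if __name__ == "__main__":
        add_multilib(Path("tree/i386/Fedora"), Path("tree/x86_64/Fedora"))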