From: Andrew Suffield
Subject: Re: [Gnu-arch-users] GCC v. Arch [address@hidden: Regressions on mainline]
Date: Wed, 23 Jun 2004 12:39:03 +0100
User-agent: Mutt/1.5.6+20040523i

On Tue, Jun 22, 2004 at 07:15:47PM -0700, Tom Lord wrote:
> 
>     > From: Andrew Suffield <address@hidden>
> 
>     > Huh, interesting timing. I've been thinking about this problem for a
>     > week or two, and started to put together some of the infrastructure it
>     > needs.
> 
>     > Certainly gcc is a good example of a project which has this problem,
>     > but I'm not convinced their approach is the best solution. A
>     > PQM-driven mainline that only allows commits which do not cause
>     > regressions is probably what they really want. But it's easy enough
>     > to handle what they currently do.
> 
> I thought about a PQM-driven Aegis-like protected mainline but I don't
> think it works out unless you do it in a _fairly_ hairy way.
> 
> GCC commits happen too fast (last I checked) to serialize them while
> inserting tests between each one.
> 
> I.e., just naively dropping a "make test" call into your PQM just
> before the "tla commit" --- probably the commit queue will grow
> without bound (until the developers notice and say, hey, this isn't
> working :-).

I'm thinking of a system along the lines of tinderbox, as used by
mozilla - there is a group of hosts which just sit there endlessly
building and running the test suite against the most recent revision
which has not yet been tested on their target platform (in practice it
would be a little more intelligent than this, and would allocate
revisions to hosts so that test runs tend to cluster on individual
revisions rather than being spread around).
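The allocation idea can be sketched roughly as follows. This is a
hypothetical illustration, not anything from the actual system - the
function name and data shapes are my own. An idle host picks the
pending revision that already has the most test results, so runs
cluster on individual revisions instead of spreading thinly across
many:

```python
def pick_revision(target, pending, results):
    """Pick the next revision for an idle host to test.

    target:  this host's platform (e.g. "x86").
    pending: revision ids, oldest first.
    results: dict mapping revision id -> set of targets already tested.
    """
    # Only consider revisions not yet tested on this host's target.
    candidates = [r for r in pending
                  if target not in results.get(r, set())]
    if not candidates:
        return None  # nothing left untested on this target
    # Prefer the revision closest to its pass threshold (most targets
    # already run), breaking ties toward the newer revision.
    return max(candidates,
               key=lambda r: (len(results.get(r, set())), pending.index(r)))
```

With this policy, hosts pile onto whichever revision is nearest to
reaching its merge threshold, rather than each host wandering off to
test a different revision.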

When some revision reaches the threshold (for gcc this is "one clean
test run on any hosted target"), it's merged. If any test fails
(unexpectedly), then (a) immediate mail notifications are sent, and
(b) a search begins, running just that test (and therefore faster),
looking for a revision, either predating or postdating the failure,
where the test passes. If it finds one, it runs the full test suite;
if that passes, it's found an acceptable merge point. (This process
runs in parallel with the endless builds of recent changesets.)
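That search can be sketched as an outward scan from the failing
revision: at each distance, try the revision that predates and the one
that postdates the failure, run only the failing test (cheap), and
confirm a passing candidate with the full suite. The function name and
callback interface here are my own assumptions, not part of the scheme
as described:

```python
def find_merge_point(failing_rev, revisions, run_failing_test, run_full_suite):
    """Scan outward from the failing revision for a merge point.

    run_failing_test(rev) runs just the one failing test (fast);
    run_full_suite(rev) runs everything.  Both return True on success.
    Returns the first revision that passes both, or None.
    """
    i = revisions.index(failing_rev)
    for offset in range(1, len(revisions)):
        # At each distance, check the revision predating the failure,
        # then the one postdating it.
        for j in (i - offset, i + offset):
            if 0 <= j < len(revisions):
                rev = revisions[j]
                if run_failing_test(rev) and run_full_suite(rev):
                    return rev  # acceptable merge point
    return None
```

Because the single failing test is much cheaper than a full run, the
scan can examine many revisions before committing to one full-suite
confirmation.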

This system has an interesting property: the queue never grows. It is
held at an average of (commit rate) * (time for 1.5 build+test runs on
the slowest required target) revisions behind everything else. That's
going to be an hour or two for gcc - which is probably acceptable. By
simply adding more hardware the system will fill in details for more
revisions. Slower targets will fill in fewer revisions, but you still
get consistent monitoring (and if you're patient enough, you get told
which changeset caused the regression).
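To make the lag formula concrete, here is a worked example with
illustrative numbers only - these are not measured GCC figures:

```python
# Steady-state lag from the paragraph above:
# (commit rate) * (time for 1.5 build+test runs on the slowest target).
def queue_lag_revisions(commits_per_hour, build_test_hours):
    return commits_per_hour * 1.5 * build_test_hours

# e.g. 4 commits/hour against a 30-minute build+test cycle on the
# slowest required target:
lag = queue_lag_revisions(4, 0.5)  # -> 3.0 revisions behind the head
```

The lag in wall-clock time is just 1.5 build+test cycles, regardless of
commit rate; the commit rate only determines how many revisions that
window contains.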

I've left out a whole host of details, but I think this scheme can
work. Here's the last trick: it's completely orthogonal. "Proof of
concept" takes the form of the system using the dogpile branch as its
solitary feed; as developers observe that it works and works better,
they can individually elect to work on a different branch and have
that branch monitored for merging.

-- 
  .''`.  ** Debian GNU/Linux ** | Andrew Suffield
 : :' :  http://www.debian.org/ |
 `. `'                          |
   `-             -><-          |
