[Qemu-devel] more automated/public CI for QEMU pullreqs


From: Peter Maydell
Subject: [Qemu-devel] more automated/public CI for QEMU pullreqs
Date: Fri, 16 Aug 2019 19:16:55 +0100

We had a conversation some months back about ways we might switch
away from the current handling of pull requests which I do via some
hand-curated scripts and personal access to machines, to a more
automated system that could be operated by a wider range of people.
Unfortunately that conversation took place off-list (largely my fault
for forgetting a cc: at the beginning of the email chain), and in any
case it sort of fizzled out.  So let's restart it, on the mailing
list this time.

Here's a summary of stuff from the old thread and general
explanation of the problem:

My current setup is mostly just running the equivalent of
"make && make check" on the merge commit, on a bunch of machines
and configs, before I push it to master. I also do a
'make check-tcg' on one of the builds and run a variant of the
'linux-user-test' tarball of 'ls' binaries.
The scripts do some simple initial checks which mostly are
preventing problems seen in the past:
 * don't allow submodules to be updated unless I kick the
   merge off with a command line option saying submodule updates
   are OK here (this catches accidental misdriving of git by
   a submaintainer folding a submodule change into a patch
   during a rebase)
 * check we aren't trying to merge after tagging the final
   release but before doing the 'reopen development tree'
   commit that bumps the VERSION file
 * check for bogus "author is address@hidden" commits
 * check for UTF-8 mangling
 * check the gpg signature on the pullreq
A human also needs to eyeball the commits and the diffstat
for weird stuff (usually a cursory look for frequent pullreq
submitters, and a more careful one for new submaintainers).
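
To give a flavour of what those checks amount to, here's a rough
sketch in Python. This is not the real tooling: the helper names,
the VERSION heuristic, the option handling and the exact metadata
checks are all invented for illustration.

#!/usr/bin/env python3
# Illustrative sketch only -- not the real merge scripts. Helper names,
# the VERSION heuristic and the option handling are invented.
import re
import subprocess
import sys

def git(*args):
    """Run git in the current tree and return its stdout as text."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

def check_signature(tag):
    # The pull request is expected to be a GPG-signed tag.
    subprocess.run(["git", "verify-tag", tag], check=True)

def check_submodules(base, head, submodules_ok):
    # Refuse submodule updates unless explicitly allowed for this merge;
    # this catches a submodule bump accidentally folded into a patch.
    tokens = git("config", "--file", ".gitmodules",
                 "--get-regexp", r"submodule\..*\.path").split()
    submodule_paths = set(tokens[1::2])
    changed = set(git("diff", "--name-only", f"{base}..{head}").splitlines())
    if changed & submodule_paths and not submodules_ok:
        sys.exit("submodule update in pullreq but --submodules-ok not given")

def check_tree_is_open():
    # Invented heuristic: an x.y.0 VERSION means the release was tagged
    # but the 'reopen development tree' commit hasn't been applied yet.
    version = open("VERSION").read().strip()
    if version.endswith(".0"):
        sys.exit(f"VERSION is {version}: reopen the development tree first")

def check_commit_metadata(base, head):
    # Catch obviously bogus author addresses and UTF-8 mangling.
    log = git("log", "--format=%an <%ae> %s", f"{base}..{head}")
    for line in log.splitlines():
        if "\ufffd" in line or not re.search(r"<[^<>@\s]+@[^<>@\s]+>", line):
            sys.exit(f"suspicious commit metadata: {line}")

if __name__ == "__main__":
    base, tag = "master", sys.argv[1]
    submodules_ok = "--submodules-ok" in sys.argv[2:]
    check_signature(tag)
    check_tree_is_open()
    check_submodules(base, tag, submodules_ok)
    check_commit_metadata(base, tag)
    print("basic checks passed; now eyeball the commits and diffstat")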

I have this semi-automated with some hacky scripts.  The major thing we
need from a replacement is coverage of different host
architectures and operating systems, which is a poor match for most of
the cloud-CI services out there (including Travis).  We also want the
tests to run in a reasonably short wall-clock time from being kicked
off.

Awkward bonus extra requirement: it would be useful to be
able to do a merge CI run "privately", eg because the thing
being tested is a fix for a security bug that's not yet
public. But that's rare so we can probably do it by hand.

There are some other parts to this, like getting some kind of
project-role-account access to machines where that's OK, or finding
replacements where the machines really are my personal ones or
otherwise not OK for project access.  But I think that should be
fairly easy to resolve so let's keep this thread to the
automating-the-CI part.

The two major contenders suggested were:

(1) GitLab CI, which supports custom 'runners' that we can set
up to run builds and tests on machines we have project access to
(2) Patchew, which can handle running tests on multiple machines (eg
we do s390 testing today for all patches on list), and which we could
enhance to provide support for the release-manager to do their work

Advantages of GitLab CI:
 * somebody else is doing the development and maintenance of the
   CI tool -- bigger 'bus factor' than Patchew
 * already does (more or less) what we want without needing
   extra coding work

Advantages of Patchew:
 * we're already using it for patch submissions, so we know it's
   not going to go away
 * it's very easy to deploy to a new host
 * no dependencies except Python, so it works anywhere we expect
   to be able to build QEMU (whereas GitLab CI's runner is
   written in Go, and there seem to be ongoing issues with actually
   getting it to compile for architectures other than x86)
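
To make option (1) a little more concrete: with GitLab CI the
release manager's side of things could be roughly "push the candidate
merge to a branch of a project that has our runners attached, then
watch the pipeline".  Here's a rough sketch of the watching half,
using GitLab's REST pipelines API; the project path, branch name and
token are placeholders, not an actual setup, and a real script would
want proper error handling and per-job reporting.

#!/usr/bin/env python3
# Placeholder sketch: the project path, branch and token are made up.
import json
import time
import urllib.parse
import urllib.request

GITLAB = "https://gitlab.com/api/v4"
PROJECT = urllib.parse.quote("qemu-project/qemu", safe="")  # hypothetical project
BRANCH = "staging"                                          # hypothetical CI branch
TOKEN = "REPLACE-WITH-ACCESS-TOKEN"

def latest_pipeline_status():
    # GET /projects/:id/pipelines?ref=<branch> lists pipelines newest-first.
    url = f"{GITLAB}/projects/{PROJECT}/pipelines?ref={BRANCH}"
    req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
    with urllib.request.urlopen(req) as resp:
        pipelines = json.load(resp)
    return pipelines[0]["status"] if pipelines else None

while True:
    status = latest_pipeline_status()
    print("pipeline status:", status)
    if status in ("success", "failed", "canceled"):
        break
    time.sleep(60)

The interesting part (which machines run which build/test jobs) would
live in the .gitlab-ci.yml and in how the custom runners are tagged,
and that's where the real setup work would be.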

I don't really have an opinion, but I think it would be good to
make a choice and start working towards getting this
a bit less hacky and a bit more offloadable to other people.

Perhaps a good first step would be to keep the 'simple checks
for broken commits' part as a local script, but have the
CI done via "push the proposed merge commit to $SOMEWHERE to
kick off the CI".
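
In other words, something along these lines, where "premerge-checks.py"
stands for the local checking script and "ci" for whichever remote
ends up hosting the CI -- placeholder names, not an existing setup:

#!/usr/bin/env python3
# Sketch of "local checks, then push somewhere to kick off CI".
import subprocess
import sys

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

tag = sys.argv[1]                                 # the signed pullreq tag
run("git", "merge", "--no-ff", "--no-edit", tag)  # create the candidate merge commit
run("python3", "premerge-checks.py", tag)         # the simple local checks
run("git", "push", "ci", "HEAD:staging")          # push to $SOMEWHERE to start the CI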

Input, opinions, recommendations, offers to do some of the work? :-)

thanks
-- PMM


