Re: [RFC] QEMU Gating CI

From: Cleber Rosa
Subject: Re: [RFC] QEMU Gating CI
Date: Sun, 2 Feb 2020 22:27:00 -0500

On Fri, Jan 17, 2020 at 02:33:54PM +0000, Peter Maydell wrote:
> On Mon, 2 Dec 2019 at 14:06, Cleber Rosa <address@hidden> wrote:
> >
> > RFC: QEMU Gating CI
> > ===================
> >
> > This RFC attempts to address most of the issues described in
> > "Requirements/GatingCI"[1].  Also relevant is the write-up "State of
> > QEMU CI as we enter 4.0"[2].
> >
> > The general approach is one to minimize the infrastructure maintenance
> > and development burden, leveraging as much as possible "other people's"
> > infrastructure and code.  GitLab's CI/CD platform is the most relevant
> > component dealt with here.
> Happy New Year! Now we're in 2020, any chance of an update on
> plans/progress here? I would very much like to be able to hand
> processing of pull requests over to somebody else after the
> 5.0 cycle, if not before. (I'm quite tempted to make that a
> hard deadline and just say that somebody else will have to
> pick it up for 5.1, regardless...)
> thanks
> -- PMM

Hi Peter,

Last time, I believe the takeaway was to be as simplistic as possible,
and to focus on the bare minimum necessary to implement the workflow
you described[1].  The lines below prefixed with ">>>" were extracted
from the Wiki and will be used to explain those points.

   >>> The set of machines I currently test on is:
   >>>  * an S390x box (this is provided to the project by IBM's Community
   >>>    Cloud so can be used for the new CI setup)
   >>>  * aarch32 (as a chroot on an aarch64 system)
   >>>  * aarch64
   >>>  * ppc64 (on the GCC compile farm)

I've built an updated gitlab-runner version for s390x, aarch64 and
ppc64[2].  I've now tested its behavior with the shell executor
(instead of docker) on aarch64 and ppc64.  I have not yet had a chance
to test this new version and executor on s390x, but I plan to do so
soon.
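
For reference, registering one of these runners with the shell
executor looks roughly like the following sketch (the URL, token,
description and tag names are hypothetical placeholders, not actual
project values):

```shell
# Register a gitlab-runner that uses the shell executor instead of docker.
# All values below are illustrative; the real URL and registration token
# come from the project's CI/CD settings page.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "aarch64 Ubuntu 18.04 runner" \
  --tag-list "aarch64,ubuntu-18.04"
```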

   >>>  * OSX
   >>>  * Windows crossbuilds
   >>>  * NetBSD, FreeBSD and OpenBSD using the tests/vm VMs

gitlab-runner clients are available for Darwin, Windows (native)
and FreeBSD.  I have *not* tested any of those, though.  I did try
a Windows crossbuild, and with the right packages installed, it
worked like a charm on a Fedora machine.
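
For the record, the crossbuild attempt was along these lines (a
sketch, assuming Fedora's mingw64- packages; the real dependency
list is longer):

```shell
# Hypothetical minimal Windows crossbuild of QEMU on a Fedora host.
# More mingw64-* packages are needed for a full-featured build.
sudo dnf install -y mingw64-gcc mingw64-glib2 mingw64-pixman
./configure --cross-prefix=x86_64-w64-mingw32-
make -j"$(nproc)"
```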

   >>>  * x86-64 Linux with a variety of different build configs (see the
   >>>    'remake-merge-builds' script for how these are set up)

This is of course the more standard setup for gitlab-runner, and the
bulk of the work that I'm posting here is related to those different
build configs.  I assumed those x86-64 machines run some version of
Ubuntu, so I used 18.04.3 LTS.  Hopefully that matches most or all of
the current environment.  Please refer to the messages on the mailing
list with $SUBJECT:

 [RFC PATCH 1/2] GitLab CI: avoid calling before_scripts on unintended jobs
 [RFC PATCH 2/2] GitLab CI: crude mapping of PMM's scripts to jobs

There are a few questions in there which I'd appreciate help with.

   >>> Testing process:
   >>>  * I get an email which is a pull request, and I run the
   >>>    "apply-pullreq" script, which takes the GIT URL and tag/branch name
   >>>    to test.
   >>>  * apply-pullreq performs the merge into a 'staging' branch
   >>>  * apply-pullreq also performs some simple local tests:
   >>>     * does git verify-tag like the GPG signature?
   >>>     * are we trying to apply the pull before reopening the dev tree
   >>>       for a new release?
   >>>     * does the pull include commits with bad UTF8 or bogus qemu-devel
   >>>       email addresses?
   >>>     * submodule updates are only allowed if the --submodule-ok option
   >>>       was specifically passed

These steps could remain unchanged at this point.  One minor remark
is that the repo hosted at gitlab.com would be used instead.  The
'staging' branch can be protected[4] so that only authorized people
can push to it (and thus trigger the pipeline and its jobs).
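
As an illustration of the kind of local sanity check involved, a
standalone (and hypothetical, much simplified) version of the
bogus-address check could look like:

```shell
# Simplified, hypothetical sketch of one apply-pullreq sanity check:
# flag commit message lines still carrying a mangled "address@hidden"
# string instead of a real email address.  The real script also checks
# GPG signatures, UTF-8 validity and submodule updates.
check_msg() {
    if printf '%s\n' "$1" | grep -q 'address@hidden'; then
        echo "bogus-address"
    else
        echo "ok"
    fi
}

check_msg "Signed-off-by: Jane Doe <jane@example.org>"   # prints "ok"
check_msg "Signed-off-by: address@hidden"                # prints "bogus-address"
```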

   >>>  * apply-pullreq then invokes parallel-buildtest to do the actual
   >>>    testing

This would be done by GitLab instead.  The dispatching of jobs is
based on the tags given to jobs and machines.  IMO at least the OS
version and architecture should be given as tags, and each machine
needs proper setup to run a job, such as having the right packages
installed.  This can start with proper documentation for every type
of OS and version (and possibly job type), and evolve into scripts
or other types of automation.
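
For illustration, a job in .gitlab-ci.yml could be pinned to such a
machine through tags like this (job name and tag values below are
hypothetical):

```yaml
build-system-aarch64:
  tags:
    - aarch64
    - ubuntu-18.04
  script:
    - mkdir -p build && cd build
    - ../configure
    - make -j"$(nproc)"
```

A runner only picks up this job if it was registered with all of the
listed tags.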

These package requirements are usually identical or very similar to
what is defined in "tests/docker/dockerfiles", but they need to be
handled at the machine level because of the "shell" executor.
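
In practice that means applying something equivalent to the
dockerfile package lists directly on the host, e.g. (a minimal,
hypothetical subset for Ubuntu 18.04):

```shell
# Hypothetical machine-level equivalent of tests/docker/dockerfiles
# for an Ubuntu 18.04 shell-executor runner; real jobs need more packages.
sudo apt-get update
sudo apt-get install -y git gcc make pkg-config python3 flex bison \
    libglib2.0-dev libpixman-1-dev zlib1g-dev
```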

   >>>  * parallel-buildtest is a trivial wrapper around GNU Parallel which
   >>>    invokes 'mergebuild' on each of the test machines
   >>>  * if all is OK then the user gets to do the 'git push' to push the
   >>>    staging branch to master

The central place to check for success or failure would be the
pipeline page.  Also, there's a configurable notification system that
should (I've not tested it thoroughly) send failed and/or successful
pipeline results to the pipeline author.  IIUC, this means whoever
pushed to the 'staging' branch and caused the pipeline to be created
would get the notification.
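
If per-project settings turn out to be needed, the notification level
can also be set through the API (a sketch; PROJECT_ID and the token
are placeholders):

```shell
# Hypothetical example of enabling custom pipeline notifications for a
# project via the GitLab API; all values below are placeholders.
curl --request PUT \
     --header "PRIVATE-TOKEN: YOUR_API_TOKEN" \
     "https://gitlab.com/api/v4/projects/PROJECT_ID/notification_settings?level=custom&failed_pipeline=true&success_pipeline=true"
```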

Let me know if this makes sense to you, and if so, we can arrange a
real world PoC.  FYI, I've run hundreds of jobs on an internal GitLab
instance, and GitLab itself (server and runner) seems very solid.

- Cleber.


[1] - https://wiki.qemu.org/Requirements/GatingCI
[2] - https://cleber.fedorapeople.org/gitlab-runner/v12.7.0/
[4] - https://docs.gitlab.com/ee/user/project/protected_branches.html
