Re: [Qemu-devel] KVM call minutes for Apr 5

From: Lucas Meneghel Rodrigues
Subject: Re: [Qemu-devel] KVM call minutes for Apr 5
Date: Fri, 08 Apr 2011 09:58:22 -0300

On Thu, 2011-04-07 at 11:03 +0100, Stefan Hajnoczi wrote:
> On Tue, Apr 5, 2011 at 6:37 PM, Lucas Meneghel Rodrigues <address@hidden> 
> wrote:
> Thanks for your detailed response!
> > On Tue, 2011-04-05 at 16:29 +0100, Stefan Hajnoczi wrote:
> >> * Public notifications of breakage, qemu.git/master failures to
> >> qemu-devel mailing list.
> >
> > ^ The challenge is mainly to get enough data to distinguish a new
> > breakage from a known issue. It's more a matter of having historical
> > data from test results than anything else, IMO.
> I agree.  Does kvm-autotest currently archive test results?

It does. Our test layouts are currently evolving, and we hope to reach
a sane, well-defined format. We are also thinking about how to examine
historical data and identify regressions.

> >> * A one-time contributor can get their code tested.  No requirement to
> >> set up a server because contributors may not have the resources.
> >
> > Coming back to the point that many colleagues made: We need a sort of
> > 'make test' on the qemu trees that would fetch autotest and could setup
> > basic tests that people could run, maybe suggest test sets...
> >
> > The problem I see is, getting guests up and running using configs that
> > actually matter is not trivial (there are things such as ensuring that
> > all auxiliary utilities are installed in a distro agnostic fashion,
> > having bridges and DHCP server setup on possibly a disconnected work
> > laptop, and stuff).
> >
> > So, having a 'no brains involved at all' setup is quite a challenge,
> > suggestions welcome. Also, downloading isos, waiting for guests to
> > install and run thorough tests won't be fast. So J. Random Developer
> > might not bother to run tests even if we can provide a foolproof,
> > perfectly automated setup, because it would take a long time at first
> > to get the tests run. This is also a challenge.
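For reference, the bridge + DHCP piece can be done in a distro-agnostic
way with iproute2 and dnsmasq. A minimal sketch, with a made-up bridge
name and subnet; it only prints the commands, since applying them needs
root:

```shell
#!/bin/sh
# Sketch of a distro-agnostic bridge + DHCP setup using iproute2 and
# dnsmasq. The bridge name and subnet are made-up examples. The function
# only prints the commands; they need root to actually run.

BRIDGE=${BRIDGE:-kvmtestbr0}
SUBNET=${SUBNET:-192.168.150}

setup_bridge() {
    echo "ip link add $BRIDGE type bridge"
    echo "ip addr add $SUBNET.1/24 dev $BRIDGE"
    echo "ip link set $BRIDGE up"
    # dnsmasq hands out guest addresses even on a disconnected laptop.
    echo "dnsmasq --interface=$BRIDGE --bind-interfaces --dhcp-range=$SUBNET.2,$SUBNET.254"
}

setup_bridge    # pipe the output to 'sh' as root to apply it
```

brctl from bridge-utils would work equally well in place of the ip link
commands.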
> I'm actually starting to think that there is no one-size-fits-all solution.
> Developers need "make check"-type unit tests for various QEMU
> subsystems.  kvm-autotest could also run these unit tests as part of
> its execution.
> Then there are end-to-end acceptance tests.  They simply require
> storage, network, and time resources and there's no way around that.
> These tests are more suited to centralized testing infrastructure that
> periodically tests qemu.git.
> On the community call I was trying to see if there is a "lightweight"
> version of kvm-autotest that could be merged into qemu.git.  But now I
> think that this isn't realistic and it would be better to grow unit
> tests in qemu.git while covering it with kvm-autotest for acceptance
> testing.

The "make check" target could check out autotest in the background and
execute a very simple set of tests with pre-made small Linux guests,
much as the jenkins + buildbot setup does. If we can figure out a sane,
automated bridge + dnsmasq setup, then we can provide both the
unittests and simple, restricted guest tests. I need to think about
this more.
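A rough sketch of what such a helper could look like; the autotest
repository URL and control-file path are assumptions about the tree
layout, and by default it only prints what it would do:

```shell
#!/bin/sh
# Hypothetical "make check" helper: fetch autotest and run a stripped-down
# kvm boot test against a small pre-made guest image.
# DRY_RUN=1 (the default here) just prints the steps.
set -e

AUTOTEST_DIR=${AUTOTEST_DIR:-./autotest}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

check_kvm() {
    # 1. Fetch autotest if it is not already checked out.
    if [ ! -d "$AUTOTEST_DIR" ]; then
        run git clone git://github.com/autotest/autotest.git "$AUTOTEST_DIR"
    fi
    # 2. Run the client with a minimal control file (boot test only).
    run "$AUTOTEST_DIR/client/bin/autotest" \
        "$AUTOTEST_DIR/client/tests/kvm/control"
}

check_kvm
```

With DRY_RUN=0 it would actually clone and run, so a real "make check"
hook would want a pinned revision and a cache for the guest images.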

> >> Perhaps kvm-autotest is a good platform for the automated testing of
> >> ARM TCG.  Paul is CCed, I recently saw the Jenkins qemu build and boot
> >> tests he has set up.  Lucas, do you have ideas on how these efforts
> >> can work together to bring testing to upstream QEMU?
> >> http://validation.linaro.org/jenkins/job/qemu-boot-images/
> >
> > I heard about jenkins before and it is indeed a nice project. What
> > they do here, from what I could assess browsing the webpage you
> > provided, is:
> >
> > 1) Build qemu.git every time there are commits
> > 2) Boot pre-made 'pristine' images, one is a lenny arm image and the
> > other is a linaro arm image.
> >
> > It is possible to do the same with KVM autotest, just a matter of not
> > performing guest install tests and executing only the boot tests with
> > pre-made images. What jenkins does here is an even quicker and shorter
> > version of our sanity jobs.
> >
> > About how we can work together, I thought about some possibilities:
> >
> > 1) Modify the jenkins test step to execute a kvm autotest job after
> > the build, with the stripped down test set. We might gain some extra
> > debug info that the current test step does not seem to provide
> > 2) Do the normal test step and if that succeeds, trigger a kvm autotest
> > job that does more comprehensive testing, such as migration, time drift,
> > block layer, etc
> >
> > The funny thing is that KVM autotest has infrastructure to do the same
> > as jenkins does, but jenkins is highly streamlined for the buildbot use
> > case (continuous build and integration), and I see that as a very nice
> > advantage. So I'd rather keep using jenkins and have kvm autotest
> > plugged into it conveniently.
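To make option 2 above concrete, a jenkins "Execute shell" build step
could be shaped like the sketch below; the stage names and the
migrate/timedrift/block list are illustrative, each stage standing in
for a real build or autotest invocation:

```shell
#!/bin/sh
# Sketch of a two-tier jenkins build step: quick boot sanity first,
# comprehensive kvm-autotest job only if that passes.
# Stage names and the test list are illustrative, not real autotest names.
set -e

stage() { echo "stage: $*"; }

run_pipeline() {
    # 1. Continuous build of qemu.git, as the existing jenkins job does.
    stage "build qemu.git"
    # 2. Quick sanity: boot the pre-made pristine images.
    stage "kvm-autotest boot"
    # 3. With real commands, set -e would abort before this point on
    #    failure, so the comprehensive set runs only after a good boot:
    #    migration, time drift, block layer, etc.
    for t in migrate timedrift block; do
        stage "kvm-autotest $t"
    done
}

run_pipeline
```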
> That sounds good.  I think the benefit of working together is that
> different entities (Linaro, Red Hat, etc) can contribute QEMU tests
> into a single place.  That testing can then cover both upstream and
> downstream to prevent breakage.
> So kvm-autotest can run in single job mode and be kicked off from
> jenkins or buildbot?
> It sounds like kvm-autotest has or needs its own cron, result
> archiving, etc infrastructure.  Does it make sense to use a harness
> like jenkins or buildbot instead and focus kvm-autotest purely as a
> testing framework?

Given that there are already jenkins/buildbot servers running for qemu,
shipping only the KVM testing part (the autotest client + the kvm test)
is a possibility, making it easier to plug into what is already
deployed.

However, it is not possible to focus KVM autotest purely as a 'test
framework'. What we call KVM autotest is actually a client test of
autotest. Autotest is a large, generic collection of programs and
libraries for performing automated testing on the Linux platform; it
was developed to test the Linux kernel itself, and it is used to do
precisely that. Look at test.kernel.org: all those tests are executed
by autotest.

So autotest is much more than KVM testing, and as one of the autotest
maintainers I am committed to working on all parts of that stack.
Several testing projects unrelated to KVM use our code, our harnessing
and infrastructure are already pretty good, and we will keep developing
them.

The whole thing was designed in a modular way, so it is doable to use
parts of it (such as the autotest client and the KVM test) and
integrate them with tools such as jenkins and buildbot; if people need
and want to do that, awesome. But we are going to continue developing
autotest as a whole (framework, automation utilities, API) while
developing the KVM test.
