
Re: "make check-acceptance" takes way too long


From: Cleber Rosa
Subject: Re: "make check-acceptance" takes way too long
Date: Tue, 1 Feb 2022 14:04:09 -0500

On Tue, Feb 1, 2022 at 1:06 PM Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Cleber Rosa <crosa@redhat.com> writes:
>
> > On Tue, Feb 1, 2022 at 11:20 AM Daniel P. Berrangé <berrange@redhat.com> 
> > wrote:
> >>
> >> On Tue, Feb 01, 2022 at 11:01:43AM -0500, Cleber Rosa wrote:
> >> > On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> 
> >> > wrote:
> >> > >
> >> > > We have up to now tried really hard as a project to avoid building and
> >> > > hosting our own binaries to avoid theoretical* GPL compliance issues.
> >> > > This is why we've ended up relying so much on distros to build and host
> >> > > binaries we can use. Most QEMU developers have their own personal zoo 
> >> > > of
> >> > > kernels and userspaces which they use for testing. I use custom kernels
> >> > > with a buildroot user space in initramfs for example. We even use the
> >> > > qemu advent calendar for a number of our avocado tests but we basically
> >> > > push responsibility for GPL compliance to the individual developers in
> >> > > that case.
> >> > >
> >> > > *theoretical in so far as I suspect most people would be happy with a
> >> > > reference to an upstream repo/commit and .config, even if that is not
> >> > > to the letter of the "offer of source code" required for true
> >> > > compliance.
> >> > >
> >> >
> >> > Yes, it'd be fine (great, really!) if a lightweight distro (or
> >> > kernels/initrd) were to be maintained and identified as an "official"
> >> > QEMU pick.  Putting the binaries in the source tree, though, brings
> >> > all sorts of compliance issues.
> >>
> >> All that's really needed is to have the source + build recipes
> >> in a separate git repo. A pipeline can build them periodically
> >> and publish artifacts, which QEMU can then consume in its pipeline.
> >>
> >
> > I get your point, but then to acquire the artifacts one needs to:
> >
> > 1. depend on the CI system to deploy the artifacts in subsequent job
> > stages (a limitation IMO), OR
> > 2. if outside the CI, implement a download/cache mechanism for those
> > artifacts, which gets us back to the previous point, only with a
> > different distro/kernel+initrd.
> >
> > With that, the value proposal has to be in the characteristics of
> > distro/kernel+initrd itself. It has to have enough differentiation to
> > justify the development/maintenance work, as opposed to using existing
> > ones.
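
[Illustrative aside: a minimal sketch of the download/cache mechanism
from point 2 above.  The URL, cache directory, and helper name are made
up for illustration; Avocado's fetch_asset() already provides similar
caching for real tests.]

import hashlib
import os
import urllib.request

# Hypothetical cache location; real tests would use Avocado's cache dirs.
CACHE_DIR = os.path.expanduser("~/.cache/qemu-test-artifacts")

def fetch_artifact(url):
    """Download `url` once, then reuse the cached copy on later runs."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = hashlib.sha256(url.encode()).hexdigest()[:16]
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):           # cache miss: download once
        urllib.request.urlretrieve(url, path)
    return path                            # cache hit: no network needed

# e.g.: kernel = fetch_artifact("https://example.org/ci/vmlinuz-aarch64")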
> >
> > FWIW, my non-scientific tests booting on my 3+ YO machine:
> >
> > * CirrOS x86_64+KVM: ~2 seconds
> > * CirrOS aarch64+TCG: ~20 seconds
> > * Fedora kernel+initrd aarch64+TCG
> > (tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt):
> > ~1 second
> >
> > I would imagine that CirrOS aarch64+KVM on an adequate system would be
> > similar to the CirrOS x86_64+KVM.  We can develop/maintain a slimmer
> > distro, and/or set the default test workloads where they perform the
> > best.  The development cost of the latter is quite small.  I've added
> > a missing bit to the filtering capabilities in Avocado[1] and will
> > send a proposal to QEMU along these lines.
>
> FWIW the bit I'm interested in for the slow test in question here is
> that it does a full boot through the EDK2 bios (EL3->EL2->EL1). I'm not
> overly concerned about what gets run in userspace as long as something
> is run that shows EL0 can be executed and handle task switching. I
> suspect most of the userspace startup of a full distro basically just
> ends up testing the same code paths over and over again.
>

That's an interesting point.

Does that mean that, if you are able to determine a condition showing
the boot has progressed far enough, you would consider the test a
success?  I mean, that's what the "boot_linux_console.py" tests do:
they look for a known pattern on the console, and do not care about
what happens next.
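
To make that concrete, here is a rough sketch of the idea (this is not
the actual boot_linux_console.py code; the QEMU command line,
kernel/initrd paths, and pattern below are placeholders):

import re
import subprocess

def boot_until_pattern(cmd, pattern):
    """Start QEMU and return True once `pattern` shows up on the serial
    console; whatever the guest does after that is ignored.  (A real
    test would also enforce a timeout.)"""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        for line in proc.stdout:
            if re.search(pattern, line):
                return True        # boot progressed far enough: success
        return False               # console closed with no match: failure
    finally:
        proc.kill()

# Placeholder invocation; the kernel/initrd paths are made up.
ok = boot_until_pattern(
    ["qemu-system-aarch64", "-M", "virt", "-cpu", "max", "-nographic",
     "-kernel", "vmlinuz", "-initrd", "initrd.img",
     "-append", "console=ttyAMA0"],
    pattern=r"Kernel command line:")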

The same could be done with the "full blown distro boot" tests
(boot_linux.py).  They could be made configurable to treat anything as
a "successful boot", not just a "login prompt" or a "fully initialized
and cloud-init configured system".  We can reuse most of the same
code, and add configurable conditions for different test cases.
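
Roughly, reusing the boot_until_pattern() helper from the sketch above,
the configurable condition could be as simple as this (the condition
names and patterns are illustrative, not taken from boot_linux.py):

# Map each notion of "successful boot" to a console pattern.
SUCCESS_PATTERNS = {
    "kernel-booted": r"Kernel command line:",
    "login-prompt": r"login:",
    "cloud-init-done": r"Cloud-init .* finished",
}

def boot_linux_test(cmd, condition="login-prompt"):
    """Same boot code for every test; only the success condition varies."""
    return boot_until_pattern(cmd, SUCCESS_PATTERNS[condition])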

Does that make sense?

- Cleber.



