qemu-devel

Re: Proposal for a regular upstream performance testing


From: Stefan Hajnoczi
Subject: Re: Proposal for a regular upstream performance testing
Date: Mon, 30 Nov 2020 13:23:00 +0000

On Thu, Nov 26, 2020 at 09:43:38AM +0000, Daniel P. Berrangé wrote:
> On Thu, Nov 26, 2020 at 09:10:14AM +0100, Lukáš Doktor wrote:
> > Ideally the community should also have a way to submit custom builds
> > in order to verify their patches, so they can debug and address issues
> > earlier than just testing commits to qemu-master.
> 
> Allowing community builds certainly adds an extra dimension of complexity
> to the problem: you need some kind of permissions control, since you can't
> let any arbitrary user on the web trigger jobs with arbitrary code - that
> would be a significant security risk to your infra.

syzkaller and other upstream CI/fuzzing systems do this, so it may be
hard but not impossible.

> I think I'd just suggest providing a mechanism for the user to easily spin
> up performance test jobs on their own hardware. This could be as simple
> as providing a docker container recipe that users can deploy on some
> arbitrary machine of their choosing that contains the test rig. All they
> should need to do is provide a git ref, and then launching the container and
> running jobs should be a single command. They can simply run the tests
> twice, with and without the patch series in question.
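The "run twice and compare" step Daniel describes could be reduced to a small comparison helper once both runs have produced numbers. The sketch below is purely illustrative - the function name, metric names, and 5% tolerance are assumptions, not part of any existing QEMU test rig:

```python
def compare_runs(baseline, patched, tolerance=0.05):
    """Compare two benchmark runs (metric name -> throughput, higher is
    better) and report metrics that regressed by more than `tolerance`."""
    regressions = {}
    for metric, base in baseline.items():
        new = patched.get(metric)
        if new is None:
            continue  # metric missing from the patched run; skip it
        change = (new - base) / base  # relative change, negative = slower
        if change < -tolerance:
            regressions[metric] = change
    return regressions

# Hypothetical fio results: read throughput dropped ~10%, write unchanged
baseline = {"fio-randread-MBps": 500.0, "fio-randwrite-MBps": 300.0}
patched = {"fio-randread-MBps": 450.0, "fio-randwrite-MBps": 301.0}
print(compare_runs(baseline, patched))  # flags only fio-randread-MBps
```

Keeping the comparison out of the container itself means the same logic can later be reused by a central CI service if one materialises.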

As soon as developers need to recreate an environment it becomes
time-consuming and there is a risk that the issue won't be reproduced.
That doesn't mean the system is useless - big regressions will still be
tackled - but I think it's too much friction and we should aim to run
community builds.

> > The problem with those is that we cannot simply use travis/gitlab/...
> > machines for running those tests, because we are measuring actual
> > in-guest performance.
> 
> As mentioned above - distinguish between the CI framework, and the
> actual test runner.

Does the CI framework or the test runner handle detecting regressions
and providing historical data? I ask because I'm not sure if GitLab CI
provides any of this functionality or whether we'd need to write a
custom CI tool to track and report regressions.
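Whichever layer ends up owning it, the regression check itself could be a thin script over stored results. A minimal sketch, assuming per-metric history is kept as a list of past samples (GitLab CI stores artifacts but provides no such comparison itself; everything below, including the 3-sigma threshold, is an assumption):

```python
import statistics

def is_regression(history, current, sigmas=3.0):
    """Flag `current` as a regression if it falls more than `sigmas`
    standard deviations below the historical mean (higher is better)."""
    if len(history) < 2:
        return False  # not enough data for a meaningful baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current < mean  # no observed noise; any drop is suspect
    return current < mean - sigmas * stdev

history = [100.2, 99.8, 100.5, 100.1, 99.9]  # past throughput samples
print(is_regression(history, 95.0))   # True: clearly below the noise band
print(is_regression(history, 100.0))  # False: within normal variation
```

A noise-aware threshold like this matters for performance CI: a fixed percentage cutoff either drowns in false positives on noisy benchmarks or misses small, real regressions on stable ones.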

Stefan


