Re: no more pullreq processing til February


From: Daniel P. Berrangé
Subject: Re: no more pullreq processing til February
Date: Thu, 26 Jan 2023 14:18:23 +0000
User-agent: Mutt/2.2.9 (2022-11-12)

On Thu, Jan 26, 2023 at 01:52:39PM +0000, Eldon Stegall wrote:
> On Thu, Jan 26, 2023 at 01:22:32PM +0000, Peter Maydell wrote:
> > Hi; we've run out of gitlab CI pipeline minutes for this month.
> > This leaves us the choice of:
> >  (a) don't process any more pullreqs til we get more minutes in Feb
> >  (b) merge pullreqs blindly without CI testing
> >  (c) buy more minutes
> > 
> > For the moment I propose to take option (a). My mail filter will
> > continue to track pullreqs that get sent to the list, but I won't
> > do anything with them.
> > 
> > If anybody has a better suggestion feel free :-)
> 
> Would it be possible if (d) were to run self-hosted instances of the
> runner? I am not sure how gitlab pricing works, but I believe on github
> self-hosted runners are free.
> 
> I have several baremetal machines colocated that I could dedicate to
> execute these runs, dual processor xeons with a couple hundred gigs of
> RAM. I would need approx 48 hours notice to initially provision the
> machines. I would be happy to provide root credentials and work out IPMI
> access if that becomes necessary.

We do currently have some private runners registered against the
/qemu-project namespace, so we can do some non-x86 native testing.

The challenge is the integration and configuration. The GitLab CI
YAML config needs to be written such that specific jobs get
targeted at the right private runners, instead of the shared
runners, by including the 'tags' element in the job config, along
with some 'rules' logic.
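
For instance, roughly something like this (the job name is just
illustrative; the tag has to match one the private runner was
registered with):

some-test-job:
  extends: .avocado_test_job_template
  tags:
    - qemu-private-runner-x86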

Any job we switch over to private runners, though, becomes
inaccessible to contributors who are running pipelines in their
forks, because the tags we add won't match the public shared
runners. So we'd be putting a burden on our contributors to run
private runners too, which is not desirable.

The alternative is to duplicate all our jobs, once for private
runners and once for shared runners. That is a bit repetitive, but
with job inheritance it isn't a 100% copy+paste job, just perhaps
20-30% tedious boilerplate.

eg


avocado-system-debian:
  extends: .avocado_test_job_template
  needs:
    - job: build-system-debian
      artifacts: true
  variables:
    IMAGE: debian-amd64
    MAKE_CHECK_ARGS: check-avocado

would have to be replaced with


.avocado-system-debian_base:
  extends: .avocado_test_job_template
  needs:
    - job: build-system-debian
      artifacts: true
  variables:
    IMAGE: debian-amd64
    MAKE_CHECK_ARGS: check-avocado

avocado-system-debian-shared:
  extends: .avocado-system-debian_base
  rules:
    - if: '$CI_PROJECT_NAMESPACE == "qemu-project"'
      when: never
    - if: '$CI_PROJECT_NAMESPACE != "qemu-project"'
      when: on_success

avocado-system-debian-private:
  extends: .avocado-system-debian_base
  tags:
    - qemu-private-runner-x86
  rules:
    - if: '$CI_PROJECT_NAMESPACE == "qemu-project"'
      when: on_success
    - if: '$CI_PROJECT_NAMESPACE != "qemu-project"'
      when: never


There are many variations; that's just a crude example off the top
of my head. This example wouldn't work if the base template includes
'rules', as the parent's rules don't get merged when the child
defines its own. So we would actually need to play some further
games to get this to work in most cases.
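
One possible trick, just a rough sketch (it assumes a GitLab version
with the '!reference' tag, reuses the template names from the
example above, and the first-match semantics of the combined rules
list would still need checking), is to pull the parent's rules in
explicitly instead of relying on merging:

avocado-system-debian-private:
  extends: .avocado-system-debian_base
  tags:
    - qemu-private-runner-x86
  rules:
    # re-use whatever rules the base template defines ...
    - !reference [.avocado_test_job_template, rules]
    # ... and then add the namespace restriction on top
    - if: '$CI_PROJECT_NAMESPACE == "qemu-project"'
      when: on_success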

Anyway, private runners are potentially useful, especially if this
becomes a long-term problem for QEMU. They just aren't a solution we
can deploy in a matter of days, as we need a fair amount of YAML
config work first AFAICT.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



