Pádraig Brady wrote:
>> @Padraig: you're at least doing some pre-release tests on several platforms,
>> but that's no real CI, is it?
> Right.
> No real CI at present.
> It would be good though, directly for coreutils
> and indirectly for gnulib.
We have a CI for gnulib at GitLab [1], but it detects only failures that are
covered by the unit tests, and, as you know, some of the complicated uses
of gnulib are covered only by coreutils tests, not by gnulib tests.
We also have CI for several other GNU packages at GitLab [2][3][4][5][6][7][8],
and those pipelines occasionally detect regressions. But with only 400 free CI
minutes per month [9], you couldn't run many "expensive" coreutils test suites
on that platform, even when using the coreutils mirror that someone has
already set up [10].
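For context, such a "distcheck" pipeline essentially boils down to a handful of
shell steps. This is only a sketch under assumptions: the clone URL and the
bootstrap command vary per package, and the real .gitlab-ci.yml files of the
projects above may do more.

```shell
# Illustrative CI job for a GNU package; NOT the actual pipeline
# configuration of any of the projects linked above.
git clone --depth 1 https://git.savannah.gnu.org/git/sed.git
cd sed
./bootstrap        # fetch gnulib modules, regenerate the build files
./configure
make distcheck     # build, run the test suite, verify the release tarball
```

A full run of these steps is what consumes the limited free CI minutes.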
> I think I'll have a look at setting up
> some automated testing on the GCC compile farm.
Reading [11], it looks like this would be feasible if you pick a machine that
has plenty of CPU power and use 'ulimit' "to reduce the risk of a DOS attack".
I agree that it would be good.
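For the record, 'ulimit' limits apply to the shell that sets them and to its
child processes, so a wrapper script can bound a test run without affecting
anything else on the shared machine. A minimal sketch; the 60-second value and
the "make check" step are illustrative, not taken from [11]:

```shell
#!/bin/sh
# Run a resource-bounded step in a subshell; the parent shell's
# limits stay untouched.  The values here are illustrative only.
(
  ulimit -t 60    # cap CPU time at 60 seconds per process
  ulimit -t       # show the limit now in effect in this subshell: 60
  # ... here one would run e.g. "make check" under these limits ...
)
ulimit -t         # parent shell: prints its original limit
                  # (typically "unlimited")
```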
Bruno
[1] https://gitlab.com/gnulib/gnulib-ci/-/pipelines
[2] https://gitlab.com/gnu-diffutils/ci-distcheck/-/pipelines
[3] https://gitlab.com/gnu-gettext/ci-distcheck/-/pipelines
[4] https://gitlab.com/gnu-grep/ci-distcheck/-/pipelines
[5] https://gitlab.com/gnu-gzip/ci-distcheck/-/pipelines
[6] https://gitlab.com/gnu-libunistring/ci-distcheck/-/pipelines
[7] https://gitlab.com/gnu-m4/ci-distcheck/-/pipelines
[8] https://gitlab.com/gnu-sed/ci-distcheck/-/pipelines
[9] https://about.gitlab.com/pricing/
[10] https://gitlab.com/saiwp/coreutils
[11] https://gcc.gnu.org/wiki/CompileFarm