qemu-devel

Re: [Qemu-devel] [PATCH 0/4] mips: Add more Avocado tests


From: Philippe Mathieu-Daudé
Subject: Re: [Qemu-devel] [PATCH 0/4] mips: Add more Avocado tests
Date: Thu, 23 May 2019 11:38:34 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 5/23/19 1:07 AM, Eduardo Habkost wrote:
> On Wed, May 22, 2019 at 05:46:06PM -0400, Cleber Rosa wrote:
>> ----- Original Message -----
>>> From: "Eduardo Habkost" <address@hidden>
>>> On Tue, May 21, 2019 at 01:19:06AM +0200, Philippe Mathieu-Daudé wrote:
>>>> Hi,
>>>>
>>>> It was a rainy week-end here, so I spent it automating some
>>>> of my MIPS tests.
>>>>
>>>> The BootLinuxSshTest is not global-warming friendly; it is not
>>>> meant to run on a CI system, but rather on a workstation before
>>>> posting a pull request.
>>>> It can surely be improved, but it is a good starting point.
>>>
>>> Until we actually have a mechanism to exclude the test case on
>>> travis-ci, I will remove patch 4/4 from the queue.  Aleksandar,
>>> please don't merge patch 4/4 yet or it will break travis-ci.
>>>
>>> Cleber, Wainer, is it already possible to make "avocado run" skip
>>> tests tagged with "slow"?
>>>
>>
>> The mechanism exists, but we haven't tagged any test so far as slow.
>>
>> Should we define/document criteria for a test to be considered slow?
>> Given that this is highly subjective, we have to think of:
>>
>>  * Will we consider the average or maximum run time (the timeout
>>    definition)?
>>  
>>  * For a single test, what is "slow"? Some rough numbers from Travis
>>    CI[1] to help us with guidelines:
>>    - boot_linux_console.py:BootLinuxConsole.test_x86_64_pc:  PASS (6.04 s)
>>    - boot_linux_console.py:BootLinuxConsole.test_arm_virt:  PASS (2.91 s)
>>    - 
>> linux_initrd.py:LinuxInitrd.test_with_2gib_file_should_work_with_linux_v4_16:
>>   PASS (18.14 s)
>>    - boot_linux.py:BootLinuxAarch64.test_virt:  PASS (396.88 s)
> 
> I don't think we need to overthink this.  Whatever objective
> criteria we choose, I'm sure we'll have to adapt them later due
> to real world problems.
> 
> e.g.: is 396 seconds too slow?  I don't know, it depends: does it
> break Travis and other CI systems often because of timeouts?  If
> yes, then we should probably tag it as slow.
> 
> If having subjective criteria is really a problem (I don't think
> it is), then we can call the tag "skip_travis", and stop worrying
> about defining what exactly is "slow".

I'd go with a simpler "tags:travis-ci", whitelisting any job expected to
run smoothly there.

Then we can add "slow" tests without having to worry about blacklisting
them for Travis CI.
Other CI systems can also set different timeouts.

I'd like maintainers to upstream as many tests as they want, so these
tests can eventually be run by anyone; each maintainer is then free to
select which particular set to run by default.
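For reference, Avocado picks up tags from ":avocado: tags=..." directives
in test docstrings, and "avocado run --filter-by-tags" selects on them.
Below is a minimal, self-contained sketch of that idea (not Avocado's
real parser; the "slow" tag and the BootLinuxSshTest stub are just the
example under discussion in this thread):

```python
import re

def parse_avocado_tags(docstring):
    """Collect tags from ':avocado: tags=a,b' directives in a docstring."""
    tags = set()
    for m in re.finditer(r":avocado:\s+tags=([\w:,-]+)", docstring or ""):
        tags.update(m.group(1).split(","))
    return tags

class BootLinuxSshTest:
    """
    Boots a full Linux guest and logs in over SSH.

    :avocado: tags=arch:mips,slow
    """

# A CI job could then skip anything tagged 'slow', e.g. roughly what
# "avocado run --filter-by-tags='-slow'" would do on the command line:
tags = parse_avocado_tags(BootLinuxSshTest.__doc__)
print("slow" in tags)  # prints: True
```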

>>  * Do we want to set a maximum job timeout?  This way we can skip
>>    tests after a given amount of time has passed.  Currently we interrupt
>>    the test running when the job timeout is reached, but it's possible
>>    to add an option so that no new tests will be started, but currently
>>    running ones will be waited on.
> 
> I'm not sure I understand the suggestion to skip tests.  If we
> skip tests after a timeout, how would we differentiate a test
> being expectedly slow from a QEMU hang?
> 


