From: Cleber Rosa
Subject: Re: [Qemu-devel] [PATCH 1/1] Acceptance tests: add test for set-numa-node error handler fix
Date: Wed, 21 Nov 2018 11:17:33 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.3.0


On 11/21/18 10:50 AM, Markus Armbruster wrote:
> Cleber Rosa <address@hidden> writes:
> 
>> Commit a22528b918 fixed an issue that is exposed by means of the
>> "set-numa-node" QMP command (introduced in f3be67812).  This adds a
>> test that pretty much maps the steps documented on the fix.
>>
>> Additionally, given that 'set-numa-node' is only allowed in
>> 'preconfig' state, a specific check for that was added as a
>> separate test.
>>
>> Tests: a22528b918c7d29795129b5a64c4cb44bb57a44d
>> Reference: f3be67812c226162f86ce92634bd913714445420
>> CC: Igor Mammedov <address@hidden>
>> CC: Markus Armbruster <address@hidden>
>> Signed-off-by: Cleber Rosa <address@hidden>
>> ---
>>  tests/acceptance/set_numa_node.py | 41 +++++++++++++++++++++++++++++++
>>  1 file changed, 41 insertions(+)
>>  create mode 100644 tests/acceptance/set_numa_node.py
>>
>> diff --git a/tests/acceptance/set_numa_node.py b/tests/acceptance/set_numa_node.py
>> new file mode 100644
>> index 0000000000..0c55315231
>> --- /dev/null
>> +++ b/tests/acceptance/set_numa_node.py
>> @@ -0,0 +1,41 @@
>> +# Tests for QMP set-numa-node related behavior and regressions
>> +#
>> +# Copyright (c) 2018 Red Hat, Inc.
>> +#
>> +# Author:
>> +#  Cleber Rosa <address@hidden>
>> +#
>> +# This work is licensed under the terms of the GNU GPL, version 2 or
>> +# later.  See the COPYING file in the top-level directory.
>> +
>> +from avocado_qemu import Test
>> +
>> +
>> +class SetNumaNode(Test):
>> +    """
>> +    :avocado: enable
>> +    :avocado: tags=quick,numa
>> +    """
>> +    def test_numa_not_supported(self):
>> +        self.vm.add_args('-nodefaults', '-S', '-preconfig')
>> +        self.vm.set_machine('none')
>> +        self.vm.launch()
>> +        res = self.vm.qmp('set-numa-node', type='node')
>> +        self.assertIsNotNone(res, 'Unexpected empty QMP response to "set-numa-node"')
>> +        self.assertEqual(res['error']['class'], 'GenericError')
>> +        self.assertEqual(res['error']['desc'],
>> +                         'NUMA is not supported by this machine-type')
> 
> Checking the QMP command fails a certain way takes you four lines[*].
> Such checks are pretty common in tests using QMP.  Consider creating a
> convenience method.
> 

Sure, that discussion is going on.  Part of it can be seen here:

https://trello.com/c/a4wBNsGX/63-better-test-failure-output#comment-5bf448b695549b5cfa92b638

In the near future, we'd like to have a number of convenience methods
that would reduce both test writing time and test debugging time, and
produce failure messages that are as relevant as possible.  Something
like:

   self.assertQMP(command, 'error/desc', 'NUMA is not...')

A check for general QMP error conditions is also something we're
planning.
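
Just to make the idea concrete, here is a rough sketch of what such a
helper could look like.  The assertQMP name comes from the example
above; everything else (the class name, the '/'-separated path walking)
is hypothetical and not something that exists in the tree today:

    from avocado_qemu import Test

    class QMPTest(Test):
        # Hypothetical base class; names are illustrative only.

        def assertQMP(self, cmd, path, expected, **args):
            # Issue a QMP command and assert that the value found by
            # walking the '/'-separated 'path' through the response
            # equals 'expected', with descriptive failure messages.
            res = self.vm.qmp(cmd, **args)
            self.assertIsNotNone(res, "Empty QMP response to '%s'" % cmd)
            value = res
            for key in path.split('/'):
                self.assertIn(key, value,
                              "No '%s' while looking up '%s' in: %r"
                              % (key, path, res))
                value = value[key]
            self.assertEqual(value, expected)

With something like that, the four-line check in the test could
collapse into a single call, roughly:

    self.assertQMP('set-numa-node', 'error/desc',
                   'NUMA is not supported by this machine-type',
                   type='node')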

>> +        self.assertTrue(self.vm.is_running())
>> +        self.vm.qmp('x-exit-preconfig')
>> +        self.vm.shutdown()
>> +        self.assertEqual(self.vm.exitcode(), 0)
>> +
>> +    def test_no_preconfig(self):
>> +        self.vm.add_args('-nodefaults', '-S')
>> +        self.vm.set_machine('none')
>> +        self.vm.launch()
>> +        res = self.vm.qmp('set-numa-node', type='node')
>> +        self.assertIsNotNone(res, 'Unexpected empty QMP response to "set-numa-node"')
>> +        self.assertEqual(res['error']['class'], 'GenericError')
>> +        self.assertEqual(res['error']['desc'],
>> +                         "The command is permitted only in 'preconfig' state")
> 
> The test looks good to me.
> 
> It could also be done as a qtest in C.  Do we have guidance on when to
> use C / qtest and when to use Python / Avocado?
> 

TBH, I don't have such a guideline, and I'd probably be biased if I
tried to define one.  I've heard from more than one person, though, that
the perceived learning curve and general maintenance costs of
Python/Avocado tests seem to be much lower (your mileage may vary,
of course).

> One possible argument for using Python more could be "tests are cheaper
> to create and easier to debug that way".  Do we have evidence for that,
> or at least gut feelings?
> 

Again, I'd be biased trying to answer that, and IMO this feels like an
emacs vs. vi kind of question.  I dare say we're past that point, and
we should simply embrace contributions of either type of test, if and
when we receive them.

> A possible argument against using Python could be much slower "make
> check".  I have no idea whether that's actually the case.
> 

My data is scarce, but considering that these two tests take 0.01
seconds each on a low-powered environment such as the one provided by
Travis-CI:

https://travis-ci.org/clebergnu/qemu/jobs/457721176#L2051

I wouldn't be too worried about that right now.  Besides, these tests
are not part of "make check" anyway.

> Non-argument: requiring Avocado as a build dependency for testing.  I
> think that's totally fine as long as it's readily available on all our
> supported host platforms.  Same as for any other build dependency,
> really.
> 
> 
> [*] Would be more if you additionally checked res['error'] exists and is
> a dictionary.  I'm not saying you should, just that checking @res isn't
> None feels odd to me unless you do.
> 

You're right, that would indeed be an improvement.  Right now, if
res['error'] does not exist, the outcome is a test ERROR instead of a
test FAILure.  That signals a problem with the test itself (which,
technically speaking, is true), and depending on how you look at it,
it could be said to cast a shadow over the actual FAILure.
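
For what it's worth, one way to get a FAILure rather than an ERROR in
that situation would be to assert on the presence and type of 'error'
before dereferencing it -- a rough sketch, not something this patch
necessarily needs to do:

    res = self.vm.qmp('set-numa-node', type='node')
    self.assertIsNotNone(res, 'Unexpected empty QMP response to "set-numa-node"')
    # Report a missing or malformed 'error' key as a test FAILure,
    # instead of a test ERROR caused by a KeyError in the test itself:
    self.assertIn('error', res, 'QMP response has no "error": %r' % res)
    self.assertIsInstance(res['error'], dict, '"error" is not a dict: %r' % res)
    self.assertEqual(res['error']['class'], 'GenericError')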

Having said that, we're aware of many areas in which the framework
(both in core Avocado and in the QEMU supporting code) can be improved.
The general feeling is that we shouldn't wait until we have a supposedly
perfect environment before starting to write tests.  Besides the
coverage value that those tests bring, the improvement opportunities are
best seen during that process.

Thanks for the review and the very valid points you raised.
- Cleber.


