Re: [PATCH v2 10/10] iotests/030: Unthrottle parallel jobs in reverse

From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v2 10/10] iotests/030: Unthrottle parallel jobs in reverse
Date: Fri, 12 Nov 2021 19:25:37 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.1.0

11.11.2021 15:08, Hanna Reitz wrote:
See the comment for why this is necessary.

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
  tests/qemu-iotests/030 | 11 ++++++++++-
  1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index 5fb65b4bef..567bf1da67 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -251,7 +251,16 @@ class TestParallelOps(iotests.QMPTestCase):
              self.assert_qmp(result, 'return', {})
-        for job in pending_jobs:
+        # Do this in reverse: After unthrottling them, some jobs may finish
+        # before we have unthrottled all of them.  This will drain their
+        # subgraph, and this will make jobs above them advance (despite those
+        # jobs on top being throttled).  In the worst case, all jobs below the
+        # top one are finished before we can unthrottle it, and this makes it
+        # advance so far that it completes before we can unthrottle it - which
+        # results in an error.
+        # Starting from the top (i.e. in reverse) does not have this problem:
+        # When a job finishes, the ones below it are not advanced.

Hmm, interesting why only the jobs above the finished job may advance in this case.

It looks like something may change here later, and then this workaround will stop working.

Isn't it better to just handle the error and not care whether the job has already finished?

Something like

if result['return'] != {}:
    # The job finished during the drain caused by the completion of an
    # already unthrottled job
    self.assert_qmp(result, 'error/class', 'DeviceNotActive')

The next thing the test case does is check for completion events, so we'll get all
the events anyway.
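To make the suggestion concrete, here is a self-contained sketch of such an
error-tolerant unthrottle loop (assumed names; the real test uses
iotests.QMPTestCase with self.vm.qmp, and StubVM below is purely
illustrative):

```python
def unthrottle_all(vm, pending_jobs):
    """Unthrottle every job, tolerating jobs that already finished."""
    for job in pending_jobs:
        result = vm.qmp('block-job-set-speed', device=job, speed=0)
        if result.get('return') != {}:
            # The job finished during the drain caused by the completion
            # of an already unthrottled job; QMP then reports the job
            # as nonexistent.
            assert result['error']['class'] == 'DeviceNotActive'

# Hypothetical stub VM for illustration only: pretend 'node1' has
# already finished, so block-job-set-speed fails for it with
# DeviceNotActive while the other jobs succeed.
class StubVM:
    def qmp(self, cmd, **kwargs):
        if kwargs.get('device') == 'node1':
            return {'error': {'class': 'DeviceNotActive',
                              'desc': 'Job not found'}}
        return {'return': {}}

# The loop completes without raising, in any iteration order.
unthrottle_all(StubVM(), ['node0', 'node1', 'node2'])
```

With this shape, the iteration order of pending_jobs no longer matters, so
the reverse-order workaround would be unnecessary.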

+        for job in reversed(pending_jobs):
              result = self.vm.qmp('block-job-set-speed', device=job, speed=0)
              self.assert_qmp(result, 'return', {})

Best regards,
