From: Peter Maydell
Subject: Re: [Qemu-block] [PULL 0/7] Block/Multiboot patches for 2.10.0-rc3
Date: Fri, 11 Aug 2017 18:10:35 +0100

On 11 August 2017 at 15:05, Kevin Wolf <address@hidden> wrote:
> The following changes since commit 95766c2cd04395e5712b4d5967b3251f35d537df:
>
>   Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' 
> into staging (2017-08-10 18:53:39 +0100)
>
> are available in the git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to 8565c3ab537e78f3e69977ec2c609dc9417a806e:
>
>   qemu-iotests: fix 185 (2017-08-11 14:44:39 +0200)
>
> ----------------------------------------------------------------
> Block layer patches for 2.10.0-rc3
>
> ----------------------------------------------------------------

I get an intermittent failure in test-aio-multithread on aarch64:

MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}
gtester -k --verbose -m=quick tests/test-aio-multithread
TEST: tests/test-aio-multithread... (pid=19863)
  /aio/multi/lifecycle:                                                OK
  /aio/multi/schedule:                                                 OK
  /aio/multi/mutex/contended:                                          OK
  /aio/multi/mutex/handoff:                                            OK
  /aio/multi/mutex/mcs:                                                **
ERROR:/home/pm215/qemu/tests/test-aio-multithread.c:368:test_multi_fair_mutex:
assertion failed (counter == atomic_counter): (343406 == 343407)
FAIL
GTester: last random seed: R02S227b39277b8c54976c98f0e990305851
(pid=21145)
  /aio/multi/mutex/pthread:                                            OK
FAIL: tests/test-aio-multithread
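
For anyone reading along: the assertion compares two counters that are
expected to stay in lockstep, one incremented under the mutex being
tested and one incremented atomically, so an off-by-one result like
"343406 == 343407" looks like a single increment being lost inside the
locking primitive. As a rough illustration of that pattern (not the real
test, which uses QEMU's CoMutex and AioContext threads), a stand-alone
sketch with plain pthreads and C11 atomics would be something like the
following; NUM_THREADS, ITERATIONS and worker() are made-up names for
the sketch only:

    /* Sketch of the counter-vs-atomic-counter consistency check. */
    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define ITERATIONS  100000

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter;               /* protected by 'lock' */
    static atomic_long atomic_counter; /* updated with atomic adds */

    static void *worker(void *opaque)
    {
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
            atomic_fetch_add(&atomic_counter, 1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];

        for (int i = 0; i < NUM_THREADS; i++) {
            pthread_create(&threads[i], NULL, worker, NULL);
        }
        for (int i = 0; i < NUM_THREADS; i++) {
            pthread_join(threads[i], NULL);
        }

        /* If the lock ever loses an increment, this mirrors the
         * "counter == atomic_counter" failure seen above. */
        assert(counter == atomic_counter);
        printf("counter=%ld atomic_counter=%ld\n", counter, atomic_counter);
        return 0;
    }

(Build with something like "cc -pthread sketch.c"; with a correct mutex
the assertion never fires, which is what makes the intermittent failure
above interesting.)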


but I've pushed this to master on the optimistic assumption that
it's not the fault of anything in this pullreq... (will
investigate further)

thanks
-- PMM


