On Thu, Jan 24, 2019 at 10:11:55AM +0000, Daniel P. Berrangé wrote:
On Wed, Jan 23, 2019 at 07:53:46PM +0000, Dr. David Alan Gilbert wrote:
* Jason Wang (address@hidden) wrote:
On 2019/1/22 2:56 AM, Peter Maydell wrote:
On Thu, 17 Jan 2019 at 09:46, Jason Wang <address@hidden> wrote:
On 2019/1/15 12:33 AM, Zhang Chen wrote:
On Sat, Jan 12, 2019 at 12:15 AM Dr. David Alan Gilbert
<address@hidden> wrote:
* Peter Maydell (address@hidden) wrote:
> Recently I've noticed that test-filter-mirror has been hanging
> intermittently, typically when run on some other TCG architecture.
> In the instance I've just looked at, this was with s390x guest on
> x86-64 host, though I've also seen it on other host archs and
> perhaps with other guests.
Watch out to see whether you really do see it with other guests;
the test carefully avoids virtio-net in order to avoid vhost, but on s390x
it uses virtio-net-ccw - could that hit the vhost it was trying to avoid?
> Below is a backtrace, though it seems to be pretty unhelpful.
> Anybody got any theories? Does the mirror test rely on dirty
> memory bitmaps like the migration test (which also hangs
> occasionally with TCG due to some bug I'm sure we've investigated
> in the past)?
I don't think it relies on the CPU at all.
I have no idea about this currently, but Jason and I designed the
test case.
Adding Jason: do you have any comments on this?
I can't reproduce this locally with s390x-softmmu. It looks to me that the
test should be independent of any kind of emulation: it should pass
whenever the main loop works.
I've just seen a hang with ppc64 guest on s390x host, so it is
indeed not specific to s390x guest (and so not specific to
virtio-net either, since the ppc64 guest setup uses e1000).
thanks
-- PMM
Finally reproduced it locally after hundreds (sometimes thousands) of
runs.
Bisection points to the OOB monitor [1].
It looks to me that after OOB started being used unconditionally, we lost a
barrier that made sure the socket was connected before sending packets in
test-filter-mirror.c. Is there any other similarly simple thing we could do
to kick the main loop?
Do you mean the:
/* send a qmp command to guarantee that 'connected' is set to true. */
qmp_discard_response(qts, "{ 'execute' : 'query-status'}");
Why was that ever sufficient to know the socket was ready?
This doesn't make any sense to me.
There's the netdev socket, which has been passed in as a pre-opened socket
FD, so that's guaranteed connected.
There's the chardev server socket, to which we've just done a unix_connect()
call to establish a connection. If unix_connect() has succeeded, then at least
the socket is connected & ready for I/O from the test's side. This is a
reliable stream socket, so even if the test sends data on the socket right away
and QEMU isn't ready, it won't be lost. It'll be buffered and received by QEMU
as soon as QEMU starts to monitor for incoming data on the socket.
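To make that concrete, here is a minimal sketch of the test-side pattern
in plain POSIX C (the function name and socket path are made up for
illustration, this is not the actual test code):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Minimal sketch: connect to a unix stream socket and write right
 * away. Once connect() has returned success, the written bytes sit in
 * the kernel socket buffer and are delivered whenever the peer (QEMU)
 * gets around to reading - they are not lost just because the peer
 * has not started polling the socket yet. */
static int connect_and_send(const char *path, const void *buf, size_t len)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0) {
        return -1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        write(fd, buf, len) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}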
So I don't get what trying to wait for a "connected" state actually achieves.
It feels like a mistaken attempt to paper over some other unknown flaw that
just worked by some lucky side-effect.
Immediately after writing that, I see what's happened.
The filter_redirector_receive_iov() method is triggered when QEMU reads
from the -netdev socket (which we passed in as an FD and to which the
test immediately writes).
This method will discard all data, however, if the chr_out -chardev is
not in a connected state. So we do indeed have a race condition in this
test suite.
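In sketch form (hypothetical names, not the literal net/filter-mirror.c
code), the problematic shape is:

#include <stddef.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Hypothetical sketch of the race: the filter's receive hook wants to
 * copy the packet to the out chardev, but until that chardev reaches
 * the connected state the write is silently swallowed, so any packet
 * arriving before the chardev connects simply vanishes. */
typedef struct ExampleMirrorState {
    int chr_out_connected;   /* false until the out chardev has a peer */
} ExampleMirrorState;

static ssize_t example_receive_iov(ExampleMirrorState *s,
                                   const struct iovec *iov, int iovcnt)
{
    size_t len = 0;

    for (int i = 0; i < iovcnt; i++) {
        len += iov[i].iov_len;
    }
    if (!s->chr_out_connected) {
        /* Report success but forward nothing: the packet is dropped,
         * which is exactly what the hanging test observes. */
        return len;
    }
    /* ... otherwise forward the packet to the out chardev ... */
    return len;
}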
In fact I'd say this filter-mirror object is racy by design even in normal
usage, if your chardev is in server mode with "nowait" set, or in client
mode with "reconnect" set: it will simply discard data.
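For example (an illustrative command line, with made-up ids and path), a
configuration like this can silently drop the first mirrored packets:

    qemu-system-x86_64 ... \
        -netdev user,id=hn0 \
        -chardev socket,id=mirror0,path=/tmp/mirror.sock,server,nowait \
        -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0

With "nowait", QEMU starts forwarding guest traffic into the filter
before anything has connected to /tmp/mirror.sock, so everything
mirrored before the first client attaches is discarded.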
If we care about the race in real QEMU execution, then we must either
document that "nowait" and "reconnect" should never be used with
filter-mirror, or perhaps make use of qemu_chr_wait_connected() to
synchronize startup of the filter-mirror object with the chardev
initialization. That could fix the test suite too.
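A rough sketch of the second option (untested; it would live in
net/filter-mirror.c with its existing includes, and the function name
and error handling here are just an outline, not a patch):

/* Untested sketch: block filter-mirror setup until the out chardev is
 * connected, so early packets cannot be silently discarded. */
static void filter_mirror_setup_sketch(NetFilterState *nf, Error **errp)
{
    MirrorState *s = FILTER_MIRROR(nf);
    Chardev *chr = qemu_chr_find(s->outdev);

    if (!chr) {
        error_setg(errp, "chardev '%s' not found", s->outdev);
        return;
    }
    /* Wait here until the peer has actually connected. */
    if (qemu_chr_wait_connected(chr, errp) < 0) {
        return;
    }
    if (!qemu_chr_fe_init(&s->chr_out, chr, errp)) {
        return;
    }
    /* ... rest of the existing setup path ... */
}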