
Re: [Qemu-devel] coroutine-ucontext broken for x86-32


From: Anthony Liguori
Subject: Re: [Qemu-devel] coroutine-ucontext broken for x86-32
Date: Wed, 09 May 2012 12:17:30 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 05/09/2012 06:38 AM, Jan Kiszka wrote:
On 2012-05-09 08:15, Peter Maydell wrote:
On 9 May 2012 11:11, Kevin Wolf <address@hidden> wrote:
On 08.05.2012 21:35, Jan Kiszka wrote:
I hunted down a fairly subtle corruption of the VCPU thread signal mask
in KVM mode when using the ucontext version of coroutines:

coroutine_new calls getcontext, makecontext and swapcontext. Those
functions also get/set the signal mask of the caller. Unfortunately, on
i386 they only use the sigprocmask syscall, not the rt_sigprocmask
version. So they do not properly save/restore the blocked RT signals,
namely our SIG_IPI - it becomes unblocked this way.
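
For reference, a minimal standalone sketch of the effect (not QEMU code;
the names are illustrative): block an RT signal, check whether getcontext()
captured it in uc_sigmask, and check whether it is still blocked after a
swapcontext() round trip. With the old sigprocmask syscall only the lowest
32 signals are handled, so the RT signal is missed:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];

/* Coroutine body: immediately switch back to the caller; only the
 * signal mask handling of the context switch is of interest here. */
static void co_fn(void)
{
    swapcontext(&co_ctx, &main_ctx);
}

int main(void)
{
    sigset_t set, cur;
    int sig = SIGRTMIN;        /* stand-in for QEMU's SIG_IPI */

    /* Block the RT signal, as the VCPU thread does with SIG_IPI. */
    sigemptyset(&set);
    sigaddset(&set, sig);
    sigprocmask(SIG_BLOCK, &set, NULL);

    /* getcontext() is supposed to save the full signal mask. */
    memset(&co_ctx, 0, sizeof(co_ctx));
    getcontext(&co_ctx);
    printf("RT signal in saved uc_sigmask: %s\n",
           sigismember(&co_ctx.uc_sigmask, sig) ? "yes" : "no (truncated)");

    /* Enter and leave a context once, analogous to coroutine_new(). */
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof(co_stack);
    co_ctx.uc_link = &main_ctx;
    makecontext(&co_ctx, co_fn, 0);
    swapcontext(&main_ctx, &co_ctx);

    /* With the truncated save/restore, the RT signal can end up
     * unblocked here, which is the corruption described above. */
    sigprocmask(SIG_BLOCK, NULL, &cur);
    printf("RT signal still blocked:       %s\n",
           sigismember(&cur, sig) ? "yes" : "no (mask corrupted)");
    return 0;
}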

If other coroutine backends work (sigaltstack?), we could try to detect
the situation in configure and set the right default. Not sure what the
condition is, glibc + i386?

I don't think you can do a compile-time test for this short of
just disabling use of the ucontext code on all i386/Linux platforms.

I think it's becoming increasingly obvious that the setcontext/getcontext
code path is not very well used and prone to nasty libc bugs. Trying
to implement coroutines in C is just a really bad idea and I think
we should be trying to reduce our use of them if we possibly can,
presumably by switching to actually using threads where we really
need the parallelism.

I tend to agree.

FWIW, sigaltstack works around the issue here, but I'm still looking a
bit skeptically at its implementation.

Is there any downside to using SIGUSR1?

Regards,

Anthony Liguori


Jan