
Re: [Qemu-devel] exec: Safe work in quiescent state

From: Sergey Fedorov
Subject: Re: [Qemu-devel] exec: Safe work in quiescent state
Date: Wed, 15 Jun 2016 22:16:59 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 15/06/16 17:56, Alex Bennée wrote:
> Sergey Fedorov <address@hidden> writes:
> Just some quick comments for context:
>> Alex's reiteration of Fred's approach [2]:
>> - maintains a single global safe work queue;
> Having separate queues can lead to problems with draining, as only one
> queue gets drained at a time and some threads exit more frequently than
> others.

I don't think that can happen if we drain all the queues for all the
CPUs, as we should. The requirement is: stop all the CPUs and process
all the pending work. If we follow this requirement, it doesn't really
matter whether we have a separate queue per CPU or a single global queue.
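To illustrate the point, here is a minimal single-threaded sketch of draining every CPU's queue in one pass. The names (CPUState, work_item, the list fields) are simplified stand-ins for illustration, not QEMU's actual structures:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*work_fn)(void *opaque);

struct work_item {
    work_fn fn;
    void *opaque;
    struct work_item *next;
};

typedef struct CPUState {
    struct work_item *safe_work; /* per-CPU pending safe work */
} CPUState;

#define NR_CPUS 2
static CPUState cpus[NR_CPUS];

static void queue_safe_work(CPUState *cpu, struct work_item *wi)
{
    wi->next = cpu->safe_work;
    cpu->safe_work = wi;
}

/* Once every CPU is known to be outside its execution loop, drain ALL
 * queues, not just the current CPU's. That avoids the starvation case
 * where threads that exit more often only ever drain their own queue. */
static void process_all_safe_work(void)
{
    for (int i = 0; i < NR_CPUS; i++) {
        struct work_item *wi = cpus[i].safe_work;
        cpus[i].safe_work = NULL;
        while (wi) {
            struct work_item *next = wi->next;
            wi->fn(wi->opaque);
            wi = next;
        }
    }
}

static int done;
static void bump(void *opaque) { done += *(int *)opaque; }
```

With that invariant, which CPU's thread happens to run the drain is irrelevant: every queued item runs exactly once before any CPU re-enters its execution loop.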

>> - uses GArray rather than linked list to implement the work queue;
> This was to minimise g_malloc() calls on job creation and when working
> through the list. An awful lot of jobs need only the CPU id and a single
> parameter, which is why I made that the simple case.

I think it would be nice to avoid g_malloc() without using an array at
the same time. I have some thoughts on how to do this easily; let's see
the code ;-)
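One possible shape for this (purely a sketch, not the code I have in mind; all names here are hypothetical) is a small fixed pool of work slots preallocated with the CPU state, so queuing the common "CPU id plus one parameter" job needs neither g_malloc() nor a growable array:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: preallocated per-CPU work slots. Structure and
 * function names are illustrative, not QEMU's actual API. */
#define WORK_SLOTS 4

struct work_slot {
    void (*fn)(int cpu_index, void *opaque);
    void *opaque;
    int in_use;
};

typedef struct CPUWorkPool {
    struct work_slot slots[WORK_SLOTS]; /* fixed storage, no malloc */
} CPUWorkPool;

static struct work_slot *work_slot_get(CPUWorkPool *pool)
{
    for (int i = 0; i < WORK_SLOTS; i++) {
        if (!pool->slots[i].in_use) {
            pool->slots[i].in_use = 1;
            return &pool->slots[i];
        }
    }
    return NULL; /* pool exhausted; caller could fall back to malloc */
}

static void work_slot_put(struct work_slot *slot)
{
    slot->in_use = 0; /* recycle the slot after the work has run */
}
```

The trade-off is a bounded pool: the rare job that overflows it, or that needs more than one parameter, would still have to take a slow path with an allocation.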

>> - introduces a global counter of CPUs which have entered their execution
>> loop;
>> - makes use of the last CPU exited its execution loop to drain the safe
>> work queue;
> I suspect you can still race with other deferred work as those tasks are
> being done outside the exec loop. This should be fixable though.

Will keep an eye on this, thanks.
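For reference, the counter scheme described above can be sketched as follows. This is a deliberately simplified, single-threaded illustration with hypothetical names; the real thing has to cope with the deferred-work races Alex mentions:

```c
#include <assert.h>
#include <stdatomic.h>

/* Global count of CPUs currently inside their execution loop. The CPU
 * that decrements it to zero while safe work is pending knows the
 * system is quiescent and runs the drain. */
static atomic_int cpus_in_exec;
static atomic_int safe_work_pending;
static int drained;

static void cpu_exec_enter(void)
{
    atomic_fetch_add(&cpus_in_exec, 1);
}

static void cpu_exec_exit(void)
{
    /* atomic_fetch_sub() returns the previous value, so seeing 1 means
     * we were the last CPU still in the loop. */
    if (atomic_fetch_sub(&cpus_in_exec, 1) == 1 &&
        atomic_load(&safe_work_pending)) {
        atomic_store(&safe_work_pending, 0);
        drained = 1; /* stand-in for processing the safe work queue */
    }
}
```

The subtlety is exactly the race noted above: work queued from outside any execution loop must still force a drain, since no CPU may be left to decrement the counter to zero.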

>> - still does not support user-mode emulation.
> There is no particular reason it couldn't. However, it would mean
> updating the linux-user cpu_exec loop, which most likely needs a good
> clean-up and re-factoring to avoid making the change to every $ARCH loop.

Yes, you are right, I was just stating the facts here :)

Kind regards,
