
Re: [Qemu-devel] [patch v4 12/16] e1000: apply fine lock on e1000


From: Avi Kivity
Subject: Re: [Qemu-devel] [patch v4 12/16] e1000: apply fine lock on e1000
Date: Thu, 25 Oct 2012 18:21:27 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:16.0) Gecko/20121016 Thunderbird/16.0.1

On 10/25/2012 11:31 AM, Jan Kiszka wrote:
> On 2012-10-25 11:01, Avi Kivity wrote:
>> On 10/24/2012 09:17 AM, Jan Kiszka wrote:
>>>>>
>>>>> This is ugly for many reasons. First of all, it is racy as the register
>>>>> content may change while dropping the device lock, no? Then you would
>>>>> raise or clear an IRQ spuriously.
>>>>>
>>>> The device state's integrity is protected by the busy flag, and will not be broken
>>>
>>> Except that the busy flag concept is broken in itself.
>> 
>> How do we fix an mmio that ends up mmio'ing back to itself, perhaps
>> indirectly?  Note this is broken in mainline too, but in a different way.
>> 
>> Do we introduce clever locks that can detect deadlocks?
> 
> That problem is already addressed (to my understanding) by blocking
> nested MMIO in general. 

That doesn't work cross-thread.

vcpu A: write to device X, dma-ing to device Y
vcpu B: write to device Y, dma-ing to device X

My suggestion was to drop the locks around DMA, then re-acquire the lock
and re-validate data.
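
Roughly, the pattern would look like this (a sketch only -- the names and
the generation counter are made up for illustration, not actual e1000 code):

/* Drop the device lock around the DMA, then re-take it and re-validate. */
#include <pthread.h>
#include <stdint.h>

typedef struct DevState {
    pthread_mutex_t lock;
    uint32_t regs[64];
    uint64_t generation;     /* bumped on every register write */
} DevState;

void dev_start_dma(DevState *s, void (*dma_fn)(uint32_t base, uint32_t len))
{
    uint32_t base, len;
    uint64_t gen;

    /* Copy out what the DMA needs while holding the device lock. */
    pthread_mutex_lock(&s->lock);
    base = s->regs[0];
    len  = s->regs[1];
    gen  = s->generation;
    pthread_mutex_unlock(&s->lock);

    /* The DMA may re-enter other devices (or this one); no lock is held
     * here, so two vcpu threads cannot deadlock on lock ordering. */
    dma_fn(base, len);

    pthread_mutex_lock(&s->lock);
    if (s->generation != gen) {
        /* Registers changed while the lock was dropped: re-read the state
         * before raising or clearing the IRQ, instead of acting on stale data. */
    }
    pthread_mutex_unlock(&s->lock);
}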

> The brokenness of the busy flag is that it
> prevents concurrent MMIO by dropping requests.

Right.

> 
>> 
>>> I see that we have an all-or-nothing problem here: to address this
>>> properly, we need to convert the IRQ path to lock-less (or at least
>>> compatible with holding per-device locks) as well.
>> 
>> There is a transitional path where writing to a register that can cause
>> IRQ changes takes both the big lock and the local lock.
>> 
>> Eventually, though, of course all inner subsystems must be threaded for
>> this work to have value.
>> 
> 
> But that transitional path must not introduce regressions. Opening a
> race window between IRQ cause update and event injection is such a
> thing, just like dropping concurrent requests on the floor.

Can you explain the race?
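
For reference, the transitional step I was describing above would look
roughly like this (again a sketch with made-up names; the real code would
go through the existing global-lock helpers):

#include <pthread.h>
#include <stdint.h>

pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the big lock */

typedef struct DevState {
    pthread_mutex_t lock;    /* per-device lock */
    uint32_t icr;            /* interrupt cause register */
} DevState;

void dev_set_irq(DevState *s, int level)
{
    /* Caller holds big_lock: IRQ injection itself stays under the big lock
     * for now, so the inner subsystems need no changes yet. */
    (void)s; (void)level;
}

void dev_write_icr(DevState *s, uint32_t val)
{
    /* Lock order is always big lock first, then the device lock, so this
     * cannot deadlock against MMIO paths that take only s->lock. */
    pthread_mutex_lock(&big_lock);
    pthread_mutex_lock(&s->lock);

    s->icr = val;
    dev_set_irq(s, s->icr != 0);

    pthread_mutex_unlock(&s->lock);
    pthread_mutex_unlock(&big_lock);
}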


-- 
error compiling committee.c: too many arguments to function


