
Re: [Qemu-devel] about correctness of IDE emulation

From: Huaicheng Li (coperd)
Subject: Re: [Qemu-devel] about correctness of IDE emulation
Date: Wed, 13 Apr 2016 02:25:31 -0500

> On Mar 14, 2016, at 10:09 PM, Huaicheng Li <address@hidden> wrote:
>> On Mar 13, 2016, at 8:42 PM, Fam Zheng <address@hidden> wrote:
>> On Sun, 03/13 14:37, Huaicheng Li (coperd) wrote:
>>> Hi all, 
>>> What I’m confused about is this:
>>> If one I/O is too large and needs several rounds (say 2) of DMA transfers,
>>> it seems the second round begins only after the completion of the first
>>> part, by reading data from **IDEState**. But the IDEState info may have
>>> been changed by VCPU threads (by writing new I/Os to it) by the time the
>>> first transfer finishes. From the code, I see that the IDE r/w callback
>>> function will continue the second transfer by referencing IDEState’s
>>> information. Wouldn’t this be problematic? Am I missing anything here?
>> Can you give a concrete example? I/O in VCPU threads that changes IDEState
>> must also take care of the DMA transfers; for example, ide_reset() has
>> blk_aio_cancel and clears s->nsectors. If an I/O handler fails to do so,
>> it is a bug.
>> Fam
> I get it now. ide_exec_cmd() can only proceed when BUSY_STAT|DRQ_STAT is
> not set. When the 2nd DMA transfer continues, BUSY_STAT|DRQ_STAT is already
> set, i.e., no other new ide_exec_cmd() can enter. BUSY or DRQ is cleared
> only when all DMA transfers are done, after which new writes to the IDE are
> allowed. Thus it’s safe.
> Thanks, Fam & Stefan.

Hi all, I have some further puzzles about IDE emulation:

  (1). IDE can only handle I/Os one by one, so in the AIO queue there will
always be only **ONE** I/O from this IDE device, right? Big I/Os that need to
be split into several rounds of DMA transfers are also served one by one
(after one DMA transfer [as an AIO] finishes, the next DMA transfer is
submitted, and so on). What I mean is that there is no batch submission in
the IDE path at all. True?
  (2). When the guest kernel prepares a big I/O that needs multiple rounds of
DMA transfers, will each DMA round (one PRD entry) be trapped and trigger one
IDE emulation pass, or will IDE handle all the PRD entries in one shot?
  (3). I traced the execution of my guest application, which issues big I/Os
(each reading 2MB), and in the IDE layer I found that each request is split
into 512KB chunks for each DMA transfer. Why 512KB? From the BMDMA spec, the
PRD table can hold at most 64KB / 8B = 8192 entries, each of which can
describe an at most 64KB contiguous buffer. That would give us
8192 * 64KB = 512MB for each DMA.

Am I missing anything here?  

Thanks for your attention.

