Re: [Qemu-devel] [PATCH] ahci: fix FIS I bit and PIO Setup FIS interrupt


From: John Snow
Subject: Re: [Qemu-devel] [PATCH] ahci: fix FIS I bit and PIO Setup FIS interrupt
Date: Fri, 22 Jun 2018 12:26:57 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0


On 06/22/2018 04:58 AM, Paolo Bonzini wrote:
> On 21/06/2018 22:06, John Snow wrote:
>>
>> On 06/20/2018 09:25 AM, Paolo Bonzini wrote:
>>> +    pio_fis_i = is_atapi ? ad->done_atapi_packet : !is_write;
>> Per DPIOO1, does this go to false for the first DRQ block, or did I
>> misunderstand? Currently my understanding:
> 
> DPIOO1 is the !is_atapi && is_write case, where I is currently always 0.
>  When do we have more than one DRQ block, is it for multi-sector PIO
> reads?  Then perhaps we need something like ad->command->done_first_pio.
> 
> Paolo
> 

cmd_read_pio: req_nb_sectors = 1
ide_sector_read:
        sector_num = ide_get_sector(s) (LBA offset)
        n = s->nsector (1 or more sectors)
        but then we clamp n to s->req_nb_sectors, which is 1 here,
        then we build an SGlist pointing at s->io_buffer;
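
In other words, a compilable toy version of just that clamping step
(FakePIOState below is only a stand-in for the two IDEState fields
involved, not real QEMU code):

#include <stdio.h>

/* Stand-in for the two IDEState fields used below. */
typedef struct {
    int nsector;          /* sectors remaining for the command */
    int req_nb_sectors;   /* sectors per DRQ block; 1 for cmd_read_pio */
} FakePIOState;

/* The clamping step described above: whatever nsector says, each
 * transfer is limited to req_nb_sectors, so plain PIO reads move one
 * sector per DRQ block. */
static int drq_block_sectors(const FakePIOState *s)
{
    int n = s->nsector;
    if (n > s->req_nb_sectors) {
        n = s->req_nb_sectors;
    }
    return n;
}

int main(void)
{
    FakePIOState s = { .nsector = 8, .req_nb_sectors = 1 };
    printf("%d sector(s) in this DRQ block\n", drq_block_sectors(&s));
    return 0;
}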

s->io_buffer_total_len = IDE_DMA_BUF_SECTORS*512 + 4;

Oh, actually our buffer here is quite big, 256 sectors plus four extra
bytes that Fabrice never explained.

Max request size for lba28 is going to be 256 sectors on the button, but
64K sectors for lba48. I don't remember offhand if there is some
spec-mandated limit on how large a single DRQ block can be for PATA or SATA.

IDENTIFY Word 47 specifies how many sectors per DRQ block for READ/WRITE
MULTIPLE, so I'm intuiting here that plain READ/WRITE SECTOR(S) implicitly
mandate one sector per DRQ block.
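
For reference, the 256/64K limits just fall out of how the sector count
register is decoded (a zero count selects the maximum); roughly what
ide_cmd_lba48_transform() in hw/ide/core.c does, as a stand-alone sketch:

#include <stdint.h>
#include <stdio.h>

/* A zero sector count means the maximum transfer: 256 sectors for the
 * 8-bit lba28 count, 65536 for the 16-bit lba48 count. */
static int request_sectors(uint16_t count, int lba48)
{
    if (lba48) {
        return count ? count : 65536;
    }
    return (count & 0xff) ? (count & 0xff) : 256;
}

int main(void)
{
    printf("lba28, count=0 -> %d sectors\n", request_sectors(0, 0));
    printf("lba48, count=0 -> %d sectors\n", request_sectors(0, 1));
    return 0;
}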

>> - device->host
>>      DPIOI1
>>      Interrupt bit shall be set.
>> - host->device:
>>      DPIOO1:
>>      0 for first block, 1 otherwise
>> - ATAPI:
>>      0 for packet itself
>>      1 for all data otherwise.
> 
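
Putting that table into code, a minimal sketch of the I-bit rule for the
PIO Setup FIS; the flag names are placeholders: done_packet stands for
ad->done_atapi_packet, and first_drq_block for whatever per-command state
we'd end up adding (e.g. Paolo's suggested done_first_pio). Not the final
patch, just the rule spelled out:

#include <stdbool.h>

/* I bit for the PIO Setup FIS, per the three cases above.  Pure sketch
 * against placeholder flags, not real QEMU code. */
static bool pio_setup_fis_i(bool is_atapi, bool is_write,
                            bool done_packet, bool first_drq_block)
{
    if (is_atapi) {
        /* 0 while transferring the PACKET command itself, 1 for data */
        return done_packet;
    }
    if (is_write) {
        /* DPIOO1 (host-to-device): 0 for the first DRQ block, 1 after */
        return !first_drq_block;
    }
    /* DPIOI1 (device-to-host): the Interrupt bit shall be set */
    return true;
}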

-- 
—js


