
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v3 3/6] block: Introduce bdrv_dma_map and bdrv_dma_unmap
Date: Fri, 14 Jul 2017 15:46:37 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.0

On 14/07/2017 15:37, Stefan Hajnoczi wrote:
> On Wed, Jul 12, 2017 at 04:03:57PM +0200, Paolo Bonzini wrote:
>> On 12/07/2017 03:07, Fam Zheng wrote:
>>> On Tue, 07/11 12:28, Paolo Bonzini wrote:
>>>> On 11/07/2017 12:05, Stefan Hajnoczi wrote:
>>>>> On Mon, Jul 10, 2017 at 05:08:56PM +0200, Paolo Bonzini wrote:
>>>>>> On 10/07/2017 17:07, Stefan Hajnoczi wrote:
>>>>>>> On Wed, Jul 05, 2017 at 09:36:32PM +0800, Fam Zheng wrote:
>>>>>>>> Allow block driver to map and unmap a buffer for later I/O, as a
>>>>>>>> performance hint.
>>>>>>> The name blk_dma_map() is confusing since other "dma" APIs like
>>>>>>> dma_addr_t and dma_blk_io() deal with guest physical addresses instead
>>>>>>> of host addresses.  They are about DMA to/from guest RAM.
>>>>>>> Have you considered hiding this cached mapping in block/nvme.c so that
>>>>>>> it isn't exposed?  block/nvme.c could keep the last buffer mapped and
>>>>>>> callers would get the performance benefit without a new blk_dma_map()
>>>>>>> API.
>>>>>> One buffer is enough for qemu-img bench, but not for more complex cases
>>>>>> (e.g. fio).
>>>>> I don't see any other blk_dma_map() callers.
>>>> Indeed, the fio plugin is not part of this series, but it also used
>>>> blk_dma_map.  Without it, performance is awful.
>>> How many buffers does fio use, typically? If it's not too many, 
>>> block/nvme.c can
>>> cache the last N buffers. I'm with Stefan that hiding the mapping logic from
>>> block layer callers makes a nicer API, especially since it makes it much
>>> easier for qemu-img to maintain good performance across subcommands.
>> It depends on the queue depth.
>> I think the API addition is necessary; otherwise we wouldn't have added
>> the RAMBlockNotifier, which is a layering violation that does the same
>> thing (create permanent HVA->IOVA mappings).  In fact, the
>> RAMBlockNotifier could be moved out of nvme.c and made to use
>> blk_dma_map/unmap, though I'm not proposing to do it now.
>> I don't think qemu-img convert and dd are impacted by IOMMU map/unmap as
>> heavily as bench, because they operate with queue depth 1.  But adding
>> map/unmap there would not be hard.
> I'm not against an API existing for this.  I would just ask:
> 1. It's documented so the purpose and semantics are clear.
> 2. The name cannot be confused with dma-helpers.c APIs.

Yes, I agree completely.

> Maybe blk_register_buf() or blk_add_buf_hint()?

blk_(un)register_buf, or perhaps iobuf or io_buffer, sounds good to me.

