
From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: [PATCH 2 of 5] add can_dma/post_dma for direct IO
Date: Sat, 13 Dec 2008 15:07:28 -0600
User-agent: Thunderbird (X11/20080925)

Avi Kivity wrote:
> Anthony Liguori wrote:
>>> - DMA into mmio regions; this requires bouncing
>> The map() API I proposed above should do bouncing to MMIO regions.  To
>> deal with unbounded allocation, you can simply fail when the mapping
>> allocation has reached some high limit.  Calling code needs to cope
>> with the fact that map'ing may succeed or fail.
>
> There are N users of this code, all of which would need to cope with the
> failure.  Or there could be one user (dma.c) which handles the failure
> and the bouncing.

N should be small in the long term. It should only be needed in places that interact directly with CPU memory: the PCI bus, the ISA bus, some specialty devices, and possibly virtio (although you could argue that virtio should go through the PCI bus).

map() has to be able to fail, and that has nothing to do with bouncing. In the case of Xen, you can have a guest with 8GB of memory while you only have 2GB of virtual address space. If you try to DMA to more than 2GB of memory at once, the mapping will fail. Whoever accesses memory directly in this fashion needs to cope with that.
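
A minimal sketch of what a fallible map() might look like. This is purely illustrative: the function names, the bounce-free fallback, and the fixed budget are my inventions here, not the actual QEMU interface; the point is only that callers must handle a NULL return.

```c
/* Hypothetical sketch, not the actual QEMU API: a map() that fails
 * once a virtual-address-space budget is exhausted.  Callers must be
 * prepared for a NULL return and retry or bail out. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define MAP_BUDGET (2UL << 20)   /* illustrative budget: 2MB of VA space */

static size_t mapped_bytes;      /* bytes currently mapped */

/* Try to map guest-physical range [addr, addr + len); may return NULL. */
void *cpu_physical_memory_map_sketch(uint64_t addr, size_t len)
{
    (void)addr;                  /* a real implementation would use addr */
    if (mapped_bytes + len > MAP_BUDGET) {
        return NULL;             /* budget exhausted: caller must cope */
    }
    mapped_bytes += len;
    return malloc(len);          /* stand-in for a real mapping */
}

void cpu_physical_memory_unmap_sketch(void *p, size_t len)
{
    mapped_bytes -= len;
    free(p);
}
```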

> dma.c _is_ a map/unmap api, except it doesn't expose the mapped data,
> which allows it to control scheduling as well as be easier to use.

As I understand dma.c, it implements the following loop: map() as much as possible, call an actor on the mapped memory, repeat until done, then signal completion.

As an abstraction, it may be useful. I would argue that it should be a bit more generic, though. It should take function pointers for map and unmap too, and then you wouldn't need N versions of it for each different type of API.
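
Something along these lines is what I have in mind. All the names here are illustrative, not the actual dma.c interface; the toy callbacks exist only to show the loop handling a partial mapping.

```c
/* Hypothetical sketch of a generic DMA loop parameterized by map/unmap
 * callbacks, so one implementation serves every bus type.  Not the
 * actual dma.c interface. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef void *(*dma_map_fn)(uint64_t addr, size_t *len);  /* may shrink *len */
typedef void (*dma_unmap_fn)(void *p, size_t len);
typedef void (*dma_actor_fn)(void *p, size_t len, void *opaque);

/* Map as much as possible, run the actor on it, repeat until done.
 * Returns the number of bytes actually processed. */
static size_t dma_run(uint64_t addr, size_t len,
                      dma_map_fn map, dma_unmap_fn unmap,
                      dma_actor_fn actor, void *opaque)
{
    size_t done = 0;
    while (len > 0) {
        size_t chunk = len;
        void *p = map(addr, &chunk);
        if (!p) {
            break;               /* real code would queue and retry later */
        }
        actor(p, chunk, opaque);
        unmap(p, chunk);
        addr += chunk;
        len -= chunk;
        done += chunk;
    }
    return done;
}

/* Toy callbacks for illustration: the "mapping" is a malloc'd buffer
 * capped at 4KB per call, and the actor just counts bytes. */
static void *toy_map(uint64_t addr, size_t *len)
{
    (void)addr;
    if (*len > 4096) {
        *len = 4096;             /* partial mapping: the loop continues */
    }
    return malloc(*len);
}

static void toy_unmap(void *p, size_t len)
{
    (void)len;
    free(p);
}

static void count_actor(void *p, size_t len, void *opaque)
{
    (void)p;
    *(size_t *)opaque += len;
}
```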

> Right, but who would it notify?
>
> We need some place that can deal with this, and it isn't
> _map()/_unmap(), and it isn't ide.c or scsi.c.

The pattern of map(), do IO, unmap(), repeat only really works for block IO. It doesn't work for network traffic: you have to map the entire packet and send it all at once, so you cannot accept a partial mapping result. The IO pattern for sending a packet is much simpler: try to map the whole packet; if the mapping fails, either wait until more space frees up or drop the packet. The same is true for the other uses of direct memory access, like kernel loading.
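
The all-or-nothing case can be sketched as follows. Again, all names are hypothetical; the only point is the rollback on partial failure, after which the caller drops or defers the packet.

```c
/* Hypothetical sketch of all-or-nothing packet mapping: either every
 * scatter/gather element maps, or everything is rolled back. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct sg_entry { uint64_t addr; size_t len; void *mapped; };

/* Map every element or none: on any failure, unmap what was already
 * mapped and report failure so the caller can drop or defer. */
static bool map_packet(struct sg_entry *sg, int n,
                       void *(*map)(uint64_t, size_t),
                       void (*unmap)(void *, size_t))
{
    for (int i = 0; i < n; i++) {
        sg[i].mapped = map(sg[i].addr, sg[i].len);
        if (!sg[i].mapped) {
            while (i-- > 0) {    /* roll back earlier mappings */
                unmap(sg[i].mapped, sg[i].len);
                sg[i].mapped = NULL;
            }
            return false;
        }
    }
    return true;
}

/* Toy map with a budget, so the failure path can be exercised. */
static size_t toy_budget = 8192;

static void *toy_map(uint64_t addr, size_t len)
{
    (void)addr;
    if (len > toy_budget) {
        return NULL;
    }
    toy_budget -= len;
    return malloc(len);
}

static void toy_unmap(void *p, size_t len)
{
    toy_budget += len;
    free(p);
}
```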

What this is describing is not a DMA API. It's a very specific IO pattern. I think that's part of what's causing confusion in this series. It's certainly not at all related to PCI DMA.

I would argue that you really want a block driver interface that takes the necessary information and implements this pattern, but that's not important. Reducing code duplication is a good thing, so however it ends up working out is fine.


Anthony Liguori
