Re: [Qemu-devel] [RFC] QOMification of AXI streams


From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] QOMification of AXI streams
Date: Tue, 12 Jun 2012 12:46:55 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120605 Thunderbird/13.0

On 06/12/2012 01:29 AM, Anthony Liguori wrote:
>>
>> So it makes some amount of sense to use the same structure. For example,
>> if a device issues accesses, those could be caught by a sibling device
>> memory region... or go upstream.
>>
>> Let's just look at downstream transformation for a minute...
>>
>> We do need to be a bit careful about transformation here: I need to
>> double check but I don't think we do transformation downstream today in
>> a clean way and we'd have to do that. I.e. on pseries, for example, the
>> PCI host bridge has a window in the CPU address space of [A...A+S], but
>> accesses to that window generate PCI cycles with different addresses
>> [B...B+S] (with typically A and B both being naturally aligned on S so
>> it's just a bit masking in HW).
> 
> I don't know that we really have bit masking done right in the memory API.
> 
> When we add a subregion, it always removes the offset from the address
> when it dispatches.  This more often than not works out well, but for
> what you're describing above, it sounds like you'd really want to get an
> adjusted size (that could be transformed).
> 
> Today we generate a linear dispatch table.  This prevents us from
> applying device-level transforms.

We can perform arbitrary transformations to the address during dispatch
(except it_shift style, but I think that should be added).  The blocker
is that we have just a single dispatch table where we should have
several - one for each initiator group (cpus, each pci bus, etc.).
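
To make that concrete, here is a minimal sketch of what "one dispatch
table per initiator group" could look like.  None of these names exist
in the memory API today; they are purely illustrative.

    /* Hypothetical sketch: one rendered dispatch table per initiator
     * group, instead of the single global table we have now. */
    typedef struct DispatchTable DispatchTable;  /* flattened view */

    typedef struct InitiatorGroup {
        const char *name;       /* "cpu", "pci.0", ... */
        MemoryRegion *root;     /* hierarchy as seen by this initiator */
        DispatchTable *table;   /* rendered from 'root' */
    } InitiatorGroup;

    /* Dispatch would resolve against the initiator's own table, so the
     * same address can mean different things to a cpu and to a device
     * sitting behind an iommu. */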


> 
>> We somewhat implement that in spapr_pci today; it works, but I
>> don't quite understand how :-) Or rather, the terminology "alias" seems
>> to be fairly bogus; we aren't talking about aliases here...
>>
>> So today we create a memory region with an "alias" (whatever that means)
>> that is [B...B+S] and add a subregion which is [A...A+S]. That seems to
>> work, but it's obscure.
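
For reference, the "alias" there is just the memory API's alias
mechanism: an alias region exposes a window of another region at a
different offset.  A minimal sketch of the pattern, with approximate
signatures, symbolic A/B/S, and 'pci_mr' standing in for the PCI bus's
memory region:

    /* Map PCI bus addresses [B..B+S) into the CPU address space at
     * [A..A+S) via an alias region. */
    static void map_pci_window(MemoryRegion *sysmem, MemoryRegion *pci_mr,
                               uint64_t A, uint64_t B, uint64_t S)
    {
        MemoryRegion *window = g_new0(MemoryRegion, 1);

        memory_region_init_alias(window, "pci-window", pci_mr, B, S);
        memory_region_add_subregion(sysmem, A, window);
    }

The alias just means "this region's contents are region X starting at
offset B"; the placement at A comes from where the subregion is added.
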
>>
>> If I were to implement that, I would make it so that the struct
>> MemoryRegion used in that hierarchy contains the address in the local
>> domain -and- the transformed address in the CPU domain, so you can still
>> sort them by CPU addresses for quick access and make this offsetting a
>> standard property of any memory region since it's very common that
>> busses drop address bits along the way.
>>
>> Now, if you want to use that structure for DMA, what you need to do
>> first is when an access happens, walk up the region tree and scan for
>> all siblings at every level, which can be costly.
> 
> So if you stick with the notion of subregions, you would still have a
> single MemoryRegion at the PCI bus layer that has all of its children
> as subregions.  Presumably that "scan for all siblings" is a binary
> search, which shouldn't really be that expensive considering that we're
> likely to have a shallow depth in the memory hierarchy.

We can just render the memory hierarchy from the point of view of the
devices.  This is needed in cases where we don't support dynamic dispatch
(a virtual iommu that is implemented by host hardware), and it is more
efficient elsewhere.
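
As a sketch (hypothetical names, nothing like this exists yet), the
fast path would index a pre-rendered, per-device table instead of
walking the tree on every access:

    /* Hypothetical: flatten the hierarchy as seen from 'dev' into a
     * table that DMA dispatch can index directly.  It would be
     * re-rendered whenever the topology changes (e.g. an iommu
     * mapping update or a sibling's BAR moving). */
    DispatchTable *render_for_device(DeviceState *dev);

    /* DMA from the device then becomes a lookup in that table plus a
     * direct access, with no per-access tree walk. */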

> 
>>
>> Additionally, to handle iommus etc., you need the option for a given
>> memory region to have functions to perform the transformation in the
>> upstream direction.
> 
> I think that transformation function lives in the bus layer
> MemoryRegion.  It's a bit tricky though because you need some sort of
> notion of "who is asking".  So you need:
> 
> dma_memory_write(MemoryRegion *parent, DeviceState *caller,
>                  const void *data, size_t size);

It is not the parent here, but rather the root of the memory hierarchy
as viewed from the device (the enigmatically named 'pcibm' above).  The
pci memory region simply doesn't have the information about where system
memory lives, because it is a sibling region.

Note that the address transformations are not necessarily symmetric (for
example, iommus transform device->system transactions, but not
cpu->device transactions).  Each initiator has a separate DAG to follow.

> 
> This could be simplified at each layer via:
> 
> void pci_device_write(PCIDevice *dev, const void *data, size_t size) {
>     dma_memory_write(dev->bus->mr, DEVICE(dev), data, size);
> }
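
With the root-of-the-device's-view argument (and the bus address that a
real transfer needs, which the sketch above leaves out), that helper
would look more like the following.  Names here are hypothetical;
'pci_device_dma_view()' does not exist.

    /* Sketch: 'view' is the root of the memory hierarchy as seen by
     * the device (not the PCI bus region), and 'addr' is the bus
     * address the device is targeting. */
    void pci_device_write(PCIDevice *dev, dma_addr_t addr,
                          const void *data, size_t size)
    {
        MemoryRegion *view = pci_device_dma_view(dev);  /* hypothetical */

        dma_memory_write(view, DEVICE(dev), addr, data, size);
    }
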
> 
>> To be true to the HW, each bridge should have its memory region, so a
>> setup with
>>
>>        /pci-host
>>            |
>>            |--/p2p
>>                 |
>>                 |--/device
>>
>> Any DMA done by the device would walk through the p2p region to the host,
>> which would contain a region with transform ops.
>>
>> However, at each level, you'd have to search for sibling regions that
>> may decode the address at that level before moving up, i.e. implement
>> essentially the equivalent of the PCI subtractive decoding scheme.
> 
> Not quite...  subtractive decoding only happens for very specific
> devices IIUC.  For instance, a PCI-ISA bridge.  Normally, it's positive
> decoding and a bridge has to describe the full region of MMIO/PIO that
> it handles.
> 
> So it's only necessary to traverse down the tree again for the very
> special case of PCI-ISA bridges.  Normally you can tell just by looking
> at siblings.
> 
>> That will be a significant overhead for your DMA ops I believe, though
>> doable.
> 
> Worst case scenario: 256 devices with, what, a 3-level-deep hierarchy?
> We're still talking about 24 simple address compares.  That shouldn't be
> so bad.

Or just look up the device-local phys_map.

> 
>> Then we'd have to add map/unmap to MemoryRegion as well, with the
>> understanding that they may not be supported at every level...
> 
> map/unmap can always fall back to bounce buffers.
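
The usual shape of that fallback, as a sketch (hypothetical helpers,
not an existing API): try to get a direct pointer through the device's
view, and if that fails, go through a temporary buffer.

    /* Sketch: map 'len' bytes at bus address 'addr' for the device.
     * 'is_write' means the caller intends to write to the memory.
     * If no direct mapping is possible, fall back to a bounce buffer;
     * the write-back then has to happen at unmap time. */
    void *dma_map(MemoryRegion *view, dma_addr_t addr, size_t len,
                  bool is_write, bool *bounced)
    {
        void *direct = view_map_direct(view, addr, len, is_write); /* hypothetical */

        if (direct) {
            *bounced = false;
            return direct;
        }

        *bounced = true;
        void *buf = g_malloc(len);
        if (!is_write) {
            /* Pre-fill the buffer so the caller sees current contents. */
            view_read(view, addr, buf, len);                       /* hypothetical */
        }
        return buf;
    }
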
> 
>> So yeah, it sounds doable and it would handle what DMAContext doesn't
>> handle, which is access to peer devices without going all the way back to
>> the "top level", but it's complex and ... I need something in qemu
>> 1.2 :-)
> 
> I think we need a longer term vision here.  We can find incremental
> solutions for the short term but I'm pretty nervous about having two
> parallel APIs only to discover that we need to converge in 2 years.

The API already exists; we just need to fill in the data structures.

-- 
error compiling committee.c: too many arguments to function