From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH RFC v2 2/2] hw/pci: handle unassigned pci addresses
Date: Sun, 15 Sep 2013 15:17:50 +0300

On Sun, Sep 15, 2013 at 12:23:41PM +0100, Peter Maydell wrote:
> On 15 September 2013 12:05, Michael S. Tsirkin <address@hidden> wrote:
> > On Sun, Sep 15, 2013 at 11:56:40AM +0100, Peter Maydell wrote:
> >> The alias will win for the addresses it handles. But if
> >> the alias is a container with "holes" then it doesn't handle
> >> the "holes" and the lower priority background region will
> >> get them.
> 
> > Confused. How can there be a container with holes?
> 
> You just create a container memory region with size,
> say 0x8000, and map subregions into it which
> cover, say, 0x0-0xfff and 0x2000-0x3fff. Then the
> remaining area 0x1000-0x1fff and 0x4000-0x7fff
> isn't covered by anything.
> 
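For concreteness, a minimal sketch of such a layout using the usual init
calls ("s" and "my_ops" are made-up placeholders, and I'm writing the
owner-taking init signatures from memory):

    MemoryRegion cont, lo, hi;

    /* Pure container: no I/O ops of its own. */
    memory_region_init(&cont, OBJECT(s), "holey-container", 0x8000);

    memory_region_init_io(&lo, OBJECT(s), &my_ops, s, "lo", 0x1000);
    memory_region_init_io(&hi, OBJECT(s), &my_ops, s, "hi", 0x2000);

    memory_region_add_subregion(&cont, 0x0000, &lo);  /* covers 0x0000..0x0fff */
    memory_region_add_subregion(&cont, 0x2000, &hi);  /* covers 0x2000..0x3fff */

    /* 0x1000..0x1fff and 0x4000..0x7fff are the "holes": cont itself never
     * answers there, so those accesses fall through to whatever lower
     * priority region sits underneath it. */
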
> > Imagine this configuration:
> >
> > region B - subregion of A, from 0x1000 to 0x3000
> > region C - subregion of A, from 0x2000 to 0x4000
> >
> > region D - subregion of B from offset 0 to 0x1000
> >
> > If B has higher priority than C, then part of C
> > from 0x2000 to 0x3000 is hidden, even though B
> > is a container and there's no subregion of B covering
> > that address range.
> 
> No, unless you've given B itself I/O operations by
> creating it with memory_region_init_io() [or _ram,
> _rom_device or _iommu, but giving those subregions
> is pretty weird]

Is this allowed then?
If not, maybe we should add an assert.

>. If it's a "pure container" then it
> doesn't respond for areas that none of its subregions
> cover (it can't, it has no idea what it should do).

Interesting. This really is completely undocumented
though.

The documentation merely says:
        specifies a priority that allows the core to decide which of two
        regions at the same address are visible (highest wins)

which makes one think the only thing affecting
visibility is the priority.


I guess it's just like the other weird rule
which says that only the start address of
the transaction is used to select a region.

> The code that implements this is the recursive
> function memory.c:render_memory_region(),
> which is what flattens the MemoryRegion hierarchy
> into a flat view of "what should each part of this
> address space do?".
> 
> In your example, we start by calling render_memory_region()
> to render A into our FlatView. To do this we render each
> subregion of A in priority order, so that's B then C.
> To render B, since it's also a container, we render each
> of its subregions. That means just D, so we add D's
> I/O operations to the FlatView at addresses 0x1000..0x1fff.
> Then we're done rendering B, because it has no I/O
> ops of its own (mr->terminates is false).
> Next up, render C. No subregions, so just render itself
> into the FlatView. When we are working out if we can
> put it into the FlatView, already claimed areas of the
> FlatView take precedence. But the only thing there is
> the 0x1000..0x1fff, so all of 0x2000..0x3fff is free and
> we put C's I/O ops there.
> Then we're done, because there are no I/O ops for A.
> 
> The key point I think is that when we're doing the "can
> I put this thing here?" check we're checking against the
> FlatView as populated so far, not against sibling
> MemoryRegions. Note also that we can handle the
> case where we have a MemoryRegion that in the
> FlatView is split into two pieces because of a preexisting
> section which has already been assigned.
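
For what it's worth, here's a rough, self-contained sketch of that rule as I
understand it from the description above -- deliberately byte-granular, and
not the real memory.c code; the SketchRegion type is made up purely for
illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Made-up stand-in for a MemoryRegion, for illustration only. */
    typedef struct SketchRegion SketchRegion;
    struct SketchRegion {
        const char *name;
        uint64_t addr;           /* offset within the parent */
        uint64_t size;
        bool terminates;         /* has I/O ops of its own, i.e. not a pure container */
        SketchRegion **subs;     /* children, pre-sorted highest priority first */
        int nsubs;
    };

    /* owner[] spans the address space being rendered and starts out all
     * NULL; owner[a] ends up pointing at the region that answers byte a. */
    static void render(const SketchRegion **owner, const SketchRegion *r,
                       uint64_t base)
    {
        int i;
        uint64_t a;

        /* Children first, highest priority first. */
        for (i = 0; i < r->nsubs; i++) {
            render(owner, r->subs[i], base + r->subs[i]->addr);
        }

        /* A pure container claims nothing itself, so its "holes" stay
         * free for whatever is rendered after it. */
        if (!r->terminates) {
            return;
        }

        /* Claim only what earlier (higher priority) rendering left free;
         * the region may end up split into several pieces. */
        for (a = base; a < base + r->size; a++) {
            if (!owner[a]) {
                owner[a] = r;
            }
        }
    }

Applied to the A/B/C/D layout above, this gives D at 0x1000..0x1fff and C at
0x2000..0x3fff, with everything else left unclaimed, which matches the
walk-through.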

Okay, I missed this part. But we can't put this in the documentation;
I think it's too tied to a specific implementation.
I wonder whether there's a way to describe this in terms that don't
expose the implementation of render_memory_region, somehow.

Maybe something like the following:
When multiple regions cover the same address, only one region is going to
"win" and get invoked for an access.
The winner can be determined as follows:
- "pure container" regions created with memory_region_init(..)
   are ignored
- if multiple non-container regions cover an address, the winner is
  determined using a priority vector, built from the priority field values
  from the address space down to our region (i.e. region priority, followed
  by subregion priority, followed by sub-subregion priority, etc.)

These priority vectors are compared as follows (worked example below):

- if vectors are identical, which wins is undefined
- otherwise if one vector is a sub-vector of another,
  which wins is undefined
- otherwise the first vector in the lexicographical
  order wins
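
As a worked example of the comparison (with made-up numbers): suppose region
X is mapped straight into the address space with priority 2, while region Y
(priority 0) sits inside a container that is itself mapped with priority 1,
and X and Y overlap. X's vector is (2) and Y's is (1, 0); neither is
identical to nor a sub-vector of the other, so the lexicographical rule
applies and, reading it together with the existing "highest wins" wording
(i.e. the greater vector wins), X gets the access. If the container were
instead mapped with priority 2, the vectors would be (2) and (2, 0), one a
sub-vector of the other, and which wins would be undefined.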



> Mostly this doesn't come up because you don't need
> to play games with overlapping memory regions and
> containers very often: the common case is "nothing
> overlaps at all". But the facilities are there if you need
> them.
> 
> -- PMM

Dynamic regions like PCI BARs are actually very common;
IMO this means the overlapping case is actually very common.



