Re: [Qemu-devel] [PATCH RFC] memory: drop _overlap variant


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH RFC] memory: drop _overlap variant
Date: Thu, 14 Feb 2013 16:47:34 +0200

On Thu, Feb 14, 2013 at 02:34:20PM +0000, Peter Maydell wrote:
> On 14 February 2013 14:02, Michael S. Tsirkin <address@hidden> wrote:
> > Well that's the status quo. One of the issues is, you have
> > no idea what else uses each priority. With this change,
> > at least you can grep for it.
> 
> No, because most of the code you find will be setting
> priorities for completely irrelevant containers (for
> instance PCI doesn't care at all about priorities used
> by the v7m NVIC).
> 
> > Imagine the specific example: ioapic and pci devices. ioapic has
> > an address within the pci hole but it is not a subregion.
> > If priority has no meaning how would you decide which one
> > to use?
> 
> I don't know about the specifics of the PC's memory layout,
> but *something* has to manage the address space that is
> being set up. I would expect something like:
> 
>  * PCI host controller has a memory region (container) which
>    all the PCI devices are mapped into as per guest programming
>  * ioapic has a memory region
>  * there is another container which contains both these
>    memory regions. The code that controls and sets up that
>    container [which is probably the pc board model] gets to
>    decide priorities, which are purely local to it

This assumes we set up devices in code.
We are trying to move away from that, and have
APIs that let you set up boards from the command line.
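
(For concreteness, the board-code version of the layout Peter
describes might look roughly like the sketch below. The region
names, sizes and priority values are invented for illustration;
only the API calls themselves are real.)

    #include "exec/memory.h"

    static MemoryRegion pc_hole;    /* container owned by the board    */
    static MemoryRegion pci_space;  /* container owned by the PCI host */
    static MemoryRegion ioapic_mr;  /* initialized by the ioapic model */

    static void board_map_pc_hole(void)
    {
        memory_region_init(&pc_hole, "pc-hole", 0x20000000);
        memory_region_init(&pci_space, "pci-space", 0x20000000);

        /* The board owns pc_hole, so it decides the purely local
         * priorities: the ioapic (priority 1) stays visible even if
         * the guest programs a PCI BAR on top of it (priority 0).
         */
        memory_region_add_subregion_overlap(&pc_hole, 0, &pci_space, 0);
        memory_region_add_subregion_overlap(&pc_hole, 0x1ec00000,
                                            &ioapic_mr, 1);

        /* 0xe0000000 + 0x1ec00000 == 0xfec00000, the usual ioapic
         * address.
         */
        memory_region_add_subregion(get_system_memory(), 0xe0000000,
                                    &pc_hole);
    }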


> (It's possible that at the moment the "another container" is
> the get_system_memory() system address space. If it makes life
> easier you can always invent another container to give you a
> fresh level of indirection.)
> 
> > Also, on a PC many addresses are guest-programmable. We need to behave
> > in some defined way if the guest programs addresses to something silly.
> 
> Yes, this is the job of the code controlling the container(s)
> into which those memory regions may be mapped.

Some containers don't know what is mapped into them.
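
(As a minimal sketch of what "the code controlling the container"
means in practice, consider a hypothetical device whose
guest-writable base register moves a subregion around; the struct
and function names are invented, the API calls are real. A silly
guest-programmed value stays contained: the container's owner
validates it, and nothing outside the container is affected.)

    #include "exec/memory.h"

    typedef struct FooState {
        MemoryRegion container;  /* fixed container the board mapped */
        MemoryRegion window;     /* guest-movable subregion          */
    } FooState;

    /* Called when the guest writes a new base address. */
    static void foo_base_write(FooState *s, hwaddr new_base)
    {
        memory_region_transaction_begin();
        memory_region_del_subregion(&s->container, &s->window);
        /* What to do with an out-of-range value is this device's
         * decision; here the window is simply left unmapped.
         */
        if (new_base + memory_region_size(&s->window)
            <= memory_region_size(&s->container)) {
            memory_region_add_subregion(&s->container, new_base,
                                        &s->window);
        }
        memory_region_transaction_commit();
    }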

> >> If the guest can
> >> program overlap then presumably PCI specifies semantics
> >> for what happens then, and there need to be PCI specific
> >> wrappers that enforce those semantics and they can call
> >> the relevant _overlap functions when mapping things.
> >> In any case this isn't a concern for the PCI *device*,
> >> which can just provide its memory regions. It's a problem
> >> the PCI *host adaptor* has to deal with when it's figuring
> >> out how to map those regions into the container which
> >> corresponds to its area of the address space.
> >
> > The issue is that a PCI device overlapping something else suddenly
> > becomes that something else's problem.
> 
> Nope, because the PCI host controller model should be in
> complete control of the container all the PCI devices live
> in, and it is the thing doing the mapping and unmapping
> so it gets to set priorities and mark things as OK to
> overlap. Also, memory.c permits overlap if either of the
> two memory regions in question is marked as may-overlap;
> they don't both have to be marked.

That's undocumented, isn't it?
And then which one wins?
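
(For the record: where regions overlap, memory.c renders the one
with the higher priority; as far as I can tell the result for equal
priorities is unspecified. A two-line illustration, names invented:)

    /* "hi" obscures "lo" where they overlap, because 1 > 0. */
    memory_region_add_subregion_overlap(&container, 0x1000, &lo, 0);
    memory_region_add_subregion_overlap(&container, 0x1800, &hi, 1);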


> >> > We could add a wrapper for MEMORY_PRIO_LOWEST - will that address
> >> > your concern?
> >>
> >> Well, I'm entirely happy with the memory API we have at
> >> the moment, and I'm trying to figure out why you want to
> >> change it...
> >
> > I am guessing your systems all have hardcoded addresses
> > not controlled by the guest.
> 
> Nope. omap_gpmc.c for instance has guest-programmable subregions;
> it uses a container so the guest's manipulation of these can't
> leak out and cause weird things to happen to other bits of QEMU.
> [I think we don't implement the correct guest-facing behaviour
> when the guest asks for overlapping regions, but we shouldn't
> hit the memory.c overlapping-region issue, or if we do it's
> a bug to be fixed.]
> 
> There's also PCI on the versatilepb, but PCI devices can't
> just appear anywhere: the PCI memory windows are at known
> addresses, and a PCI device can't escape from the wrong
> side of the PCI controller.

But there are devices whose addresses can overlap the PCI
window.


> >> >> Maybe we should take the printf() about subregion collisions
> >> >> in memory_region_add_subregion_common() out of the #if 0
> >> >> that it currently sits in?
> >>
> >> > This is just a debugging tool, it won't fix anything.
> >>
> >> It might tell us what bits of code are currently erroneously
> >> mapping regions that overlap without using the _overlap()
> >> function. Then we could fix them.
> 
> > When there is a single guest-programmable device,
> > any address can be overlapped by it.
> 
> Do we really have an example of a guest-programmable
> device where the *device itself* decides where it lives
> in the address space, rather than the guest being able to
> program a host controller/bus fabric/equivalent thing to
> specify where the device should live, or the device
> effectively negotiating with its bus controller? That
> seems very implausible to me just because hardware itself
> generally has some kind of hierarchy of buses and it's not
> really possible for a leaf node to make itself appear
> anywhere in the hierarchy; all it can do is by agreement
> with the thing above it appear at some different address at
> the same level.
> [of course there are trivial systems with a totally flat
> bus but that's just a degenerate case of the above where
> there's only one thing (the board) managing a single
> layer, and typically those systems have everything at
> a fixed address anyhow.]
> 
> -- PMM

The x86 APIC seems to be such a device: the guest programs it,
and it is the first to get a say in where it lives.

-- 
MST


