
Re: [Qemu-devel] [PATCH RFC] memory: drop _overlap variant


From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH RFC] memory: drop _overlap variant
Date: Thu, 14 Feb 2013 16:23:29 +0200

On Thu, Feb 14, 2013 at 4:02 PM, Michael S. Tsirkin <address@hidden> wrote:
> On Thu, Feb 14, 2013 at 01:22:18PM +0000, Peter Maydell wrote:
>> On 14 February 2013 13:09, Michael S. Tsirkin <address@hidden> wrote:
>> > On Thu, Feb 14, 2013 at 12:56:02PM +0000, Peter Maydell wrote:
>> >> Up to the parent which controls the region being mapped into.
>> >
>> > We could just assume same priority as parent
>>
>> Er, no. I mean the code in control of the parent MR sets the
>> priority, when it calls memory_region_add_subregion_overlap().
>>
>> > but what happens if it
>> > has to be different? There are also aliases so a region
>> > can have multiple parents.
>>
>> The alias has its own priority.
>
> Well that's the status quo. One of the issues is, you have
> no idea what else uses each priority. With this change,
> at least you can grep for it.

The question "what priorities do aliases of this region have" is not
an interesting question.  Priority is a local attribute, not an
attribute of the region being prioritized.

>
>> > Presumably it will have to have
>> > different priorities depending on what the parent does?
>> > Here's a list of instances using priority != 0.
>> >
>> > hw/armv7m_nvic.c:                                MEMORY_PRIO_LOW);
>>
>> So this one I know about, and it's a good example of what
>> I'm talking about. This function sets up a container memory
>> region ("nvic"), and it is in total control of what is
>> mapped into that container. Specifically, it puts in a
>> "nvic_sysregs" background region which covers the whole
>> 0x1000 size of the container (at an implicit priority of
>> zero). It then layers over that an alias of the GIC
>> registers ("nvic-gic") at a specific address and with
>> a priority of 1 so it appears above the background region.
>> Nobody else ever puts anything in this container, so
>> the only thing we care about is that the priority of
>> the nvic-gic region is higher than that of the nvic_sysregs
>> region; and it's clear from the code that we do that.
>> Priority is a local question whose meaning is only relevant
>> within a particular container region, not system-wide, and
>> having system-wide MEMORY_PRIO_ defines obscures that IMHO.
>
> Well that's not how it seems to work, and I don't see how it *could*
> work. Imagine a specific example: ioapic and PCI devices. The ioapic
> has an address within the PCI hole, but it is not a subregion.
> If priority has no meaning system-wide, how would you decide which
> one to use?

Like PMM said.  You look at the semantics of the hardware, and map
that onto the API.  If the pci controller says that BARs hide the
ioapic, then you give them higher priority.  If it says that the
ioapic hides BARs, then that gets higher priority.  If it doesn't say
anything, take your pick (or give them the same priority).

>
> Also, on a PC many addresses are guest programmable. We need to behave
> in some defined way if guest programs addresses to something silly.

That's why _overlap exists.

> The only reason it works sometimes is that some systems
> use fixed addresses which never overlap.

That's why the no overlap API exists.

>
>>
>> >> I definitely don't like making the priority argument mandatory:
>> >> this is just introducing pointless boilerplate for the common
>> >> case where nothing overlaps and you know nothing overlaps.
>> >
>> > Non-overlapping is not a common case at all.  E.g. with normal PCI
>> > devices you have no way to know nothing overlaps - addresses are guest
>> > programmable.
>>
>> That means PCI is a special case :-)
>> If the guest can
>> program overlap then presumably PCI specifies semantics
>> for what happens then, and there need to be PCI specific
>> wrappers that enforce those semantics and they can call
>> the relevant _overlap functions when mapping things.
>> In any case this isn't a concern for the PCI *device*,
>> which can just provide its memory regions. It's a problem
>> the PCI *host adaptor* has to deal with when it's figuring
>> out how to map those regions into the container which
>> corresponds to its area of the address space.
>
> The issue is that a PCI device overlapping something else suddenly
> becomes that something else's problem.

It is not a problem at all.

>
>> >> Maybe we should take the printf() about subregion collisions
>> >> in memory_region_add_subregion_common() out of the #if 0
>> >> that it currently sits in?
>>
>> > This is just a debugging tool, it won't fix anything.
>>
>> It might tell us what bits of code are currently erroneously
>> mapping regions that overlap without using the _overlap()
>> function. Then we could fix them.
>>
>> -- PMM
>
> When there is a single guest programmable device,
> any address can be overlapped by it.

No.  Only addresses within the same container.  Other containers work
fine without overlap.

> We could invent rules like 'non overlappable is higher
> priority' but it seems completely arbitrary, a single
> priority is clearer.

It's just noise for the xx% of cases which don't need it.


