
Re: [Qemu-devel] [RFC] Memory API


From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] Memory API
Date: Sun, 22 May 2011 10:50:22 +0300
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.1.10-1.fc14 Thunderbird/3.1.10

On 05/20/2011 02:25 PM, Gleb Natapov wrote:
>
>  >  A) Removing regions will change significantly. So far this is done by
>  >  setting a region to IO_MEM_UNASSIGNED, keeping truncation. With the new
>  >  API that will be a true removal which will additionally restore hidden
>  >  regions.
>
>  And what problem do you expect may arise from that? Currently, accessing
>  such a region after unassign results in undefined behaviour, so this
>  code is non-working today; you can't make it worse.


If the conversion were perfect, then yes. However, there is a possibility that the conversion will not be perfect.

It's also good to have the code document its intentions. If you see _overlap() you know there is dynamic address decoding going on, or something clever.

>  >  B) Uncontrolled overlapping is a bug that should be caught by the core,
>  >  and a new API is a perfect chance to do this.
>
>  Well, this will indeed introduce a difference in behaviour :) A guest
>  that ran before will abort now. Are you actually aware of any such
>  overlaps in the current code base?

Put a BAR over another BAR, then unmap it.

>  But if priorities are going to stay, why not fail if two regions with the
>  same priority overlap? If that happens, it means that memory creation
>  didn't pass the point where the conflict should have been resolved (by
>  assigning different priorities), which means the overlap is
>  unintentional, no?

It may be intentional, as in the case of PCI, or PAM (though you can do PAM without priorities, by removing all but one of the subregions in the area).

>  I am starting to see how you can represent all these local decisions as
>  priority numbers and then traverse this weighted tree to find which memory
>  region should be accessed (memory registration _has_ to be hierarchical
>  for that to work in a meaningful way).

Priorities don't travel up the tree. They're used to resolve local conflicts *only*.

>  I still don't see why it is better
>  than flattening the tree at the point of conflict.

How do you decide which subregion wins?
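To make the "local conflicts only" point concrete, here is a toy model in plain C. This is not the real memory API; `struct subregion` and `resolve()` are invented for illustration. The idea it sketches: overlapping siblings inside one container are ranked only by their own priorities, and a lookup picks the highest-priority sibling covering the address.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model, NOT the real memory API: siblings inside one container,
 * each with a start, a size, and a priority that is only meaningful
 * relative to the other siblings of the same container. */
struct subregion {
    uint64_t start;
    uint64_t size;
    int priority;
};

/* Pick the subregion that "wins" at addr: the highest-priority sibling
 * covering the address. Returns NULL if the address hits a hole. */
static const struct subregion *
resolve(const struct subregion *subs, size_t n, uint64_t addr)
{
    const struct subregion *best = NULL;

    for (size_t i = 0; i < n; i++) {
        if (addr >= subs[i].start && addr - subs[i].start < subs[i].size &&
            (!best || subs[i].priority > best->priority)) {
            best = &subs[i];
        }
    }
    return best;
}
```

With low RAM at priority 0 and a PAM-style ROM shadow at priority 1 on top of it, `resolve()` returns the shadow inside its range and RAM elsewhere; removing the shadow subregion automatically re-exposes the RAM underneath, which is the "restore hidden regions" behaviour mentioned above.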
>  >  Not necessarily. It depends on how much added value buses like PCI or
>  >  ISA or whatever can offer for managing I/O regions. For some purposes,
>  >  it may as well be fine to just call the memory_* service directly and
>  >  pass the result of some operation to the bus API later on.
>
>  Depends on what memory_* service you are talking about. Just creating an
>  unattached memory region is OK. But if two independent pieces of code
>  want to map two different memory regions into the same phys address, I do
>  not see who will resolve the conflict.

They have to ask the bus to _add_subregion(). Only the bus knows about the priorities (or the bus can ask them to create the subregions).
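A toy sketch of that division of labour, with entirely hypothetical names (`bus_add_subregion`, `pci_map_bar`, `BUS_DEFAULT_PRIORITY` are made up, not the real API): the device only creates a region, and the bus is the sole place that picks an address and a priority for it.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BUS_MAX_SUBREGIONS 8
#define BUS_DEFAULT_PRIORITY 0

/* The device decides only the size/contents of its region... */
struct mem_region {
    uint64_t size;
};

/* ...while the bus alone records where it lands and how overlaps rank. */
struct bus {
    struct mem_region *subs[BUS_MAX_SUBREGIONS];
    uint64_t addrs[BUS_MAX_SUBREGIONS];
    int prios[BUS_MAX_SUBREGIONS];
    size_t nsubs;
};

static int bus_add_subregion(struct bus *bus, struct mem_region *mr,
                             uint64_t addr, int priority)
{
    if (bus->nsubs == BUS_MAX_SUBREGIONS) {
        return -1;
    }
    bus->subs[bus->nsubs] = mr;
    bus->addrs[bus->nsubs] = addr;
    bus->prios[bus->nsubs] = priority;
    bus->nsubs++;
    return 0;
}

/* A PCI-like layer maps every BAR at the same default priority, leaving
 * overlapping BARs to tie-break (or not) as it sees fit. */
static int pci_map_bar(struct bus *bus, struct mem_region *bar, uint64_t addr)
{
    return bus_add_subregion(bus, bar, addr, BUS_DEFAULT_PRIORITY);
}
```

Two independent devices mapping BARs at the same address never see each other; the bus holds both entries and owns the priority decision.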

>
>  >  >  PCI
>  >  >  device will call PCI subsystem. PCI subsystem, instead of assigning
>  >  >  arbitrary priorities to all overlappings,
>  >
>  >  Again: PCI will _not_ assign arbitrary priorities but only
>  >  MEMORY_REGION_DEFAULT_PRIORITY, likely 0.
>
>  That is as arbitrary as it can get. Just assigning
>  MEMORY_REGION_DEFAULT_PRIORITY/2^0xfff will work equally well, so what
>  is not arbitrary about that number?

That's just splitting hairs. Array indexes start from zero, an arbitrary but convenient number.

>  BTW, why wouldn't the PCI layer assign different priorities to overlapping
>  regions to let the core know which one should actually be available? Why
>  leave this decision to the core if it clearly belongs to PCI?

You mean overlapping BARs? If PCI wants BAR 1 to override BAR 2, then it can indicate it with priorities. If it doesn't want to, it can use the same priority for all regions.

>
>  >  That does not specify how the PCI bridge or the chipset will tell that
>  >  overlapping resolution lib _how_ overlapping regions shall be translated
>  >  into a flat representation. And precisely here is where priorities come
>  >  into play. It is the way to tell that lib either "region A shall override
>  >  region B" if A has higher prio or "if region A and B overlap, do
>  >  whatever you want" if both have the same prio.
>  >
>  Yep! And the question is why this shouldn't be done at the level that
>  knows most about the conflict, rather than propagated to the core. I am
>  not arguing that priorities do not exist! Obviously they do. I am
>  questioning the usefulness of priorities being part of the memory core API.


The chipset knows about the priorities. How to communicate them to the core?

- at runtime, with hierarchical dispatch of ->read() and ->write(): slow, and doesn't work at all for RAM.
- using registration order: fragile
- using priorities

We need to get the information out of the chipset and into the core, so the core can make use of it (like flattening the tree to produce kvm slots).
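That flattening step can be sketched as a toy model too (invented names again; the real code would walk the region tree rather than a flat array): sibling priorities decide the winner in each elementary interval between region boundaries, and adjacent intervals with the same winner merge into one flat range, i.e. one kvm-slot-like entry.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy flattener, NOT the real implementation. */
struct mr { uint64_t start, size; int prio; };
struct flat_range { uint64_t start, end; int owner; /* index into mrs */ };

/* Highest-priority sibling covering addr, or -1 for a hole. */
static int winner_at(const struct mr *m, size_t n, uint64_t addr)
{
    int best = -1;

    for (size_t i = 0; i < n; i++) {
        if (addr >= m[i].start && addr - m[i].start < m[i].size &&
            (best < 0 || m[i].prio > m[best].prio)) {
            best = (int)i;
        }
    }
    return best;
}

/* Flatten: sort all region boundaries, decide the winner of each
 * elementary interval, and merge runs with the same winner. */
static size_t flatten(const struct mr *m, size_t n, struct flat_range *out)
{
    uint64_t b[32];
    size_t nb = 0, nout = 0;

    for (size_t i = 0; i < n; i++) {
        b[nb++] = m[i].start;
        b[nb++] = m[i].start + m[i].size;
    }
    for (size_t i = 1; i < nb; i++) {          /* insertion sort; nb is tiny */
        uint64_t v = b[i];
        size_t j = i;
        while (j > 0 && b[j - 1] > v) { b[j] = b[j - 1]; j--; }
        b[j] = v;
    }
    for (size_t i = 0; i + 1 < nb; i++) {
        if (b[i] == b[i + 1]) continue;        /* empty interval */
        int w = winner_at(m, n, b[i]);
        if (w < 0) continue;                   /* hole: no slot emitted */
        if (nout && out[nout - 1].owner == w && out[nout - 1].end == b[i]) {
            out[nout - 1].end = b[i + 1];      /* extend previous range */
        } else {
            out[nout].start = b[i];
            out[nout].end = b[i + 1];
            out[nout].owner = w;
            nout++;
        }
    }
    return nout;
}
```

Flattening 1 MB of RAM at priority 0 with a 64 KB ROM shadow at priority 1 over 0xC0000 yields three flat ranges: RAM below the shadow, the shadow itself, and RAM above it.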

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



