Re: [Qemu-devel] [RFC] Memory API


From: Jan Kiszka
Subject: Re: [Qemu-devel] [RFC] Memory API
Date: Wed, 18 May 2011 17:11:47 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2011-05-18 16:36, Avi Kivity wrote:
>> I would add another drawback:
>>
>>   - Inability to identify the origin of region accesses and handle them
>>     differently based on the source.
>>
>>     That is at least problematic for the x86 APIC, which is CPU-local. Our
>>     current way to deal with it is, well, very creative and falls apart
>>     if a guest actually tries to remap the APIC.
>>
>> However, I'm unsure if that can easily be addressed. As long as only x86
>> is affected, it's tricky to ask for a big infrastructure to handle this
>> special case. Maybe there are some other use cases, I don't know.
> 
> We could implement it with a per-cpu MemoryRegion, with each cpu's 
> MemoryRegion populated by a different APIC sub-region.

The tricky part is wiring this up efficiently for TCG, i.e. in QEMU's
softmmu. I played with passing the issuing CPUState (or NULL for
devices) down the MMIO handler chain. Not totally beautiful, as
decentralized dispatching was still required, but at least only
moderately invasive. Maybe your API allows for cleaning up the
management and dispatching part; I need to rethink...
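
To make that concrete, in terms of your proposal I picture the per-CPU
variant roughly like this (the per-CPU registration hook and most names
are invented here, just to illustrate the shape):

  MemoryRegion cpu_view[MAX_CPUS];   /* one container per CPU */
  MemoryRegion apic_mmio[MAX_CPUS];  /* CPU-local APIC window */

  static void create_cpu_view(int cpu)
  {
      memory_region_init(&cpu_view[cpu], (target_phys_addr_t)-1);
      memory_region_init_io(&apic_mmio[cpu], &apic_ops, 0x1000);
      memory_region_add_subregion(&cpu_view[cpu], APIC_DEFAULT_ADDRESS,
                                  &apic_mmio[cpu]);
      /* plus, somehow, the shared system map underneath, and a hook
       * that tells the softmmu "CPU n resolves addresses via
       * cpu_view[n]" - neither exists in the proposal yet */
  }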

> 
>>
>>>
>>>  To fix that, I propose a new API to replace the existing one:
>>>
>>>
>>>  #ifndef MEMORY_H
>>>  #define MEMORY_H
>>>
>>>  typedef struct MemoryRegionOps MemoryRegionOps;
>>>  typedef struct MemoryRegion MemoryRegion;
>>>
>>>  typedef uint32_t (*MemoryReadFunc)(MemoryRegion *mr,
>>>                                     target_phys_addr_t addr);
>>>  typedef void (*MemoryWriteFunc)(MemoryRegion *mr, target_phys_addr_t addr,
>>>                                  uint32_t data);
>>>
>>>  struct MemoryRegionOps {
>>>      MemoryReadFunc readb, readw, readl;
>>>      MemoryWriteFunc writeb, writew, writel;
>>>  };
>>>
>>>  struct MemoryRegion {
>>>      const MemoryRegionOps *ops;
>>>      target_phys_addr_t size;
>>>      target_phys_addr_t addr;
>>>  };
>>>
>>>  void memory_region_init(MemoryRegion *mr,
>>>                          target_phys_addr_t size);
>>
>> What use case would this abstract region cover?
> 
> An empty container, fill it with memory_region_add_subregion().

Yeah, of course.

> 
>>
>>>  void memory_region_init_io(MemoryRegion *mr,
>>>                             const MemoryRegionOps *ops,
>>>                             target_phys_addr_t size);
>>>  void memory_region_init_ram(MemoryRegion *mr,
>>>                              target_phys_addr_t size);
>>>  void memory_region_init_ram_ptr(MemoryRegion *mr,
>>>                                  target_phys_addr_t size,
>>>                                  void *ptr);
>>>  void memory_region_destroy(MemoryRegion *mr);
>>>  void memory_region_set_offset(MemoryRegion *mr, target_phys_addr_t offset);
>>>  void memory_region_set_log(MemoryRegion *mr, bool log);
>>>  void memory_region_clear_coalescing(MemoryRegion *mr);
>>>  void memory_region_add_coalescing(MemoryRegion *mr,
>>>                                    target_phys_addr_t offset,
>>>                                    target_phys_addr_t size);
>>>
>>>  void memory_region_add_subregion(MemoryRegion *mr,
>>>                                   target_phys_addr_t offset,
>>>                                   MemoryRegion *subregion);
>>>  void memory_region_del_subregion(MemoryRegion *mr,
>>>                                   target_phys_addr_t offset,
>>>                                   MemoryRegion *subregion);
>>>
>>>  void cpu_register_memory_region(MemoryRegion *mr, target_phys_addr_t addr);
>>
>> This could create overlaps. I would suggest rejecting them, so we need a
>> return code.
> 
> There is nothing we can do with a return code.  You can't fail an MMIO 
> access that causes an overlapping physical memory map.

We must fail such requests to make progress with the API. That may
happen either on the caller side or in cpu_register_memory_region itself
(hw_error). Otherwise the new API will just be a shiny new facade on an
old and still fragile building.
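
Concretely, something like this (the overlap lookup is invented here,
only the error handling matters):

  int cpu_register_memory_region(MemoryRegion *mr, target_phys_addr_t addr)
  {
      if (memory_map_overlaps(addr, mr->size)) {  /* hypothetical check */
          /* variant 1: let the caller deal with it */
          return -EBUSY;
          /* variant 2: treat it as a modelling bug right here:
           * hw_error("overlapping region registration at " TARGET_FMT_plx,
           *          addr);
           */
      }
      mr->addr = addr;
      /* actual registration as proposed */
      return 0;
  }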

> 
> 
>>
>>>  void cpu_unregister_memory_region(MemoryRegion *mr);
> 
> Instead, we need cpu_unregister_memory_region() to restore any 
> previously hidden ranges.

I disagree. Both approaches, rejecting overlaps or restoring them, imply
subtle semantic changes that existing device models have to deal with.
We can't use either of them without some review and conversion work, so
we'd better head for the clearer and, thus, cleaner approach.
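
For a converted device model, the caller side could then be as simple as
this (s->mmio and new_base are made up):

  if (cpu_register_memory_region(&s->mmio, new_base) < 0) {
      /* the guest programmed an overlapping mapping; the model decides
       * whether to ignore the update, flag an error, or call hw_error()
       * if this can only be a modelling bug */
  }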

> 
>>>
>>>  #endif
>>>
>>>  The API is nested: you can define, say, a PCI BAR containing RAM and
>>>  MMIO, and give it to the PCI subsystem.  PCI can then enable/disable the
>>>  BAR and move it to different addresses without calling any callbacks;
>>>  the client code can enable or disable logging or coalescing without
>>>  caring if the BAR is mapped or not.  For example:
>>
>> Interesting feature.
>>
>>>
>>>    MemoryRegion mr, mr_mmio, mr_ram;
>>>
>>>    memory_region_init(&mr, 0x101000);
>>>    memory_region_init_io(&mr_mmio, &mmio_ops, 0x1000);
>>>    memory_region_init_ram(&mr_ram, 0x100000);
>>>    memory_region_add_subregion(&mr, 0, &mr_ram);
>>>    memory_region_add_subregion(&mr, 0x100000, &mr_mmio);
>>>    memory_region_add_coalescing(&mr_ram, 0, 0x100000);
>>>    pci_register_bar(&pci_dev, 0, &mr);
>>>
>>>  at this point the PCI subsystem knows everything about the BAR and can
>>>  enable or disable it, or move it around, without further help from the
>>>  device code.  On the other hand, the device code can change logging or
>>>  coalescing, or even change the structure of the region, without caring
>>>  about whether the region is currently registered or not.
>>>
>>>  If we can agree on the API, then I think the way forward is to implement
>>>  it in terms of the old API, change over all devices, then fold the old
>>>  API into the new one.
>>
>> There are more aspects that should be clarified before moving forward:
>>   - How to maintain memory regions internally?
> 
> Not sure what you mean by the question, but my plan was to have the 
> client responsible for allocating the objects (and later use 
> container_of() in the callbacks - note there are no void *s any longer).
> 
>>   - Get rid of wasteful PhysPageDesc at this chance?
> 
> That's the intent, but not at this chance, rather later on.

The features you expose to the users somehow have to be mapped onto data
structures internally. Those need to support both
registration/deregistration and lookup efficiently. By postponing that
internal design until after we have switched to the facade, we risk
having exposed a suboptimal interface and having done the conversion in
vain.
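
(Side note on the container_of() scheme you mention: on the device side
I picture it like this, with MyDevice of course made up:)

  typedef struct MyDevice {
      uint32_t status;
      MemoryRegion mmio;   /* embedded, no separately passed opaque */
  } MyDevice;

  static uint32_t mydev_readl(MemoryRegion *mr, target_phys_addr_t addr)
  {
      MyDevice *d = container_of(mr, MyDevice, mmio);

      return d->status;
  }

  static const MemoryRegionOps mydev_ops = {
      .readl = mydev_readl,
      /* writeb/writew/writel as needed */
  };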

>  But I want 
> the API to be compatible with the goal so we don't have to touch all 
> devices again.

We can't make any real change in this area without touching all users,
some a bit more, some only minimally.

> 
>>   - How to hook into the region maintenance (CPUPhysMemoryClient,
>>     listening vs. filtering or modifying changes)? How to simplify
>>     memory clients this way?
> 
> I'd leave things as is, at least for the beginning.  CPUPhysMemoryClient 
> is global in nature, whereas MemoryRegion is local (offsets are relative 
> to the containing region).

See [1]: We really need to get rid of slot management on the
CPUPhysMemoryClient side. Your API provides a perfect opportunity to
establish the infrastructure for slot tracking in a central place. We
can then switch from reporting cpu_register_physical_memory events to
reporting coalesced changes to slots - the same slots that the core
itself uses. So a new CPUPhysMemoryClient API needs to be considered as
part of this API change as well - or we end up changing things twice.
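
To sketch the direction (all names and hooks invented, just to show the
granularity I have in mind):

  typedef struct MemorySlot {
      target_phys_addr_t start;
      target_phys_addr_t size;
      ram_addr_t ram_offset;   /* or a marker for pure MMIO slots */
      unsigned flags;          /* dirty logging, coalescing, ... */
  } MemorySlot;

  typedef struct CPUPhysMemoryClient CPUPhysMemoryClient;
  struct CPUPhysMemoryClient {
      void (*slot_added)(CPUPhysMemoryClient *client,
                         const MemorySlot *slot);
      void (*slot_removed)(CPUPhysMemoryClient *client,
                           const MemorySlot *slot);
      QLIST_ENTRY(CPUPhysMemoryClient) list;
  };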

Jan

[1] http://thread.gmane.org/gmane.comp.emulators.qemu/102893

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


