


From: Andreas Färber
Subject: Re: [Qemu-devel] [PATCH v1 22/22] petalogix-ml605: Make the LMB visible only to the CPU
Date: Mon, 16 Dec 2013 15:03:06 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0

On 16.12.2013 14:29, Peter Maydell wrote:
> On 16 December 2013 12:46, Andreas Färber <address@hidden> wrote:
>> Thanks for this series. I've been on vacation so couldn't review the
>> previous RFC yet... I'm not entirely happy with the way this is pushing
>> work to the machines here and wonder if we can simplify that some more:
>>
>> For one, I don't like the allocation of AddressSpace and MemoryRegion at
>> machine level. Would it be possible to enforce allocating a per-CPU
>> AddressSpace and MemoryRegion at cpu.c level, ideally as embedded value
>> rather than pointer field? Otherwise CPU hot-add is going to get rather
>> complicated and error-prone.
> 
> This seems like a good place to stick my oar in about how I
> think this should work in the long term...
> 
> My view is that AddressSpace and/or MemoryRegion pointers
> (links?) should be how we wire up the addressing on machine
> models, in an analogous manner to the way we wire up IRQs.
> So to take A9MPCore as an example:
> 
>  * each individual ARMCPU has an AddressSpace * property
>  * the 'a9mpcore' device should create those ARMCPU objects,
>    and also the AddressSpaces to pass to them
>  * the AddressSpace for each core is different, because it
>    has the private peripherals for that CPU only (this
>    allows us to get rid of all the shim memory regions which
>    look up the CPU via current_cpu->cpu_index)
>  * each core's AddressSpace has as a 'background region'
>    the single AddressSpace which the board and/or SoC model
>    has passed to the a9mpcore device
>  * if there's a separate SoC device object from the board
>    model, then again the AddressSpace the SoC device passes
>    to a9mpcore is the result of the SoC mapping the various
>    SoC-internal devices into an AddressSpace it got passed
>    by the board
>  * if the SoC has a DMA engine of some kind then the DMA
>    engine should also be passed an appropriate AddressSpace
>    [and we thus automatically correctly model the way the
>    hardware DMA engine can't see the per-CPU peripherals]
> 
> You'll notice that this essentially gets rid of the "system
> memory" idea...

If you leave aside the per-CPU aspect, you could start preparing such
code today; no one has done so yet. ;)
Seriously: SysBus devices map into the system memory region by default,
but you can obtain the to-be-mapped region via the SysBus API, map it
manually into some private container, e.g. a per-SoC MemoryRegion, and
map that container into the system MR for the time being.

> I don't have a strong opinion about the exact details of who
> is allocating what at what level, but I do think we need to
> move towards an idea of handing the consumer of an address
> space be passed an appropriate AS/MR which is constructed
> by the same thing that creates that consumer.

While I concur with the possibility of a "cascaded" setup of container
MemoryRegions, let's not forget that apart from SoC/MPCore parent
realization, one important creator of devices is device_add (i.e., the
user). So we need a mechanism generic enough not to require per-board
and per-bus implementations, in order to keep today's generic
functionality working; hence my request for some more self-containment
while remaining accessible for advanced tweaking. For a PCIDevice we
can have the PHB take care of mapping the BARs, whereas CPUState seems
to grow a root memory space. So if not the CPU, who becomes the owner
of that root memory with respect to dynamic creation and destruction?
The ICC bus seems little suited to me, and maintaining 15(?)
implementations doesn't sound thrilling either. HTE.

> I'm also not entirely clear on which points in this API
> should be dealing with MemoryRegions and which with
> AddressSpaces. Perhaps the CPU object should create its
> AddressSpace internally and the thing it's passed as a
> property should be a MemoryRegion * ?

And yes, I'm not aware of the exact differences between AddressSpace and
MemoryRegion, so take my terminology with a grain of salt. :)

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
