Re: [Qemu-devel] [RFC] QOMification of AXI streams

From: Andreas Färber
Subject: Re: [Qemu-devel] [RFC] QOMification of AXI streams
Date: Tue, 12 Jun 2012 03:04:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120421 Thunderbird/12.0

On 12.06.2012 00:00, Benjamin Herrenschmidt wrote:
>>>     system_memory
>>>        alias ->  pci
>>>        alias ->  ram
>>>     pci
>>>        bar1
>>>        bar2
>>>     pcibm
>>>        alias ->  pci  (prio 1)
>>>        alias ->  system_memory (prio 0)
>>> cpu_physical_memory_rw() would be implemented as
>>> memory_region_rw(system_memory, ...) while pci_dma_rw() would be
>>> implemented as memory_region_rw(pcibm, ...).  This would allow
>>> different address transformations for the two accesses.
>> Yeah, this is what I'm basically thinking although I don't quite
>> understand what 'pcibm' stands for.
>> My biggest worry is that we'll end up with parallel memory API
>> implementations split between memory.c and dma.c.
> So it makes some amount of sense to use the same structure. For example,
> if a device issues accesses, those could be caught by a sibling device
> memory region... or go upstream.
> Let's just look at downstream transformation for a minute...
> We do need to be a bit careful about transformation here: I need to
> double check but I don't think we do transformation downstream today in
> a clean way, and we'd have to do that. I.e. on pseries, for example, the
> PCI host bridge has a window in the CPU address space of [A...A+S], but
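(Aside: the priority scheme Avi sketches above can be modelled in miniature. This is a hypothetical toy, not QEMU's actual memory API: an address resolves to the highest-priority overlapping subregion, so a root like pcibm would hit the PCI aliases at prio 1 before falling through to system_memory at prio 0.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy model of overlapping subregions with priorities.
 * Names and layout are made up for illustration only. */
typedef struct Region {
    const char *name;
    uint64_t base, size;   /* window in the parent's address space */
    int priority;          /* higher priority wins on overlap */
} Region;

/* Resolve addr against the subregions: the highest-priority hit wins,
 * which is how an alias at prio 1 can shadow one at prio 0. */
static const Region *resolve(const Region *subs, size_t n, uint64_t addr)
{
    const Region *best = NULL;
    for (size_t i = 0; i < n; i++) {
        const Region *r = &subs[i];
        if (addr >= r->base && addr < r->base + r->size &&
            (!best || r->priority > best->priority)) {
            best = r;
        }
    }
    return best;
}
```

With a "pci" alias at prio 1 inside a larger "system_memory" alias at prio 0, an address inside the PCI window resolves to pci, anything else falls through to system_memory.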

That's not quite the way we're modelling it yet, as shown by Avi above:
there is no CPU address space, only a "system" address space.

The way we're modelling it today is shoving everything into a global
machine-level address space, which many devices access directly via
get_system_memory() and get_system_io(), because there's no easy way to
pass it to them other than exposing their struct and setting a field
before qdev_init_nofail().

Can't each CPUState get a MemoryRegion for its CPU address space, which
then can have subregions/aliases for the one system_memory with its
subregions for PCI host bridge etc.? Then there's no need any more to
have a cpu_physical_memory_rw(), is there?
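To illustrate what per-initiator roots would buy us: each initiator (a CPU, or a PCI bus master) gets its own root whose aliases can apply a different translation into the shared system memory. A hedged toy sketch, with invented names, not QEMU code:

```c
#include <assert.h>
#include <stdint.h>

/* Toy sketch: an alias is a window in the initiator's view that lands
 * somewhere in system memory.  All names here are hypothetical. */
typedef struct Alias {
    uint64_t base, size;   /* window as seen by this initiator */
    uint64_t target;       /* where it lands in system memory */
} Alias;

typedef struct Root {
    const Alias *aliases;
    int n;
} Root;

/* Translate an initiator-relative address to a system address.  With a
 * root per initiator, the same routine serves both what is today
 * cpu_physical_memory_rw() and pci_dma_rw(), just with different roots. */
static int translate(const Root *root, uint64_t addr, uint64_t *out)
{
    for (int i = 0; i < root->n; i++) {
        const Alias *a = &root->aliases[i];
        if (addr >= a->base && addr < a->base + a->size) {
            *out = a->target + (addr - a->base);
            return 0;
        }
    }
    return -1; /* no mapping for this initiator */
}
```

A CPU root with an identity alias and a bus-master root with a shifted window then translate the same numeric address to different system addresses, which is exactly the "different address transformations for the two accesses" Avi described.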


SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
