From: Alex Bennée
Subject: Re: [Qemu-devel] [PATCH 02/11] docs: Convert memory.txt to rst format
Date: Fri, 01 Feb 2019 16:31:07 +0000
User-agent: mu4e 1.0; emacs 26.1

Peter Maydell <address@hidden> writes:

> Convert the memory API documentation from plain text
> to reStructuredText format.
>
> This is a very minimal conversion: all I had to change
> was to mark up the ASCII art parts as Sphinx expects
> for 'literal blocks', and fix up the bulleted lists
> (Sphinx expects no leading space before the bullet, and
> wants a blank line before and after any list).
>
> Signed-off-by: Peter Maydell <address@hidden>

Reviewed-by: Alex Bennée <address@hidden>

> ---
>  docs/devel/{memory.txt => memory.rst} | 128 ++++++++++++++------------
>  1 file changed, 70 insertions(+), 58 deletions(-)
>  rename docs/devel/{memory.txt => memory.rst} (85%)
>
> diff --git a/docs/devel/memory.txt b/docs/devel/memory.rst
> similarity index 85%
> rename from docs/devel/memory.txt
> rename to docs/devel/memory.rst
> index 42577e1d860..b6a4c37ea5e 100644
> --- a/docs/devel/memory.txt
> +++ b/docs/devel/memory.rst
> @@ -1,19 +1,20 @@
> +==============
>  The memory API
>  ==============
>
>  The memory API models the memory and I/O buses and controllers of a QEMU
>  machine.  It attempts to allow modelling of:
>
> - - ordinary RAM
> - - memory-mapped I/O (MMIO)
> - - memory controllers that can dynamically reroute physical memory regions
> -   to different destinations
> +- ordinary RAM
> +- memory-mapped I/O (MMIO)
> +- memory controllers that can dynamically reroute physical memory regions
> +  to different destinations
>
>  The memory model provides support for
>
> - - tracking RAM changes by the guest
> - - setting up coalesced memory for kvm
> - - setting up ioeventfd regions for kvm
> +- tracking RAM changes by the guest
> +- setting up coalesced memory for kvm
> +- setting up ioeventfd regions for kvm
>
>  Memory is modelled as an acyclic graph of MemoryRegion objects.  Sinks
>  (leaves) are RAM and MMIO regions, while other nodes represent
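
As an aside for anyone reading along: this graph gets built up with the
region-init and add-subregion calls. A minimal, untested sketch of the
usual board-code pattern (the name "my-board.ram" and the base address
are invented for illustration):

  /* Assumes "exec/address-spaces.h", "qemu/units.h", "qapi/error.h". */
  MemoryRegion *sysmem = get_system_memory();
  MemoryRegion *ram = g_new(MemoryRegion, 1);

  /* One RAM leaf mapped into the system memory container. */
  memory_region_init_ram(ram, NULL, "my-board.ram", 64 * MiB, &error_fatal);
  memory_region_add_subregion(sysmem, 0x40000000, ram);
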
> @@ -98,25 +99,30 @@ ROM device memory region types), this host memory needs to be
>  copied to the destination on migration. These APIs which allocate
>  the host memory for you will also register the memory so it is
>  migrated:
> - - memory_region_init_ram()
> - - memory_region_init_rom()
> - - memory_region_init_rom_device()
> +
> +- memory_region_init_ram()
> +- memory_region_init_rom()
> +- memory_region_init_rom_device()
>
>  For most devices and boards this is the correct thing. If you
>  have a special case where you need to manage the migration of
>  the backing memory yourself, you can call the functions:
> - - memory_region_init_ram_nomigrate()
> - - memory_region_init_rom_nomigrate()
> - - memory_region_init_rom_device_nomigrate()
> +
> +- memory_region_init_ram_nomigrate()
> +- memory_region_init_rom_nomigrate()
> +- memory_region_init_rom_device_nomigrate()
> +
>  which only initialize the MemoryRegion and leave handling
>  migration to the caller.
>
>  The functions:
> - - memory_region_init_resizeable_ram()
> - - memory_region_init_ram_from_file()
> - - memory_region_init_ram_from_fd()
> - - memory_region_init_ram_ptr()
> - - memory_region_init_ram_device_ptr()
> +
> +- memory_region_init_resizeable_ram()
> +- memory_region_init_ram_from_file()
> +- memory_region_init_ram_from_fd()
> +- memory_region_init_ram_ptr()
> +- memory_region_init_ram_device_ptr()
> +
>  are for special cases only, and so they do not automatically
>  register the backing memory for migration; the caller must
>  manage migration if necessary.
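
The migrate/nomigrate split is worth a crib note; the two patterns come
out roughly as below (untested sketch; "mydev.ram" and the device state
"s" are placeholders):

  /* Common case: the init function registers the RAM for migration. */
  memory_region_init_ram(&s->ram, OBJECT(s), "mydev.ram", 0x1000,
                         &error_fatal);

  /* Special case: initialise only, then manage migration yourself. */
  memory_region_init_ram_nomigrate(&s->ram, OBJECT(s), "mydev.ram", 0x1000,
                                   &error_fatal);
  vmstate_register_ram(&s->ram, DEVICE(s));
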
> @@ -218,7 +224,7 @@ For example, suppose we have a container A of size 0x8000 with two subregions
>  B and C. B is a container mapped at 0x2000, size 0x4000, priority 2; C is
>  an MMIO region mapped at 0x0, size 0x6000, priority 1. B currently has two
>  of its own subregions: D of size 0x1000 at offset 0 and E of size 0x1000 at
> -offset 0x2000. As a diagram:
> +offset 0x2000. As a diagram::
>
>          0      1000   2000   3000   4000   5000   6000   7000   8000
>          |------|------|------|------|------|------|------|------|
> @@ -228,8 +234,9 @@ offset 0x2000. As a diagram:
>    D:                  [DDDDD]
>    E:                                [EEEEE]
>
> -The regions that will be seen within this address range then are:
> -        [CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]
> +The regions that will be seen within this address range then are::
> +
> +  [CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]
>
>  Since B has higher priority than C, its subregions appear in the flat map
>  even where they overlap with C. In ranges where B has not mapped anything
> @@ -237,8 +244,9 @@ C's region appears.
>
>  If B had provided its own MMIO operations (ie it was not a pure container)
>  then these would be used for any addresses in its range not handled by
> -D or E, and the result would be:
> -        [CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]
> +D or E, and the result would be::
> +
> +  [CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]
>
>  Priority values are local to a container, because the priorities of two
>  regions are only compared when they are both children of the same container.
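
Nice worked example. For reference, the calls that would produce that
A/B/C/D/E layout are roughly the following (untested; "owner", "s" and
the *_ops tables are placeholders, and A..E would be MemoryRegion
fields somewhere):

  memory_region_init(&A, owner, "A", 0x8000);              /* container */
  memory_region_init(&B, owner, "B", 0x4000);              /* container */
  memory_region_init_io(&C, owner, &c_ops, s, "C", 0x6000);
  memory_region_add_subregion_overlap(&A, 0x2000, &B, 2);  /* prio 2 */
  memory_region_add_subregion_overlap(&A, 0x0, &C, 1);     /* prio 1 */

  memory_region_init_io(&D, owner, &d_ops, s, "D", 0x1000);
  memory_region_init_io(&E, owner, &e_ops, s, "E", 0x1000);
  memory_region_add_subregion(&B, 0x0, &D);
  memory_region_add_subregion(&B, 0x2000, &E);
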
> @@ -257,6 +265,7 @@ guest accesses an address:
>
>  - all direct subregions of the root region are matched against the address, in
>    descending priority order
> +
>    - if the address lies outside the region offset/size, the subregion is
>      discarded
>    - if the subregion is a leaf (RAM or MMIO), the search terminates, returning
> @@ -270,36 +279,39 @@ guest accesses an address:
>      address range), then if this is a container with its own MMIO or RAM
>      backing the search terminates, returning the container itself. Otherwise
>      we continue with the next subregion in priority order
> +
>  - if none of the subregions match the address then the search terminates
>    with no match found
>
>  Example memory map
>  ------------------
>
> -system_memory: address@hidden
> - |
> - +---- lomem: address@hidden ---> #ram (0-0xdfffffff)
> - |
> - +---- himem: address@hidden ---> #ram (0xe0000000-0xffffffff)
> - |
> - +---- vga-window: address@hidden ---> #pci (0xa0000-0xbffff)
> - |      (prio 1)
> - |
> - +---- pci-hole: address@hidden ---> #pci (0xe0000000-0xffffffff)
> +::
>
> -pci (0-2^32-1)
> - |
> - +--- vga-area: address@hidden
> - |      |
> - |      +--- address@hidden  ---> #vram (0x010000-0x017fff)
> - |      |
> - |      +--- address@hidden  ---> #vram (0x020000-0x027fff)
> - |
> - +---- vram: address@hidden
> - |
> - +---- vga-mmio: address@hidden
> +  system_memory: address@hidden
> +   |
> +   +---- lomem: address@hidden ---> #ram (0-0xdfffffff)
> +   |
> +   +---- himem: address@hidden ---> #ram (0xe0000000-0xffffffff)
> +   |
> +   +---- vga-window: address@hidden ---> #pci (0xa0000-0xbffff)
> +   |      (prio 1)
> +   |
> +   +---- pci-hole: address@hidden ---> #pci (0xe0000000-0xffffffff)
>
> -ram: address@hidden
> +  pci (0-2^32-1)
> +   |
> +   +--- vga-area: address@hidden
> +   |      |
> +   |      +--- address@hidden  ---> #vram (0x010000-0x017fff)
> +   |      |
> +   |      +--- address@hidden  ---> #vram (0x020000-0x027fff)
> +   |
> +   +---- vram: address@hidden
> +   |
> +   +---- vga-mmio: address@hidden
> +
> +  ram: address@hidden
>
>  This is a (simplified) PC memory map. The 4GB RAM block is mapped into the
>  system address space via two aliases: "lomem" is a 1:1 mapping of the first
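
The aliasing is the part newcomers tend to trip over, so for the record
it corresponds to something like this (untested sketch; the variable
names and the 4GB remap address for "himem" are my own reading of the
example, not taken from the patch):

  /* "lomem": 1:1 alias of the first 3.5GB of the RAM block. */
  memory_region_init_alias(&lomem, NULL, "lomem", ram, 0, 0xe0000000);
  memory_region_add_subregion(sysmem, 0, &lomem);

  /* "himem": the last 0.5GB of the same RAM block, remapped high. */
  memory_region_init_alias(&himem, NULL, "himem", ram,
                           0xe0000000, 0x20000000);
  memory_region_add_subregion(sysmem, 0x100000000ULL, &himem);
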
> @@ -336,16 +348,16 @@ rather than completing successfully; those devices can use the
>  In addition various constraints can be supplied to control how these
>  callbacks are called:
>
> - - .valid.min_access_size, .valid.max_access_size define the access sizes
> -   (in bytes) which the device accepts; accesses outside this range will
> -   have device and bus specific behaviour (ignored, or machine check)
> - - .valid.unaligned specifies that the *device being modelled* supports
> -    unaligned accesses; if false, unaligned accesses will invoke the
> -    appropriate bus or CPU specific behaviour.
> - - .impl.min_access_size, .impl.max_access_size define the access sizes
> -   (in bytes) supported by the *implementation*; other access sizes will be
> -   emulated using the ones available.  For example a 4-byte write will be
> -   emulated using four 1-byte writes, if .impl.max_access_size = 1.
> - - .impl.unaligned specifies that the *implementation* supports unaligned
> -   accesses; if false, unaligned accesses will be emulated by two aligned
> -   accesses.
> +- .valid.min_access_size, .valid.max_access_size define the access sizes
> +  (in bytes) which the device accepts; accesses outside this range will
> +  have device and bus specific behaviour (ignored, or machine check)
> +- .valid.unaligned specifies that the *device being modelled* supports
> +  unaligned accesses; if false, unaligned accesses will invoke the
> +  appropriate bus or CPU specific behaviour.
> +- .impl.min_access_size, .impl.max_access_size define the access sizes
> +  (in bytes) supported by the *implementation*; other access sizes will be
> +  emulated using the ones available.  For example a 4-byte write will be
> +  emulated using four 1-byte writes, if .impl.max_access_size = 1.
> +- .impl.unaligned specifies that the *implementation* supports unaligned
> +  accesses; if false, unaligned accesses will be emulated by two aligned
> +  accesses.
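
These .valid/.impl constraints are easy to get backwards, so here is a
concrete (untested) sketch of how they usually sit together, with
mydev_read/mydev_write standing in for real callbacks:

  static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size);
  static void mydev_write(void *opaque, hwaddr addr, uint64_t val,
                          unsigned size);

  static const MemoryRegionOps mydev_ops = {
      .read = mydev_read,
      .write = mydev_write,
      .endianness = DEVICE_NATIVE_ENDIAN,
      .valid = {
          /* The modelled device accepts aligned 1-4 byte accesses... */
          .min_access_size = 1,
          .max_access_size = 4,
          .unaligned = false,
      },
      .impl = {
          /* ...but the callbacks only handle 4-byte ones, so the core
           * emulates the smaller sizes on top of them. */
          .min_access_size = 4,
          .max_access_size = 4,
      },
  };

  memory_region_init_io(&s->iomem, OBJECT(s), &mydev_ops, s,
                        "mydev-mmio", 0x1000);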


--
Alex Bennée


