
Re: [Qemu-devel] [PULL v1 0/7] MMIO Exec pull request


From: KONRAD Frederic
Subject: Re: [Qemu-devel] [PULL v1 0/7] MMIO Exec pull request
Date: Wed, 19 Jul 2017 18:22:39 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1



On 07/19/2017 02:29 PM, Dr. David Alan Gilbert wrote:
* KONRAD Frederic (address@hidden) wrote:


On 07/17/2017 07:27 PM, Edgar E. Iglesias wrote:
On Mon, Jul 17, 2017 at 11:33 PM, Peter Maydell <address@hidden>
wrote:

On 14 June 2017 at 18:45, Edgar E. Iglesias <address@hidden>
wrote:
From: "Edgar E. Iglesias" <address@hidden>
Paolo suggested offline that we send a pull request for this series.
Here it is; I've run it through my testsuite and tested the LQSPI testcase
on Zynq.

----------------------------------------------------------------
mmio-exec.for-upstream

----------------------------------------------------------------
KONRAD Frederic (7):
        cputlb: cleanup get_page_addr_code to use VICTIM_TLB_HIT
        cputlb: move get_page_addr_code
        cputlb: fix the way get_page_addr_code fills the tlb
        qdev: add MemoryRegion property
        introduce mmio_interface
        exec: allow to get a pointer for some mmio memory region
        xilinx_spips: allow mmio execution

Hi Edgar -- can you or Fred explain how this code interacts with
VM migration? The mmio-interface device creates a RAM memory
region with memory_region_init_ram_ptr(), but it doesn't call
vmstate_register_ram(). On the other hand the core migration code
will try to migrate the contents of the RAMBlock anyway, just
without a name.

It's not clear to me how this works, and it would be nice to
clear it up so that we can make any necessary fixes before the
2.10 release hits and we lose the opportunity to make
migration-compatibility-breaking changes.

thanks
-- PMM
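
For context, the pattern Peter is describing looks roughly like this
(a minimal sketch; "s", "size" and "host_ptr" are placeholders, not the
actual hw/misc/mmio_interface.c code):

    memory_region_init_ram_ptr(&s->ram_mem, OBJECT(s), "dev.ram",
                               size, host_ptr);
    /* A migratable device would also do this, to give the RAMBlock a
       stable name in the migration stream; mmio-interface does not: */
    vmstate_register_ram(&s->ram_mem, DEVICE(s));
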


Hi Peter,

As far as I understand, these temporary regions should be read-only and
treated as temporary caches.
I would say that they don't need to be migrated: after migration, the new
VM will recreate the RAM areas from the device backing.

Is there a way we can prevent migration of the RAMBlock?

Cheers,
Edgar
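
For reference, one way to prevent migration of a RAMBlock, sketched under
the assumption of a per-block "migratable" flag. Later QEMU versions do
provide qemu_ram_set_migratable()/qemu_ram_unset_migratable() for this; no
such helper existed at the time of this thread, so treat the following as
a sketch, not the actual fix:

    memory_region_init_ram_ptr(&s->ram_mem, OBJECT(s), "mmio_interface",
                               size, s->host_ptr);
    /* Assumed helper: clears the block's migratable flag so the
       migration code skips its contents entirely. */
    qemu_ram_unset_migratable(s->ram_mem.ram_block);
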


Hi All,

Yes, Edgar is right: they don't need to be migrated (just like the old
Xilinx SPI cache), and the same will be true for everything else that
uses this mechanism.

Maybe we can simply drop the region before migration?

Don't forget the VM is still running during migration, so you
can't drop anything that's needed for it to be able to execute.

Dave

Scary... Yes, you're right. I still think it doesn't need to be
migrated... but how?

BTW, taking a look at this I found the comment below, which seems
to imply that the content won't be migrated:

/**
 * memory_region_init_ram_ptr:  Initialize RAM memory region from a
 *                              user-provided pointer.  Accesses into the
 *                              region will modify memory directly.
 *
 * @mr: the #MemoryRegion to be initialized.
 * @owner: the object that tracks the region's reference count
 * @name: Region name, becomes part of RAMBlock name used in migration stream
 *        must be unique within any device
 * @size: size of the region.
 * @ptr: memory to be mapped; must contain at least @size bytes.
 *
 * Note that this function does not do anything to cause the data in the
 * RAM memory region to be migrated; that is the responsibility of the caller.
 */
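
The "responsibility of the caller" typically amounts to one extra call
after the init; a minimal sketch for a region not owned by a device
("mr", "size" and "ptr" are placeholders):

    memory_region_init_ram_ptr(&mr, NULL, "buf.ram", size, ptr);
    /* Opt in: gives the RAMBlock a stable name in the migration
       stream. */
    vmstate_register_ram_global(&mr);
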

And I guess we have the same issue when we try to unplug a device
during migration?

Fred





