On Mon, Jul 17, 2017 at 11:33 PM, Peter Maydell <address@hidden> wrote:
On 14 June 2017 at 18:45, Edgar E. Iglesias <address@hidden> wrote:
From: "Edgar E. Iglesias" <address@hidden>
Paolo suggested offline that we send a pull request for this series.
Here it is. I've run it through my testsuite and tested the LQSPI test case
on Zynq.
----------------------------------------------------------------
mmio-exec.for-upstream
----------------------------------------------------------------
KONRAD Frederic (7):
      cputlb: cleanup get_page_addr_code to use VICTIM_TLB_HIT
      cputlb: move get_page_addr_code
      cputlb: fix the way get_page_addr_code fills the tlb
      qdev: add MemoryRegion property
      introduce mmio_interface
      exec: allow to get a pointer for some mmio memory region
      xilinx_spips: allow mmio execution
Hi Edgar -- can you or Fred explain how this code interacts with
VM migration? The mmio-interface device creates a RAM memory
region with memory_region_init_ram_ptr(), but it doesn't call
vmstate_register_ram(). On the other hand the core migration code
will try to migrate the contents of the RAMBlock anyway, just
without a name.
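[For reference, the pattern being discussed looks roughly like this. This is a sketch based on the QEMU memory API of that era, not the actual mmio-interface realize code; the struct fields and device names are illustrative only.]

```c
/* Sketch of the pattern Peter describes: a RAM MemoryRegion backed by
 * a host pointer, with no vmstate_register_ram() call, so the
 * underlying RAMBlock never gets a migration idstr.
 * memory_region_init_ram_ptr(), memory_region_set_readonly() and
 * memory_region_add_subregion() are real QEMU memory-API calls; the
 * MMIOInterface fields here are assumptions for illustration. */
static void mmio_iface_realize_sketch(DeviceState *dev, Error **errp)
{
    MMIOInterface *s = MMIO_INTERFACE(dev);

    memory_region_init_ram_ptr(&s->ram_mem, OBJECT(s), "mmio-interface",
                               s->end - s->start + 1, s->host_ptr);
    memory_region_set_readonly(&s->ram_mem, s->ro);

    /* Missing step under discussion: vmstate_register_ram(&s->ram_mem, dev)
     * would call qemu_ram_set_idstr() on the RAMBlock, giving it a stable
     * name that migration uses to match source and destination blocks.
     * Without it, the migration code still walks the RAMBlock list and
     * tries to send the block, just without a usable name. */

    memory_region_add_subregion(s->container, s->start, &s->ram_mem);
}
```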
It's not clear to me how this works, and it would be nice to
get it clear so that we can make any necessary fixes before the
2.10 release hits and we lose the opportunity to make any
migration-compatibility-breaking changes.
thanks
-- PMM
Hi Peter,
AFAIU, these temporary regions are read-only and should be treated as
temporary caches.
I would say they don't need to be migrated: after migration, the new
VM will recreate the RAM areas from the device backing.
Is there a way we can prevent migration of the RAMBlock?
Cheers,
Edgar