From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC v3 8/8] memory: add support for deleting HVA mapped MemoryRegion
Date: Wed, 8 Jul 2015 16:43:55 +0200
On Wed, 8 Jul 2015 12:58:55 +0300
"Michael S. Tsirkin" <address@hidden> wrote:
> On Wed, Jul 08, 2015 at 11:46:48AM +0200, Igor Mammedov wrote:
> > Although memory_region_del_subregion() removes MemoryRegion
> > from current address space, it's possible that it's still
> > in use/referenced until old address space view is destroyed.
> > That doesn't allow to unmap it from HVA region at the time
> > of memory_region_del_subregion().
> > As a solution track HVA mapped MemoryRegions in a list and
> > don't allow to map another MemoryRegion at the same address
> > until respective MemoryRegion is destroyed, delaying unmapping
> > from HVA range to the time MemoryRegion destructor is called.
> >
> > In memory hotplug terms it would mean that user should delete
> > corresponding backend along with pc-dimm device:
> > device_del dimm1
> > object_del dimm1_backend_memdev
> > after that dimm1_backend_memdev's MemoryRegion will be destroyed
> > once all accesses to it are gone and old flatview is destroyed as
> > well.
> > As result it's possible that a following "device_add pc-dimm" at
> > the same address may fail due to old mapping is still being present,
>
>
> s/is still/still/
fixed
>
> > hence add error argument to memory_region_add_subregion() API
> > so it could report error and hotplug could be cancelled gracefully.
> >
> > Signed-off-by: Igor Mammedov <address@hidden>
>
> The commit log seems a bit confusing.
> API was added in previous patch, and this one actually
> uses it.
The previous patch added qemu_ram_* helpers in exec.c, while this patch adds the
memory_region_* changes that allow an HVA mapped region to be deleted safely.
Is there any suggestion how to make the commit message less confusing?
>
> > ---
> >  hw/mem/pc-dimm.c      |  6 +++++-
> >  include/exec/memory.h |  2 ++
> >  memory.c              | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 62 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/mem/pc-dimm.c b/hw/mem/pc-dimm.c
> > index 8cc9118..d17c22f 100644
> > --- a/hw/mem/pc-dimm.c
> > +++ b/hw/mem/pc-dimm.c
> > @@ -95,7 +95,11 @@ void pc_dimm_memory_plug(DeviceState *dev, MemoryHotplugState *hpms,
> >          goto out;
> >      }
> >  
> > -    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr,
> > -                                &error_abort);
> > +    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr,
> > +                                &local_err);
> > +    if (local_err) {
> > +        goto out;
> > +    }
> > +
> >      vmstate_register_ram(mr, dev);
> >      numa_set_mem_node_id(addr, memory_region_size(mr), dimm->node);
> > diff --git a/include/exec/memory.h b/include/exec/memory.h
> > index ce0320a..d9c53f9 100644
> > --- a/include/exec/memory.h
> > +++ b/include/exec/memory.h
> > @@ -174,6 +174,7 @@ struct MemoryRegion {
> >      bool romd_mode;
> >      bool ram;
> >      void *rsvd_hva;
> > +    bool hva_mapped;
> >      bool skip_dump;
> >      bool readonly; /* For RAM regions */
> >      bool enabled;
> > @@ -188,6 +189,7 @@ struct MemoryRegion {
> >      QTAILQ_HEAD(subregions, MemoryRegion) subregions;
> >      QTAILQ_ENTRY(MemoryRegion) subregions_link;
> >      QTAILQ_HEAD(coalesced_ranges, CoalescedMemoryRange) coalesced;
> > +    QTAILQ_ENTRY(MemoryRegion) hva_link;
> >      const char *name;
> >      uint8_t dirty_log_mask;
> >      unsigned ioeventfd_nb;
> > diff --git a/memory.c b/memory.c
> > index 360a5b8..e3c1b36 100644
> > --- a/memory.c
> > +++ b/memory.c
> > @@ -34,6 +34,7 @@ static unsigned memory_region_transaction_depth;
> >  static bool memory_region_update_pending;
> >  static bool ioeventfd_update_pending;
> >  static bool global_dirty_log = false;
> > +static QemuMutex hva_lock;
> >  
> >  static QTAILQ_HEAD(memory_listeners, MemoryListener) memory_listeners
> >      = QTAILQ_HEAD_INITIALIZER(memory_listeners);
> > @@ -1761,6 +1762,24 @@ done:
> >      memory_region_transaction_commit();
> >  }
> >  
> > +static QTAILQ_HEAD(, MemoryRegion) hva_mapped_head =
> > +    QTAILQ_HEAD_INITIALIZER(hva_mapped_head);
> > +
> > +static void memory_region_destructor_hva_ram(MemoryRegion *mr)
> > +{
> > +    MemoryRegion *h, *tmp;
> > +
> > +    qemu_mutex_lock(&hva_lock);
> > +    qemu_ram_unmap_hva(mr->ram_addr);
> > +    memory_region_destructor_ram(mr);
> > +    QTAILQ_FOREACH_SAFE(h, &hva_mapped_head, hva_link, tmp) {
> > +        if (mr == h) {
> > +            QTAILQ_REMOVE(&hva_mapped_head, h, hva_link);
> > +        }
> > +    }
> > +    qemu_mutex_unlock(&hva_lock);
> > +}
> > +
> >  static void memory_region_add_subregion_common(MemoryRegion *mr,
> >                                                 hwaddr offset,
> >                                                 MemoryRegion *subregion,
> > @@ -1772,8 +1791,43 @@ static void memory_region_add_subregion_common(MemoryRegion *mr,
> >  
> >      if (subregion->ram) {
> >          if (mr->rsvd_hva) {
> > +            MemoryRegion *h, *tmp;
> > +            Int128 e, oe;
> > +
> > +            assert(!subregion->hva_mapped);
> > +            qemu_mutex_lock(&hva_lock);
> > +
> > +            QTAILQ_FOREACH_SAFE(h, &hva_mapped_head, hva_link, tmp) {
> > +                if (subregion == h) {
> > +                    error_setg(errp, "HVA mapped memory region '%s' is not "
> > +                               "reusable, use a new one instead",
> > +                               subregion->name);
> > +                    qemu_mutex_unlock(&hva_lock);
> > +                    return;
> > +                }
> > +
> > +                e = int128_add(int128_make64(h->addr),
> > +                               int128_make64(memory_region_size(h)));
> > +                oe = int128_add(int128_make64(offset),
> > +                                int128_make64(memory_region_size(subregion)));
> > +                if (offset >= h->addr && int128_le(oe, e)) {
> > +                    MemoryRegionSection rsvd_hva;
> > +                    rsvd_hva = memory_region_find_hva_range(mr);
> > +                    error_setg(errp, "memory at 0x%" PRIx64 " is still in use "
> > +                               "by HVA mapped region: %s",
> > +                               rsvd_hva.offset_within_address_space + offset,
> > +                               h->name);
> > +                    qemu_mutex_unlock(&hva_lock);
> > +                    return;
> > +                }
> > +            }
> > +
> > +            QTAILQ_INSERT_TAIL(&hva_mapped_head, subregion, hva_link);
> > +            subregion->destructor = memory_region_destructor_hva_ram;
> > +            subregion->hva_mapped = true;
> >              qemu_ram_remap_hva(subregion->ram_addr,
> >                                 memory_region_get_ram_ptr(mr) + subregion->addr);
> > +            qemu_mutex_unlock(&hva_lock);
> >          }
> >      }
> > 
> > @@ -2288,6 +2342,7 @@ static const TypeInfo memory_region_info = {
> >  static void memory_register_types(void)
> >  {
> >      type_register_static(&memory_region_info);
> > +    qemu_mutex_init(&hva_lock);
> >  }
> >
> > type_init(memory_register_types)
> > --
> > 1.8.3.1
>
- [Qemu-devel] [RFC v3 0/8] Fix QEMU crash during memory hotplug with vhost=on, Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 1/8] memory: get rid of memory_region_destructor_ram_from_ptr(), Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 2/8] memory: introduce MemoryRegion container with reserved HVA range, Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 3/8] pc: reserve hotpluggable memory range with memory_region_init_hva_range(), Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 5/8] exec: make sure that RAMBlock descriptor won't be leaked, Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 6/8] exec: add qemu_ram_unmap_hva() API for unmapping memory from HVA area, Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 4/8] pc: fix QEMU crashing when more than ~50 memory hotplugged, Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 8/8] memory: add support for deleting HVA mapped MemoryRegion, Igor Mammedov, 2015/07/08
- [Qemu-devel] [RFC v3 7/8] memory: extend memory_region_add_subregion() to support error reporting, Igor Mammedov, 2015/07/08
- Re: [Qemu-devel] [RFC v3 7/8] memory: extend memory_region_add_subregion() to support error reporting, Paolo Bonzini, 2015/07/08
- Re: [Qemu-devel] [RFC v3 7/8] memory: extend memory_region_add_subregion() to support error reporting, Igor Mammedov, 2015/07/08