From: Daniel Henrique Barboza
Subject: [Qemu-devel] [PATCH v9 4/6] hw/ppc/spapr.c: migrate pending_dimm_unplugs of spapr state
Date: Fri, 5 May 2017 17:47:44 -0300

To allow a DIMM unplug event to resume its work if a migration
occurs in the middle of it, this patch migrates the non-empty
pending_dimm_unplugs QTAILQ that stores the DIMM information
used by the spapr_lmb_release() callback.

An approach was considered where the DIMM states would be restored
in post_load after a migration. The problem is that there is
no way of knowing, from the sPAPRMachineState, whether a given DIMM is
going through an unplug process, and the callback needs the updated DIMM
state.

We could migrate a flag indicating that an unplug event is going
on for a certain DIMM, setting it at the start of the
spapr_del_lmbs call. But this would also require a scan in post_load to
figure out how many LMBs are left to remove. At that point we might as
well migrate the nr_lmbs information too, given that it is already
calculated in spapr_del_lmbs, and spare the scanning/discovery in
post_load. Everything we need is inside the sPAPRDIMMState structure
that is added to the pending_dimm_unplugs queue at the start of
spapr_del_lmbs, so it is convenient to simply migrate this queue when
it is not empty.
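
For reference, a minimal sketch of the sPAPRDIMMState structure being
migrated, assuming only the fields referenced by the new
VMStateDescription below (addr, nr_lmbs and the queue link); the actual
definition in spapr.c may carry more fields:

    typedef struct sPAPRDIMMState {
        uint64_t addr;      /* base address of the DIMM being unplugged */
        uint32_t nr_lmbs;   /* LMBs still pending release for this DIMM */
        QTAILQ_ENTRY(sPAPRDIMMState) next; /* link in pending_dimm_unplugs */
    } sPAPRDIMMState;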

Signed-off-by: Daniel Henrique Barboza <address@hidden>
---
 hw/ppc/spapr.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index e190eb9..30f0b7b 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1437,6 +1437,36 @@ static bool version_before_3(void *opaque, int version_id)
     return version_id < 3;
 }
 
+static bool spapr_pending_dimm_unplugs_needed(void *opaque)
+{
+    sPAPRMachineState *spapr = (sPAPRMachineState *)opaque;
+    return !QTAILQ_EMPTY(&spapr->pending_dimm_unplugs);
+}
+
+static const VMStateDescription vmstate_spapr_dimmstate = {
+    .name = "spapr_dimm_state",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(addr, sPAPRDIMMState),
+        VMSTATE_UINT32(nr_lmbs, sPAPRDIMMState),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static const VMStateDescription vmstate_spapr_pending_dimm_unplugs = {
+    .name = "spapr_pending_dimm_unplugs",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = spapr_pending_dimm_unplugs_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_QTAILQ_V(pending_dimm_unplugs, sPAPRMachineState, 1,
+                         vmstate_spapr_dimmstate, sPAPRDIMMState,
+                         next),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static bool spapr_ov5_cas_needed(void *opaque)
 {
     sPAPRMachineState *spapr = opaque;
@@ -1535,6 +1565,7 @@ static const VMStateDescription vmstate_spapr = {
     .subsections = (const VMStateDescription*[]) {
         &vmstate_spapr_ov5_cas,
         &vmstate_spapr_patb_entry,
+        &vmstate_spapr_pending_dimm_unplugs,
         NULL
     }
 };
-- 
2.9.3



