From: Andrey Gruzdev
Subject: Re: [PATCH v3 3/7] support UFFD write fault processing in ram_save_iterate()
Date: Fri, 20 Nov 2020 13:44:53 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

On 19.11.2020 21:25, Peter Xu wrote:
On Thu, Nov 19, 2020 at 03:59:36PM +0300, Andrey Gruzdev via wrote:

[...]

+/**
+ * ram_find_block_by_host_address: find RAM block containing host page
+ *
+ * Returns true if a RAM block containing the page is found and
+ * pss->block/page point to the given host page, false otherwise
+ *
+ * @rs: current RAM state
+ * @pss: page-search-status structure
+ * @page_address: host address of the page to locate
+ */
+static bool ram_find_block_by_host_address(RAMState *rs, PageSearchStatus *pss,
+        hwaddr page_address)
+{
+    bool found = false;
+
+    pss->block = rs->last_seen_block;
+    do {
+        if (page_address >= (hwaddr) pss->block->host &&
+            (page_address + TARGET_PAGE_SIZE) <=
+                    ((hwaddr) pss->block->host + pss->block->used_length)) {
+            pss->page = (unsigned long)
+                    ((page_address - (hwaddr) pss->block->host) >> TARGET_PAGE_BITS);
+            found = true;
+            break;
+        }
+
+        pss->block = QLIST_NEXT_RCU(pss->block, next);
+        if (!pss->block) {
+            /* Hit the end of the list */
+            pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
+        }
+    } while (pss->block != rs->last_seen_block);
+
+    rs->last_seen_block = pss->block;
+    /*
+     * Since we are in the same loop with ram_find_and_save_block(),
+     * need to reset pss->complete_round after switching to
+     * other block/page in pss.
+     */
+    pss->complete_round = false;
+
+    return found;
+}

I forgot whether Denis and I have discussed this, but I'll try anyways... do
you think we can avoid touching PageSearchStatus at all?

PageSearchStatus is used to track a single migration iteration for precopy, so
that we scan from the 1st ramblock until the last one.  Then we finish one
iteration.


Yes, my first idea was also to separate the normal iteration from the
write-fault page source completely and leave pss for the normal scan. But
the other idea is to keep some locality with respect to the last write
fault: it seems more optimal to restart the normal scan on the page next to
the faulting one. In this case we can save and un-protect the neighborhood
faster and prevent many write faults.
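
A rough sketch of what I mean (the caller name is made up, it only relies on
ram_find_block_by_host_address() from the hunk above):

/*
 * Sketch only: on a UFFD write fault, reposition pss onto the faulting
 * page so that the regular scan continues from its neighborhood and the
 * nearby pages get saved and write-unprotected early.
 */
static bool ram_reposition_to_fault(RAMState *rs, PageSearchStatus *pss,
                                    hwaddr fault_address)
{
    /* Align the fault to the start of its target page. */
    hwaddr page_address = fault_address & TARGET_PAGE_MASK;

    if (!ram_find_block_by_host_address(rs, pss, page_address)) {
        return false;   /* address is outside any RAM block */
    }

    /*
     * pss->block/pss->page now point at the faulting page; the caller
     * saves it, removes write protection, and lets the normal iteration
     * continue from here, which keeps locality with the fault.
     */
    return true;
}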

Snapshot is really something, imho, that can easily leverage this structure
without touching it - basically we want to do two things:

   - Do the 1st iteration of precopy (when ram_bulk_stage==true), and do that
     only.  We never need the 2nd, 3rd, ... iterations because we're
     snapshotting.

   - Leverage the postcopy queue mechanism so that when some page gets written,
     we queue that page.  We should give this queue higher priority than the
     precopy scanning mentioned above.

As long as we follow the above rules, then after the above 1st round of
precopy, we're simply done...  If that works, the whole logic of precopy and
PageSearchStatus does not need to be touched, iiuc.
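
In pseudo-code the ordering would be roughly like this
(snapshot_unqueue_page() and snapshot_scan_next_dirty() are made-up names
just to show the idea, not existing functions):

/*
 * Rough sketch of the queue-first ordering described above; all names
 * except the RAMState/PageSearchStatus fields are hypothetical.
 */
static bool snapshot_find_next(RAMState *rs, PageSearchStatus *pss)
{
    RAMBlock *block;
    ram_addr_t offset;

    /* Faulted (written) pages take priority over the linear scan. */
    if (snapshot_unqueue_page(rs, &block, &offset)) {
        pss->block = block;
        pss->page = offset >> TARGET_PAGE_BITS;
        return true;
    }

    /*
     * Otherwise continue the bulk-stage scan; once the first full round
     * is complete there is nothing more to send for a snapshot, so the
     * 2nd/3rd iterations never happen.
     */
    if (pss->complete_round) {
        return false;
    }
    return snapshot_scan_next_dirty(rs, pss);   /* 1st-round precopy scan */
}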

[...]


It's quite a good alternative, and I did think about using the postcopy page
queue, but that implementation won't consider the locality of writes.

What do you think?

@@ -2086,7 +2191,8 @@ static void ram_state_reset(RAMState *rs)
      rs->last_sent_block = NULL;
      rs->last_page = 0;
      rs->last_version = ram_list.version;
-    rs->ram_bulk_stage = true;
+    rs->ram_wt_enabled = migrate_track_writes_ram();

Maybe we don't need ram_wt_enabled, and can just call migrate_track_writes_ram()
wherever needed (actually, only in get_fault_page, once).

Thanks,


Yes, I think you are right, we can avoid this additional field.
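
Something like this, roughly (the body of get_fault_page() is only an
illustrative skeleton; the real one would poll the userfaultfd):

static hwaddr get_fault_page(RAMState *rs)
{
    /*
     * Sketch only: query the capability at the single place that needs
     * it, so the cached rs->ram_wt_enabled field can be dropped.
     */
    if (!migrate_track_writes_ram()) {
        /* Write tracking is not enabled for this migration. */
        return 0;
    }

    /*
     * The real implementation would poll the UFFD file descriptor and
     * return the write-faulting host address; 0 means no fault pending.
     */
    return 0;
}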

Thanks,

--
Andrey Gruzdev, Principal Engineer
Virtuozzo GmbH  +7-903-247-6397
                virtuzzo.com


