[PATCH] icount: make dma reads deterministic


From: Pavel Dovgalyuk
Subject: [PATCH] icount: make dma reads deterministic
Date: Mon, 02 Mar 2020 15:59:50 +0300
User-agent: StGit/0.17.1-dirty

A Windows guest sometimes makes DMA requests with overlapping
target addresses. This leads to the following iov structure being
passed to the block driver:

addr size1
addr size2
addr size3

This means that three adjacent disk blocks should be read into the same
memory buffer. Windows does not expect anything particular in these bytes
(whether they hold data from the first block, the last one, or some mix),
but it does use them somehow. This leads to non-deterministic guest
execution, because the block driver does not guarantee any ordering of
the reads.

This situation was discussed on the mailing list at least twice:
https://lists.gnu.org/archive/html/qemu-devel/2010-09/msg01996.html
https://lists.gnu.org/archive/html/qemu-devel/2020-02/msg05185.html

This patch makes such disk reads deterministic in icount mode.
It skips SG parts that were already covered by earlier reads
within the same request. Parts that are not identical but merely
overlap are trimmed. A standalone sketch of this rule follows the
examples below.

Examples for different SG part sequences:

1)
A1 1000
A1 1000
->
A1 1000

2)
A1 1000
A2 1000
A1 1000
A3 1000
->
Two requests with different offsets, because the second A1/1000 entry must be skipped.
A1 1000
A2 1000
--
A3 1000

3)
A1 800
A2 1000
A1 1000
->
The first 800 bytes of the third SG are skipped.
A1 800
A2 1000
--
A1+800 800
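
For illustration, below is a minimal standalone sketch of the cropping
rule, detached from the QEMU data structures. The SGEntry type and the
crop_sg_part() helper are hypothetical and not part of the patch; they
just mirror the skip/trim decision added to dma_blk_cb() in the diff.
Sizes are treated as hexadecimal here, as in example 3 above
(0x1000 - 0x800 = 0x800).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for one scatter-gather entry. */
typedef struct {
    uint64_t base;
    uint64_t len;
} SGEntry;

/*
 * Decide how much of the current part [cur_addr, cur_addr + *cur_len)
 * may be handled, given the parts already read in this request.
 * Returns true if the part starts inside an earlier one and therefore
 * has to be skipped; otherwise *cur_len is trimmed so the part stops
 * before the first earlier part it would run into.
 */
static bool crop_sg_part(const SGEntry *prev, int nprev,
                         uint64_t cur_addr, uint64_t *cur_len)
{
    for (int i = 0; i < nprev; i++) {
        uint64_t end = prev[i].base + prev[i].len;
        if (prev[i].base <= cur_addr && end > cur_addr) {
            /* Starts inside an earlier part: skip the overlap. */
            if (end - cur_addr < *cur_len) {
                *cur_len = end - cur_addr;
            }
            return true;
        } else if (cur_addr <= prev[i].base && cur_addr + *cur_len > prev[i].base) {
            /* Runs into an earlier part: trim it short. */
            *cur_len = prev[i].base - cur_addr;
            return false;
        }
    }
    return false;
}

int main(void)
{
    /* Example 3 from above, with A1 = 0x100000 and A2 = 0x200000. */
    SGEntry done[] = { { 0x100000, 0x800 }, { 0x200000, 0x1000 } };
    uint64_t addr = 0x100000, len = 0x1000;

    while (len > 0) {
        uint64_t part = len;
        bool skip = crop_sg_part(done, 2, addr, &part);
        printf("%s %#llx len %#llx\n", skip ? "skip" : "read",
               (unsigned long long)addr, (unsigned long long)part);
        addr += part;
        len -= part;
    }
    return 0;
}

Built with any C99 compiler, this should print a skipped part of 0x800
at A1 followed by a read of 0x800 at A1+0x800, i.e. the split shown in
example 3.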

Signed-off-by: Pavel Dovgalyuk <address@hidden>
---
 dma-helpers.c |   57 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 53 insertions(+), 4 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index e8a26e81e1..d71512f707 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -13,6 +13,7 @@
 #include "trace-root.h"
 #include "qemu/thread.h"
 #include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
 
 /* #define DEBUG_IOMMU */
 
@@ -139,17 +140,65 @@ static void dma_blk_cb(void *opaque, int ret)
     dma_blk_unmap(dbs);
 
     while (dbs->sg_cur_index < dbs->sg->nsg) {
+        bool skip = false;
         cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
         cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
-        mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
-        if (!mem)
-            break;
-        qemu_iovec_add(&dbs->iov, mem, cur_len);
+
+        /*
+         * Make reads deterministic in icount mode.
+         * Windows sometimes issues disk read requests with
+         * overlapping SGs. This leads to non-determinism, because
+         * the resulting buffer contents may be a mix of several
+         * sectors.
+         * Crop the SGs that were already read in this request.
+         */
+        if (use_icount
+            && dbs->dir == DMA_DIRECTION_FROM_DEVICE) {
+            int i;
+            for (i = 0 ; i < dbs->sg_cur_index ; ++i) {
+                if (dbs->sg->sg[i].base <= cur_addr
+                    && dbs->sg->sg[i].base + dbs->sg->sg[i].len > cur_addr) {
+                    cur_len = MIN(cur_len,
+                        dbs->sg->sg[i].base + dbs->sg->sg[i].len - cur_addr);
+                    skip = true;
+                    break;
+                } else if (cur_addr <= dbs->sg->sg[i].base
+                    && cur_addr + cur_len > dbs->sg->sg[i].base) {
+                    cur_len = dbs->sg->sg[i].base - cur_addr;
+                    break;
+                }
+            }
+        }
+
+        assert(cur_len);
+        if (!skip) {
+            mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
+            if (!mem)
+                break;
+            qemu_iovec_add(&dbs->iov, mem, cur_len);
+        } else {
+            if (dbs->iov.size != 0) {
+                break;
+            }
+            /* skip this SG */
+            dbs->offset += cur_len;
+        }
+
         dbs->sg_cur_byte += cur_len;
         if (dbs->sg_cur_byte == dbs->sg->sg[dbs->sg_cur_index].len) {
             dbs->sg_cur_byte = 0;
             ++dbs->sg_cur_index;
         }
+
+        /*
+         * All remaining SGs were skipped.
+         * This is not the reschedule case, because the reads were
+         * already performed and the last SGs were skipped.
+         */
+        if (dbs->sg_cur_index == dbs->sg->nsg && dbs->iov.size == 0) {
+            dma_complete(dbs, ret);
+            return;
+        }
     }
 
     if (dbs->iov.size == 0) {



