Re: [PATCH v10 6/9] copy-on-read: skip non-guest reads if no copy needed


From: Andrey Shinkevich
Subject: Re: [PATCH v10 6/9] copy-on-read: skip non-guest reads if no copy needed
Date: Fri, 9 Oct 2020 15:56:50 +0300
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0) Gecko/20100101 Thunderbird/68.9.0

On 07.10.2020 22:28, Vladimir Sementsov-Ogievskiy wrote:
07.10.2020 22:01, Andrey Shinkevich wrote:

On 07.10.2020 13:06, Vladimir Sementsov-Ogievskiy wrote:
29.09.2020 15:38, Andrey Shinkevich wrote:
If the flag BDRV_REQ_PREFETCH is set, pass it on to the COR driver so that
it can skip the unneeded read. The COR algorithms can take it into account
for optimization. At the moment, that check is made during the block-stream
job.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
  block/copy-on-read.c | 14 ++++++++++----
  block/io.c           |  2 +-
  2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/block/copy-on-read.c b/block/copy-on-read.c
index f53f7e0..5389dca 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -145,10 +145,16 @@ static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
              }
          }
-        ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
-                                  local_flags);
-        if (ret < 0) {
-            return ret;
+        if ((flags & BDRV_REQ_PREFETCH) &&

BDRV_REQ_PREFETCH is documented to be used only together with BDRV_REQ_COPY_ON_READ. But here BDRV_REQ_COPY_ON_READ only appears as an intermediate flag inside the filter. We should change the documentation in block.h
in a separate patch (and probably the code in bdrv_aligned_preadv()).


OK, we will come here without the BDRV_REQ_PREFETCH flag set.

The BDRV_REQ_PREFETCH flag should be set in the stream job. Where should it be handled? I don't follow.


If we leave block/io.c unchanged in this patch, which I agree with, we will come to the COR driver with the flags hardcoded to 0:

#4  0x000055a22bb480cf in cor_co_preadv_part (bs=0x55a22d593710, offset=0, bytes=524288, qiov=0x0, qiov_offset=0, flags=0) at ../block/copy-on-read.c:149
#5  0x000055a22badcb1d in bdrv_driver_preadv (bs=0x55a22d593710, offset=0, bytes=524288, qiov=0x0, qiov_offset=0, flags=0) at ../block/io.c:1129
#6  0x000055a22baddc81 in bdrv_aligned_preadv (child=0x55a22d814780, req=0x7f8c1abffce0, offset=0, bytes=524288, align=1, qiov=0x0, qiov_offset=0, flags=512) at ../block/io.c:1515
#7  0x000055a22bade59a in bdrv_co_preadv_part (child=0x55a22d814780, offset=0, bytes=524288, qiov=0x0, qiov_offset=0, flags=BDRV_REQ_PREFETCH) at ../block/io.c:1757
#8  0x000055a22bade3d2 in bdrv_co_preadv (child=0x55a22d814780, offset=0, bytes=524288, qiov=0x0, flags=BDRV_REQ_PREFETCH) at ../block/io.c:1715
#9  0x000055a22baf5d09 in blk_do_preadv (blk=0x55a22d818c00, offset=0, bytes=524288, qiov=0x0, flags=BDRV_REQ_PREFETCH) at ../block/block-backend.c:1211
#10 0x000055a22baf5d61 in blk_co_preadv (blk=0x55a22d818c00, offset=0, bytes=524288, qiov=0x0, flags=BDRV_REQ_PREFETCH) at ../block/block-backend.c:1223
#11 0x000055a22bab4eba in stream_populate (blk=0x55a22d818c00, offset=0, bytes=524288) at ../block/stream.c:50
#12 0x000055a22bab52c2 in stream_run (job=0x55a22d810a20, errp=0x55a22d810aa0) at ../block/stream.c:162
#13 0x000055a22bab79f0 in job_co_entry (opaque=0x55a22d810a20) at ../job.c:908
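
That is because the generic layer forwards the request to the driver with the flags argument hardcoded to 0; roughly, paraphrasing the relevant part of bdrv_aligned_preadv() in block/io.c (a sketch, not an exact quote):

    /* Forward the request to the BlockDriver: the flags argument is a
     * literal 0 here, so a BDRV_REQ_PREFETCH set by the caller is dropped
     * and never reaches cor_co_preadv_part(). */
    if (bytes <= max_bytes && bytes <= max_transfer) {
        ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset, 0);
        goto out;
    }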

So the only way for the COR filter driver to distinguish guest reads from stream-job reads is to check the qiov pointer for NULL and adjust the flags accordingly. This is what I am going to do in the next version.

Andrey

To distinguish guest reads from stream-job reads, we would set the flag here by checking whether qiov is NULL:


diff --git a/block/copy-on-read.c b/block/copy-on-read.c
index 4e3b1c5..df2c2ab 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -144,6 +144,9 @@ static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
                                            n, &n);
              if (ret) {
                  local_flags |= BDRV_REQ_COPY_ON_READ;
+                if (!qiov) {
+                    local_flags |= BDRV_REQ_PREFETCH;

If qiov is NULL, it means that flags must already include BDRV_REQ_PREFETCH. I think local_flags should inherit flags (see the sketch below the hunk).

+                }
              }
          }

Andrey
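
For illustration, "local_flags should inherit flags" could look roughly like this inside cor_co_preadv_part() (a sketch only, reusing the names from the hunk above, not a hunk from the series):

    /* Start from the caller's flags: a BDRV_REQ_PREFETCH set by the stream
     * job is then inherited instead of being re-derived from qiov == NULL. */
    local_flags = flags;

    /* ... allocation check as in the hunk above ... */
    if (ret) {
        /* Only COPY_ON_READ needs to be added locally. */
        local_flags |= BDRV_REQ_COPY_ON_READ;
    }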

+            !(local_flags & BDRV_REQ_COPY_ON_READ)) {
+            /* Skip non-guest reads if no copy needed */
+        } else {
+

Extra newline?

+            ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
+                                      local_flags);
+            if (ret < 0) {
+                return ret;
+            }
          }
          offset += n;
diff --git a/block/io.c b/block/io.c
index 11df188..62b75a5 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1388,7 +1388,7 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
              qemu_iovec_init_buf(&local_qiov, bounce_buffer, pnum);
              ret = bdrv_driver_preadv(bs, cluster_offset, pnum,
-                                     &local_qiov, 0, 0);
+                                     &local_qiov, 0, flags & BDRV_REQ_PREFETCH);

Why? At this point we want to read. We'll write the data back a few lines below. What will we write if the underlying driver decides to do nothing because of BDRV_REQ_PREFETCH?
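
For context: a few lines below this hunk, the same bounce buffer is written back to the top layer, roughly like this (paraphrasing bdrv_co_do_copy_on_readv(), not an exact quote), which is why the read above cannot simply be skipped:

    /* The data just read into bounce_buffer is written back so that the
     * block gets copied up. If the read did nothing because of
     * BDRV_REQ_PREFETCH, bounce_buffer would hold garbage here. */
    ret = bdrv_driver_pwritev(bs, cluster_offset, pnum,
                              &local_qiov, 0, BDRV_REQ_WRITE_UNCHANGED);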


Please see my comment above.

Anyway, BDRV_REQ_PREFETCH here is wrong. You should not pass any qiov if you set the BDRV_REQ_PREFETCH flag.

If we come to bdrv_co_do_copy_on_readv(), it means that we have the COPY_ON_READ flag. Therefore we will handle PREFETCH and COPY_ON_READ here in the generic layer, and therefore we shouldn't pass them to the driver.

On the contrary, if we have the PREFETCH flag in bdrv_aligned_preadv() but not COPY_ON_READ at the same time, it means that we must pass the PREFETCH flag to the driver if it supports it, and do nothing if the driver
doesn't support PREFETCH. That's how I see it.
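
Put as code, that policy could look roughly like this (an illustrative sketch only; supported_read_flags is used here as a hypothetical per-BDS field analogous to the existing supported_write_flags):

    if (flags & BDRV_REQ_COPY_ON_READ) {
        /* COPY_ON_READ (and PREFETCH together with it) is handled in the
         * generic layer; these flags are not forwarded to the driver. */
        ret = bdrv_co_do_copy_on_readv(child, offset, bytes,
                                       qiov, qiov_offset, flags);
    } else if (flags & BDRV_REQ_PREFETCH) {
        if (bs->supported_read_flags & BDRV_REQ_PREFETCH) {
            /* The driver understands PREFETCH: forward it. */
            ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset,
                                     BDRV_REQ_PREFETCH);
        } else {
            /* The driver does not support PREFETCH: nothing to do. */
            ret = 0;
        }
    } else {
        ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset, 0);
    }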


              if (ret < 0) {
                  goto err;
              }
