
Re: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS


From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS
Date: Thu, 26 Mar 2009 14:10:20 +0200
User-agent: Thunderbird 2.0.0.21 (X11/20090320)

Stefano Stabellini wrote:
Avi Kivity wrote:

If cpu_physical_memory_map() returns NULL, then dma-helpers.c will stop collecting sg entries and submit the I/O. Tuning that will control how vectored requests are submitted.



I understand your suggestion now, something like:

---

diff --git a/dma-helpers.c b/dma-helpers.c
index 96a120c..6c43b97 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -96,6 +96,11 @@ static void dma_bdrv_cb(void *opaque, int ret)
     while (dbs->sg_cur_index < dbs->sg->nsg) {
         cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
         cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
+        if (dbs->iov.size + cur_len > DMA_LIMIT) {
+            cur_len = DMA_LIMIT - dbs->iov.size;
+            if (cur_len <= 0)
+                break;
+        }
         mem = cpu_physical_memory_map(cur_addr, &cur_len, !dbs->is_write);
         if (!mem)
             break;

---

would work for me.

However it is difficult to put that code inside cpu_physical_memory_map,
since there is no reference available to link together all the mapping
requests that belong to the same DMA transfer.

It would be fine here, but see below.

If your problem is specifically with the bdrv_aio_rw_vector bounce buffer, then note that this is a temporary measure until vectored AIO is in place, through preadv/pwritev and/or linux-aio IO_CMD_PREADV. You should either convert to that when it is merged, or implement request splitting in bdrv_aio_rw_vector.

Can you explain your problem in more detail?



My problem is that my block driver has a size limit for read and write
operations.

Then I think the place to split the requests is in your block format driver, not in the generic code. If you run one device with the limited block format driver and another device with a different, unlimited block format driver, the second device would still be subject to the first driver's limitation.

I realize your use case will probably not trigger this, but it does indicate you're limiting at the wrong layer. It places the burden on all callers of block format drivers instead of centralizing it.
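A minimal sketch of what splitting inside the block format driver could look like. This is plain synchronous C, not QEMU code: MAX_XFER_BYTES and submit_chunk() are hypothetical stand-ins for whatever limit and low-level submit path the driver really has, and a real driver would of course do this asynchronously with completion callbacks; only the chunking arithmetic is shown.

/*
 * Hedged sketch, not QEMU code: split one large request into
 * sub-requests that never exceed a driver-imposed limit.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_XFER_BYTES (64 * 1024)   /* assumed per-request limit */

/* stand-in for the driver's real submit function */
static int submit_chunk(uint64_t offset, const uint8_t *buf, size_t len)
{
    printf("submit %zu bytes at offset %llu\n",
           len, (unsigned long long)offset);
    (void)buf;
    return 0;
}

/* Submit 'len' bytes starting at 'offset', one bounded chunk at a time. */
static int submit_split(uint64_t offset, const uint8_t *buf, size_t len)
{
    while (len > 0) {
        size_t chunk = len > MAX_XFER_BYTES ? MAX_XFER_BYTES : len;
        int ret = submit_chunk(offset, buf, chunk);
        if (ret < 0) {
            return ret;       /* propagate the first error */
        }
        offset += chunk;
        buf += chunk;
        len -= chunk;
    }
    return 0;
}

Keeping this loop inside the one driver that has the limit means no other driver, and no caller of the block layer, ever has to know about it.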

When preadv/pwritev are in place I could limit the transfer size
directly in raw_aio_preadv/pwritev, but I would also have to update the
iovec size field to reflect that, which I think is a little bit ugly.

Just copy the iovec for each sub-request.
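A minimal illustration of that idea, using plain POSIX iovecs rather than the QEMU helpers (iov_copy_limit is a hypothetical name): copy the caller's vector into a per-sub-request copy that covers only the bytes of that sub-request, so the original vector is never modified.

/*
 * Hedged sketch: copy entries from 'src' (src_cnt entries) into 'dst'
 * until 'limit' bytes are covered, truncating the last entry if needed.
 * Returns the number of entries written; 'dst' must have room for
 * src_cnt entries.
 */
#include <stddef.h>
#include <sys/uio.h>

static int iov_copy_limit(struct iovec *dst, const struct iovec *src,
                          int src_cnt, size_t limit)
{
    int i, n = 0;

    for (i = 0; i < src_cnt && limit > 0; i++) {
        dst[n] = src[i];
        if (dst[n].iov_len > limit) {
            dst[n].iov_len = limit;   /* trim the final entry */
        }
        limit -= dst[n].iov_len;
        n++;
    }
    return n;
}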

--
error compiling committee.c: too many arguments to function




