Re: [Qemu-devel] [PATCH 05/10] xen: add block device backend driver.


From: Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH 05/10] xen: add block device backend driver.
Date: Thu, 2 Apr 2009 19:02:09 +0200
User-agent: Mutt/1.3.28i

On Wed, Apr 01, 2009 at 11:39:37PM +0200, Gerd Hoffmann wrote:
> +static void inline blkif_get_x86_32_req(blkif_request_t *dst, 
> blkif_x86_32_request_t *src)
> +{

> +static void inline blkif_get_x86_64_req(blkif_request_t *dst, 
> blkif_x86_64_request_t *src)
> +{

I think you'd be better off moving them to the .c file as normal static
functions and leaving the inlining decision to the compiler.
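A sketch of the suggested change, using hypothetical stand-in structs for the Xen blkif request types (the real definitions live in the Xen headers): the helper becomes a plain static function in the .c file, with no `inline` keyword, and the compiler inlines it on its own if that's profitable.

```c
/* Hypothetical stand-ins for blkif_request_t / blkif_x86_32_request_t;
 * only here so the sketch is self-contained. */
struct blkif_request        { unsigned long id; int operation; };
struct blkif_x86_32_request { unsigned long id; int operation; };

/* Plain static function in the .c file -- no "inline".  The compiler
 * is free to inline it (or not) based on its own heuristics. */
static void blkif_get_x86_32_req(struct blkif_request *dst,
                                 struct blkif_x86_32_request *src)
{
    dst->id = src->id;
    dst->operation = src->operation;
}
```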

> +
> +/*
> + *  FIXME: the code is designed to handle multiple outstanding
> + *         requests, which isn't used right now.  Plan is to
> + *         switch over to the aio block functions once they got
> + *         vector support.
> + */

We already have bdrv_aio_readv/writev, which currently linearize the
buffer underneath.  Hopefully Anthony will have committed the patch to
implement the real one while I'm writing this, too :)

After those patches bdrv_aio_read/write will be gone, so this code won't
compile anymore either.
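For context, "linearize the buffer underneath" means the interim bdrv_aio_readv/writev emulation copies the caller's scatter/gather list into one contiguous bounce buffer before issuing a single-buffer request. A minimal self-contained sketch of that copy step (an illustration of the idea, not QEMU's actual code):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Copy a scatter/gather list into one contiguous bounce buffer, the
 * way the interim vectored-aio emulation does until the block layer
 * handles vectors natively.  The caller frees the returned buffer. */
static void *linearize_iov(const struct iovec *iov, int iovcnt,
                           size_t *total)
{
    size_t len = 0, off = 0;
    char *buf;
    int i;

    for (i = 0; i < iovcnt; i++)
        len += iov[i].iov_len;
    buf = malloc(len);
    if (!buf)
        return NULL;
    for (i = 0; i < iovcnt; i++) {
        memcpy(buf + off, iov[i].iov_base, iov[i].iov_len);
        off += iov[i].iov_len;
    }
    *total = len;
    return buf;
}
```

The extra copy is exactly the overhead the "real" vectored implementation removes by passing the iovec all the way down.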

> +static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
> +{
> +    struct XenBlkDev *blkdev = ioreq->blkdev;
> +    int i, len = 0;
> +    off_t pos;
> +
> +    if (-1 == ioreq_map(ioreq))
> +     goto err;
> +
> +    ioreq->aio_inflight++;
> +    if (ioreq->presync)
> +     bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */
> +
> +    switch (ioreq->req.operation) {
> +    case BLKIF_OP_READ:
> +     pos = ioreq->start;
> +     for (i = 0; i < ioreq->vecs; i++) {
> +            ioreq->aio_inflight++;
> +            bdrv_aio_read(blkdev->bs, pos / BLOCK_SIZE,
> +                          ioreq->vec[i].iov_base,
> +                          ioreq->vec[i].iov_len / BLOCK_SIZE,
> +                          qemu_aio_complete, ioreq);
> +         len += ioreq->vec[i].iov_len;
> +         pos += ioreq->vec[i].iov_len;
> +     }

bdrv_flush doesn't actually empty the aio queues but only issues
an fsync.  So we could still re-order requests around the barrier
with this implementation.  I will soon submit a real block-layer-level
barrier implementation that just allows flagging a bdrv_aio_read/write
request as a barrier and deals with the ordering under the hood.
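The proposed scheme can be sketched in miniature: each queued request carries a barrier flag, and a flagged request is only dispatched once every earlier request has completed, while plain requests may run concurrently. This is a toy model of the ordering rule only, not the actual block-layer patch; all names here are hypothetical.

```c
#include <stdbool.h>

#define MAXQ 16

enum state { QUEUED, INFLIGHT, DONE };

struct req {
    int        id;
    bool       barrier;  /* must not be re-ordered past earlier requests */
    enum state st;
};

static struct req queue[MAXQ];
static int nreq;

static void submit(int id, bool barrier)
{
    queue[nreq].id = id;
    queue[nreq].barrier = barrier;
    queue[nreq].st = QUEUED;
    nreq++;
}

/* Pick the next QUEUED request that may be dispatched, or -1 if none.
 * Non-barrier requests may be dispatched concurrently; a barrier
 * request has to wait until every earlier request is DONE, which also
 * holds back everything queued behind it. */
static int next_dispatchable(void)
{
    int i, j;

    for (i = 0; i < nreq; i++) {
        if (queue[i].st != QUEUED)
            continue;
        if (queue[i].barrier) {
            for (j = 0; j < i; j++)
                if (queue[j].st != DONE)
                    return -1;  /* barrier blocked; later reqs wait too */
        }
        return i;
    }
    return -1;
}
```

Hanging the barrier off the request itself, rather than off a global flush call, is what lets the block layer enforce ordering "under the hood" without draining the whole queue.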
