Re: [Qemu-devel] [PATCH] avoid core reading with bdrv_read (qemu-io)


From: Frediano Ziglio
Subject: Re: [Qemu-devel] [PATCH] avoid core reading with bdrv_read (qemu-io)
Date: Tue, 19 Jul 2011 10:22:30 +0200

2011/7/19 Kevin Wolf <address@hidden>:
> On 19.07.2011 09:33, Frediano Ziglio wrote:
>> This patch applies to Kevin's coroutine-block branch and avoids a core
>> dump. It fixes the "qcow: Use coroutines" patch. Test case:
>>
>> $ ./qemu-img create -f qcow aaa.img 1G
>> Formatting 'aaa.img', fmt=qcow size=1073741824 encryption=off
>> $ ./qemu-io aaa.img
>> qemu-io> read 1024 1024
>> Segmentation fault
>>
>> Signed-off-by: Frediano Ziglio <address@hidden>
>
> Thanks for the report. I'll update the patch, but in a slightly
> different way that matches the old code better:
>
> diff --git a/block/qcow.c b/block/qcow.c
> index 6f7973c..6447c2a 100644
> --- a/block/qcow.c
> +++ b/block/qcow.c
> @@ -573,7 +573,6 @@ static int qcow_aio_read_cb(void *opaque)
>
>     if (acb->nb_sectors == 0) {
>         /* request completed */
> -        qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
>         return 0;
>     }
>
> @@ -647,6 +646,7 @@ static int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
>     qemu_co_mutex_unlock(&s->lock);
>
>     if (acb->qiov->niov > 1) {
> +        qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
>         qemu_vfree(acb->orig_buf);
>     }
>     qemu_aio_release(acb);
>

Yes, my patch also removed some dangling pointers, which I don't like,
though they are not a problem with the current code.
In the ret < 0 (error) case your version could copy data that is probably
uninitialized. I don't know whether that data is used on failure, but if
the memory is shared with the guest (perhaps with virtio), this could lead
to security issues. A memory debugger like valgrind might also complain
about that copy.
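
For example (a rough sketch only, not a tested patch, reusing the names
from the qcow_co_readv hunk quoted above), the copy back into the caller's
iovec could be made conditional on success:

    qemu_co_mutex_unlock(&s->lock);

    if (acb->qiov->niov > 1) {
        if (ret >= 0) {
            /* only propagate the bounce buffer when the read succeeded */
            qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
        }
        qemu_vfree(acb->orig_buf);
    }
    qemu_aio_release(acb);

That way the error path never copies uninitialized bounce-buffer contents
into guest-visible memory.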

Frediano


