
Re: [Qemu-devel] [RFC PATCH 1/1] ceph/rbd block driver for qemu-kvm

From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC PATCH 1/1] ceph/rbd block driver for qemu-kvm
Date: Tue, 25 May 2010 16:36:33 +0300
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20100330 Fedora/3.0.4-1.fc12 Thunderbird/3.0.4

On 05/25/2010 04:29 PM, Anthony Liguori wrote:
> The current situation is that those block format drivers only exist in qemu.git or as patches. Surely that's even more unhappiness.

Confusion could be mitigated:

  $ qemu -module my-fancy-block-format-driver.so
my-fancy-block-format-driver.so does not support this version of qemu (0.19.2). Please contact address@hidden

The question is how many such block format drivers we expect. We now have two in the pipeline (ceph, sheepdog), and it's reasonable to assume we'll want an lvm2 driver and a btrfs driver. This is an area with a lot of activity and a relatively simple interface.

> If we expose a simple interface, I'm all for it. But BlockDriver is not simple, and things like the snapshotting API need love.

> Of course, there's certainly a question of why we're solving this in qemu at all. Wouldn't it be more appropriate to either (1) implement a kernel module for ceph/sheepdog if performance matters

We'd need a kernel-level generic snapshot API for this eventually.

> or (2) implement BUSE to complement FUSE and CUSE to enable proper userspace block devices.

Likely slow due to lots of copying.  Also needs a snapshot API.

(ABUSE was proposed a while ago by Zach).

> If you want to use a block device within qemu, you almost certainly want to be able to manipulate it on the host using standard tools (like mount and parted), so it stands to reason that addressing this in the kernel makes more sense.

qemu-nbd also allows this.
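
For reference, the qemu-nbd route looks roughly like this; the image path, device node, and mount point are examples, and the commands need root plus the kernel nbd module:

```shell
# Export a qcow2 image as a host block device via the kernel nbd driver,
# so standard tools (parted, mount, ...) can operate on it.
modprobe nbd max_part=8                  # load the nbd module, allow partitions
qemu-nbd --connect=/dev/nbd0 guest.qcow2 # attach the image to /dev/nbd0
parted /dev/nbd0 print                   # ordinary tools now see a block device
mount /dev/nbd0p1 /mnt/guest             # mount the first partition

# ...and tear it down again:
umount /mnt/guest
qemu-nbd --disconnect /dev/nbd0
```

The copying cost mentioned above for a FUSE-style approach applies here too: every block crosses the kernel/userspace boundary through the nbd socket.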

This reasoning also applies to qcow2, btw.

error compiling committee.c: too many arguments to function
