
Re: [Qemu-devel] Adding a persistent writeback cache to qemu

From: Sage Weil
Subject: Re: [Qemu-devel] Adding a persistent writeback cache to qemu
Date: Thu, 20 Jun 2013 08:58:19 -0700 (PDT)
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)

On Thu, 20 Jun 2013, Stefan Hajnoczi wrote:
> > The concrete problem here is that flashcache/dm-cache/bcache don't
> > work with the rbd (librbd) driver, as flashcache/dm-cache/bcache
> > cache access to block devices (in the host layer), and with rbd
> > (for instance) there is no access to a block device at all. block/rbd.c
> > simply calls librbd which calls librados etc.
> > 
> > So the context switches etc. I am avoiding are the ones that would
> > be introduced by using kernel rbd devices rather than librbd.
> I understand the limitations with kernel block devices - their
> setup/teardown is an extra step outside QEMU and privileges need to be
> managed.  That basically means you need to use a management tool like
> libvirt to make it usable.
> But I don't understand the performance angle here.  Do you have profiles
> that show kernel rbd is a bottleneck due to context switching?
> We use the kernel page cache for -drive file=test.img,cache=writeback
> and no one has suggested reimplementing the page cache inside QEMU for
> better performance.
> Also, how do you want to manage QEMU page cache with multiple guests
> running?  They are independent and know nothing about each other.  Their
> process memory consumption will be bloated and the kernel memory
> management will end up having to sort out who gets to stay in physical
> memory.
> You can see I'm skeptical of this and think it's premature optimization,
> but if there's really a case for it with performance profiles then I
> guess it would be necessary.  But we should definitely get feedback from
> the Ceph folks too.
> I'd like to hear from Ceph folks what their position on kernel rbd vs
> librados is.  Which one do they recommend for QEMU guests and what are
> the pros/cons?

I agree that a flashcache/bcache-like persistent cache would be a big win 
for qemu + rbd users.  

There are a few important issues with librbd vs kernel rbd:

 * librbd tends to get new features more quickly than kernel rbd 
   (although now that layering has landed in 3.10 this will be less 
   painful than it was).

 * Using kernel rbd means users need bleeding edge kernels, a non-starter 
   for many orgs that are still running things like RHEL.  Bug fixes are 
   difficult to roll out, etc.

 * librbd has an in-memory cache that behaves similarly to an HDD's cache 
   (e.g., it forces writeback on flush).  This improves performance 
   significantly for many workloads.  Of course, having a bcache-like 
   layer would mitigate this.

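For reference, the librbd cache mentioned above is configured on the 
client side in ceph.conf.  A minimal illustrative snippet (the values 
here are examples only, not tuning recommendations):

```ini
[client]
    rbd cache = true
    # cache size in bytes (example value, ~32 MB)
    rbd cache size = 33554432
    # act as writethrough until the guest issues its first flush,
    # so guests that never flush are not exposed to data loss
    rbd cache writethrough until flush = true
```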
I'm not really sure what the best path forward is.  Putting the 
functionality in qemu would benefit lots of other storage backends; 
putting it in librbd would capture various other librbd users (xen, tgt, 
and future users like hyper-v); and using new kernels works today but 
creates a lot of friction for operations.
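To make the two attachment paths concrete, the host-side setup looks 
roughly like this (command lines are illustrative sketches; the pool 
and image names are placeholders):

```shell
# librbd path: QEMU speaks to the cluster directly via librbd/librados,
# so there is no host block device for dm-cache/bcache/flashcache to sit on
qemu-system-x86_64 -drive format=raw,file=rbd:rbd/myimage

# kernel rbd path: map the image to a host block device first, then hand
# that device to QEMU; a block-layer cache can be layered on the device
rbd map rbd/myimage                     # creates e.g. /dev/rbd0
qemu-system-x86_64 -drive format=raw,file=/dev/rbd0
```

The second path is what allows existing kernel caching layers to work 
today, at the cost of the setup/teardown and kernel-version friction 
discussed above.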

