From: Vangelis Koukis
Subject: Re: [Qemu-devel] [PATCH 0/1] Support Archipelago as a QEMU block backend
Date: Fri, 30 May 2014 16:08:32 +0300
User-agent: Mutt/1.5.23 (2014-03-12)

On Fri, May 30, 2014 at 11:12:55am +0200, Kevin Wolf wrote:
> Am 29.05.2014 um 13:14 hat Chrysostomos Nanakos geschrieben:
> > Hello team,
> >
> > this is a patch implementing support for a new storage layer,
> > Archipelago [1][2].
> >
> > We've been using Archipelago in our IaaS public cloud production
> > environment for over a year now, along with Google Ganeti [3]
> > and Ceph [4]. We are currently moving from accessing Archipelago
> > through a kernel block driver to accessing it directly through QEMU,
> > using this patch.
> >
> > Archipelago already supports different storage backends such as
> > NFS and Ceph's RADOS, and has initial support for Gluster
> > (with improvements from the Gluster community coming soon [5]).
> 
> I'm wondering, what is the advantage of using Archipelago in order to
> access NFS, Ceph or Gluster volumes when qemu already has support for
> all of them?
> 
> Kevin


Hello Kevin,

The point of using Archipelago is not to serve as an access layer for
NFS, Ceph, or Gluster. Yes, QEMU can already access NFS, Ceph, and
Gluster directly: for NFS a VM disk would be a file on NFS, for Ceph an
rbd image on RADOS, and for Gluster a file on GlusterFS.

Archipelago is an abstraction layer that implements VM disks as
collections of fixed-size named blocks, with support for thin clones and
snapshots, independently of where the named blocks are actually stored.
It keeps a map for every VM disk, tracking the blocks the disk comprises,
and performs snapshots and clones by manipulating maps and having them
point to the same underlying blocks.
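
To give a rough idea of what we mean by maps, here is a toy sketch in
Python. It is only illustrative: the class names, the block-naming
scheme and the dict-like 'store' are made up for this example, not
Archipelago's actual data structures or on-disk format.

import uuid

BLOCK_SIZE = 4 * 1024 * 1024           # fixed-size blocks, e.g. 4 MiB

class Volume:
    """A VM disk: a map from block index to the name of a stored block."""
    def __init__(self, name, blocks=None):
        self.name = name
        self.blocks = dict(blocks or {})   # index -> block name in the store

    def snapshot_or_clone(self, new_name):
        # Snapshots and clones copy only the map; the new volume points
        # at the very same underlying named blocks, so no data moves.
        return Volume(new_name, self.blocks)

    def write(self, index, data, store):
        # Copy-on-write: a modified block gets a fresh name, so other
        # maps still referencing the old block are unaffected.
        block_name = "%s-%s" % (self.name, uuid.uuid4().hex)
        store[block_name] = data           # 'store' is any dict-like backend
        self.blocks[index] = block_name

    def read(self, index, store):
        return store.get(self.blocks.get(index), b"\0" * BLOCK_SIZE)

store = {}                                 # stand-in for NFS/RADOS/Gluster
base = Volume("debian-base")
base.write(0, b"...block data...", store)
vm1 = base.snapshot_or_clone("vm1")        # instant, zero-copy clone
vm1.write(0, b"vm1 data", store)           # COW: base still sees its block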

The implementation of this functionality is independent of where the
underlying blocks are actually stored. Think of the following scenario:
you start with block storage on the NFS backend, where blocks are
individual files. You create your VMs, snapshot them, clone them back
into new VMs, and you can live-migrate them to any node with access to
this NFS share. At some point you decide to move to Ceph. You sync
your blocks from NFS to individual objects on Ceph's RADOS, and you
can keep using your VMs, snapshots, and clones as you did before.
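
To sketch what that backend independence looks like (again a toy
example, not Archipelago's actual backend API), the maps above never
need to know which store holds the named blocks:

import os

class FileBackend:
    """Named blocks stored as individual files on a mounted FS (e.g. NFS)."""
    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, name, data):
        with open(os.path.join(self.root, name), "wb") as f:
            f.write(data)

    def get(self, name):
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()

    def list(self):
        return os.listdir(self.root)

# A RADOS or Gluster backend would implement the same put/get/list
# interface on objects instead of files; the volume maps themselves
# never change.

def migrate(src, dst):
    # Moving stores is a block-by-block copy under the same names, so
    # every existing map (and thus every snapshot and clone) stays valid.
    for name in src.list():
        dst.put(name, src.get(name))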

So this is not a driver for accessing NFS through Archipelago. It's a
driver for accessing Archipelago resources, which happen to be stored on
one of its backends: NFS, Ceph, GlusterFS, any other store for which an
Archipelago backend driver may appear in the future, or a combination of
the above.

In a nutshell, we're trying to decouple the storage logic (clones,
snapshots, deduplication) from the actual data store. If you'd like to
know more, you can check out the latest presentations from SC'13 [1]
and LinuxCon [2], or take a look at the June issue of ;login: [3],
where we have an article on our current use of Archipelago over Ceph in
production and describe why we built it and what we gain from it.

Regards,
Vangelis.

[1] https://pithos.okeanos.grnet.gr/public/6SCbXPVULEaOIaWe69uYG4
[2] http://youtu.be/ruzo36xdDFo
[3] https://www.usenix.org/publications/login/june-2014-vol-39-no-3

-- 
Vangelis Koukis
address@hidden
OpenPGP public key ID:
pub  1024D/1D038E97 2003-07-13 Vangelis Koukis <address@hidden>
     Key fingerprint = C5CD E02E 2C78 7C10 8A00  53D8 FBFC 3799 1D03 8E97

Only those who will risk going too far
can possibly find out how far one can go.
        -- T.S. Eliot


