Re: [Qemu-devel] Para-virtualized ram-based filesystem?


From: Ritchie, Stuart
Subject: Re: [Qemu-devel] Para-virtualized ram-based filesystem?
Date: Fri, 15 Apr 2011 18:58:32 -0500
User-agent: Microsoft-MacOutlook/14.2.0.101115

On 4/15/11 2:43 PM, "Anthony Liguori" <address@hidden> wrote:

>On 04/15/2011 04:09 PM, Ritchie, Stuart wrote:
>> Hi all,
>>
>> Has anyone looked at implementing a para-virtualized ram-based
>> filesystem for qemu?  Or any similar dynamic memory mapping techniques
>> for running guests?
>>
>> What I had in mind would be a convenient, zero-copy mechanism for
>> sharing dynamically allocated, memory mapped files between host and
>> guests.
>>
>> The host provides a primary memory-mapped file system (ramfs, tmpfs,
>> hugetlbfs, etc), and the guest kernel and qemu use this host fs to
>> provide the illusion to guest applications that the filesystem is
>> local.
>>
>> The guest kernel contains a new filesystem, say call it vramfs,
>> implementing the various VFS handlers for a para-virt filesystem.  These
>> handlers call out to qemu, which in turn emulates them by invoking the
>> required host system calls.
>
>You can do this with ivshmem today.  You give it a path to a shared
>memory file, and then there's a path in sysfs that you can mmap() in
>userspace in the guest.

Please correct me if I am wrong, but with ivshmem you must manage your
world within a single, fixed-size region.  I appreciate the simplicity of
mapping the whole region in one go, but our requirements are a bit
different.  Even if you could pass multiple -device ivshmem instances,
each would still be a fixed environment.  Right?
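
For reference, here is roughly how I understand the ivshmem path on the
guest side (the PCI address and region size below are made up, and this
is only a sketch I have not run against a current build):

/* Guest side: ivshmem exposes its shared memory as PCI BAR 2, which can
 * be mapped through sysfs.  The BDF (0000:00:04.0) and the size are
 * placeholders and must match the actual -device ivshmem setup. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *bar2 = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    size_t size = 256 * 1024 * 1024;   /* whole region, fixed at startup */

    int fd = open(bar2, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* p now covers the one fixed-size region shared with the host; any
     * sub-allocation of "files" inside it is up to us. */
    munmap(p, size);
    close(fd);
    return 0;
}

That works, but everything after the mmap() is exactly the bookkeeping
we are trying to avoid reimplementing.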

Guest applications need to manage an arbitrary number of dynamic files,
say a few dozen.  Files can be created, deleted, grown, and shrunk
arbitrarily as applications see fit.  Some are as small as 2 KB; others
could incrementally grow to 1 GB or more.  Existing code depends on this
file-based abstraction, and there is pressure against changing it.
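
To make that concrete, each process today does the usual POSIX dance,
more or less like the sketch below (the mount point and file name are
invented for illustration):

/* The file-based pattern the existing applications depend on: create a
 * file, size it, map it, and possibly grow it later.  Paths are invented. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t size = 2048;                     /* starts small */

    int fd = open("/mnt/vramfs/feed.dat", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    p[0] = 'x';
    munmap(p, size);

    /* ...and may later grow by orders of magnitude and be remapped. */
    size_t bigger = 64UL * 1024 * 1024;
    if (ftruncate(fd, bigger) < 0) { perror("ftruncate"); return 1; }
    p = mmap(NULL, bigger, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    munmap(p, bigger);
    close(fd);
    return 0;
}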

Each file is created and owned by its own process, so synchronization
within a file is not necessary.  A single guest contains a number of
processes that create and own these files.

A guest may exit and restart.  When it restarts, its processes should be
able to open their files, map them in, and carry on.

It must also be possible to hand control of files from one guest to
another, again using zero-copy memory mapping.

Seems to me that a para-virt ram-based filesystem fits the bill here.
The idea leverages the host fs for indexing, memory management, and
synchronization, which is otherwise work we would have to do ourselves
within a single fixed region.
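
On the host side, emulating a guest "create and map" request would boil
down to something like the sketch below.  The tmpfs path and helper name
are hypothetical, and the hard part (actually exposing the resulting host
mapping to the guest as guest-physical memory) is deliberately left out,
because that is exactly the qemu infrastructure question:

/* Host-side sketch of what qemu might do when the guest's vramfs asks to
 * create and map a file.  The path is hypothetical; the step that wires
 * the mapping into guest-physical memory is left as a comment. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void *vramfs_host_create(const char *name, size_t size)
{
    char path[256];
    snprintf(path, sizeof(path), "/dev/shm/vramfs-%s", name);

    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) {
        return NULL;
    }
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return NULL;
    }

    /* The host fs does the allocation, indexing, and reclaim; qemu ends
     * up with a plain host-virtual mapping... */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED) {
        return NULL;
    }

    /* ...which it must then map into the guest's physical address space.
     * That is the part that stresses (or needs new) qemu infrastructure. */
    return p;
}

int main(void)
{
    return vramfs_host_create("example", 1 << 20) ? 0 : 1;
}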

The cost of all this flexibility is a large number of micro-mappings, and
I'm not sure the current qemu infrastructure is designed for that.  For
starters, RAMBlocks are managed on a singly-linked list; there are
probably plenty of other scaling issues as well.

How does that sound?

Cheers,
--Stuart

PS. Sorry about the corporate disclaimer; maybe folks in other Fortune
500s can give me tips on how to fix it. :-)

>
>Regards,
>
>Anthony Liguori
>
>> Handling mmap/munmap is tricky -- but this is where the magic is.
>> There does seem to be some qemu infrastructure to dynamically map
>> memory into a running system, though it may be designed for different
>> requirements (e.g., device memory).
>>
>> I currently have the resources to work on this and am looking forward to
>> contributing my work back to the community.  I would appreciate any help
>> or pointers on this effort.
>>
>> Cheers,
>> --Stuart





