
Re: [Qemu-devel] [PATCH 0/5] hostmem-file: Add "persistent" option


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH 0/5] hostmem-file: Add "persistent" option
Date: Fri, 11 Aug 2017 17:44:55 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

On Fri, Aug 11, 2017 at 01:33:00PM -0300, Eduardo Habkost wrote:
> CCing Zack Cornelius.
> 
> On Wed, Jun 14, 2017 at 05:29:55PM -0300, Eduardo Habkost wrote:
> > This series adds a new "persistent" option to
> > memory-backend-file.  The new option will be useful if
> > somebody is sharing RAM contents in a file using share=on, but
> > doesn't need them to be flushed to disk when QEMU exits.
> > 
> > Internally, it will trigger a madvise(MADV_REMOVE) or
> > fallocate(FALLOC_FL_PUNCH_HOLE) call when the memory backend is
> > destroyed.
> > 
> > To make sure we actually trigger the new code when QEMU exits, the
> > first patch in the series ensures we destroy all user-created
> > objects when exiting QEMU.
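For reference, the madvise()/fallocate() discard described above
amounts to something like the sketch below. This is not the actual
QEMU code; fd, addr and len stand in for the backend's file
descriptor and its shared mapping.

  #define _GNU_SOURCE
  #include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
  #include <sys/mman.h>   /* madvise(), MADV_REMOVE */
  #include <stdio.h>

  /* Discard the contents of a file-backed region without unlinking
   * the file, so other (or future) processes can still attach to it. */
  static void discard_backing(int fd, void *addr, size_t len)
  {
      /* Deallocate the blocks but keep the file size unchanged. */
      if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                    0, len) == 0) {
          return;
      }

      /* Fallback where fallocate() is unsupported: on a shared
       * mapping, MADV_REMOVE frees both the pages and their
       * backing store. */
      if (madvise(addr, len, MADV_REMOVE) != 0) {
          perror("discard failed; contents may still be flushed");
      }
  }

Either call drops the dirty pages, so there is nothing left for the
kernel to write back when the backend is destroyed.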
> 
> So, before sending a new version of this, we need to clarify one
> thing: why exactly wouldn't unlink()+close() be enough to avoid
> having data unnecessarily flushed to the backing store, making
> the new option unnecessary?

If the backend file is shared between processes, unlinking
it feels bad - you're assuming no /future/ process wants to
attach to the file. Also, if QEMU aborts for any reason, the
cleanup code is never going to run.

> I would expect close() to not write any data unnecessarily if
> there are no remaining references to the file.  Why/when is this
> not the case?

Isn't the unlink() delayed until such time as *all* open handles
on that file are closed? If so, it seems that if 2 processes
have the file open, and one closes it, it is still valid for the
kernel to flush data out to the backing store if it needs
to free up working memory consumed by the I/O cache.

If this weren't the case, then one process could write 20 GB of data,
unlink + close the file, and that 20 GB could never be purged
from the I/O cache for as long as another process had that FD
open. That would be a pretty bad denial of service against the
memory management system.
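
A quick way to convince yourself of that semantic (a self-contained
sketch; the scratch.dat filename is made up, and a second descriptor
in one process stands in for the second process):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd1 = open("scratch.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
      int fd2 = open("scratch.dat", O_RDONLY); /* the "other process" */
      char buf[5] = "";

      write(fd1, "data", 4);
      unlink("scratch.dat");   /* the name is gone... */
      close(fd1);              /* ...but fd2 keeps the inode alive */

      /* The data is still readable, so its dirty pages are still the
       * kernel's to manage - and may be written back to the backing
       * store under memory pressure until fd2 is closed. */
      pread(fd2, buf, 4, 0);
      printf("after unlink+close: \"%s\"\n", buf);
      close(fd2);
      return 0;
  }

Only once the last descriptor goes away can the kernel drop the pages
outright; until then, writeback remains fair game.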


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


