qemu-devel

Re: [Qemu-devel] [RFC 0/8] arm AioContext with its own timer stuff


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC 0/8] arm AioContext with its own timer stuff
Date: Wed, 31 Jul 2013 11:02:16 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Jul 29, 2013 at 10:58:40AM +0200, Kevin Wolf wrote:
> Am 26.07.2013 um 10:43 hat Stefan Hajnoczi geschrieben:
> > On Thu, Jul 25, 2013 at 07:53:33PM +0100, Alex Bligh wrote:
> > > 
> > > 
> > > --On 25 July 2013 14:32:59 +0200 Jan Kiszka <address@hidden> wrote:
> > > 
> > > >>I would happily add a QEMUClock of each type to AioContext. They
> > > >>are after all pretty lightweight.
> > > >
> > > >What's the point of adding tons of QEMUClock instances? Considering
> > > >proper abstraction, how are they different for each AioContext? Will
> > > >they run against different clock sources, start/stop at different times?
> > > >If the answer is "they have different timer list", then fix this
> > > >incorrect abstraction.
> > > 
> > > Even if I fix the abstraction, there is a question of whether it is
> > > necessary to have more than one timer list per AioContext, because
> > > the timer list is fundamentally per clock-source. I am currently
> > > just using QEMU_CLOCK_REALTIME as that's what the block drivers normally
> > > want. Will block drivers ever want timers from a different clock source?
> > 
> > block.c and block/qed.c use vm_clock because block drivers should not do
> > guest I/O while the vm is stopped.  This is especially true during live
> > migration where it's important to hand off the image file from the
> > source host to the destination host with good cache consistency.  The
> > source host is not allowed to modify the image file anymore once the
> > destination host has resumed the guest.
> > 
> > Block jobs use rt_clock because they aren't considered guest I/O.
> 
> But considering your first paragraph, why is it safe to leave block jobs
> running while we're migrating? Do we really do that? It sounds unsafe to
> me.

It is not safe:

1. Block jobs are not migration-aware and it therefore does not make
   sense to run them across live migration.

2. Running block jobs may modify the image file after the destination
   host has resumed the guest.

We simply forgot to add the check.  I'll try to send a patch for this
today.

Stefan


