From: David Gibson
Subject: Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
Date: Fri, 26 Sep 2014 12:31:12 +1000
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Sep 25, 2014 at 08:06:40PM +1000, Alexey Kardashevskiy wrote:
> On 09/25/2014 07:43 PM, Alexander Graf wrote:
> > 
> > 
> > On 25.09.14 09:02, Alexey Kardashevskiy wrote:
> >> The only case in which sPAPR NVRAM migrates now is if it is backed by
> >> a file and copy-storage migration is performed.
> >>
> >> This enables RAM copy of NVRAM even if NVRAM is backed by a file.
> >>
> >> This defines a VMSTATE descriptor for the NVRAM device so that the
> >> memory copy of NVRAM can migrate and be written to a backing file on
> >> the destination if one is provided.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <address@hidden>
> >> ---
> >>  hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
> >>  1 file changed, 59 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
> >> index 6a72ef4..254009e 100644
> >> --- a/hw/nvram/spapr_nvram.c
> >> +++ b/hw/nvram/spapr_nvram.c
> >> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
> >>          return;
> >>      }
> >>  
> >> +    assert(nvram->buf);
> >> +
> >>      membuf = cpu_physical_memory_map(buffer, &len, 1);
> >> +
> >> +    alen = len;
> >>      if (nvram->drive) {
> >>          alen = bdrv_pread(nvram->drive, offset, membuf, len);
> >> +        if (alen > 0) {
> >> +            memcpy(nvram->buf + offset, membuf, alen);
> > 
> > Why?
> 
> This way I do not need a pre_save hook, and I keep buf in sync with the
> file. If I implement pre_save, then buf will serve two purposes: it is
> either the NVRAM itself (when there is no backing file; it exists for
> the guest's whole lifetime) or it is a migration copy (which exists
> between pre_save and post_load and is then disposed of). Two quite
> different uses of the same thing confuse me. But I do not mind doing it
> your way, no big deal. Should I?
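
[For illustration, a minimal sketch of the pre_save alternative being
discussed, reusing the sPAPRNVRAM fields visible in the patch above;
this is hypothetical, not code from the posted series:

    /* buf exists only as a transient migration copy, filled from the
     * backing file just before the device state is sent. */
    static void spapr_nvram_pre_save(void *opaque)
    {
        sPAPRNVRAM *nvram = opaque;

        nvram->buf = g_malloc0(nvram->size);
        if (nvram->drive) {
            bdrv_pread(nvram->drive, 0, nvram->buf, nvram->size);
        }
    }

A matching post_load would write buf out to the destination's backing
file and then free it.]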

This approach doesn't seem quite right to me.  I don't see anything
that pulls in the whole of the nvram contents at initialization, so it
looks like the buffer will only be in sync with the drive for the
portions that are either read or written by the guest.  Then, if you
migrate while not all of the memory copy is in sync, you could clobber
the out-of-sync parts of the disk copy as well.

Instead, I think you need to suck in the whole of the contents during
init, then all reads can just be supplied from the memory buffer, and
you'll only need to access the backing disk for stores.
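
[Sketched concretely, that suggestion could look like the following at
device init time; again hypothetical, reusing the same sPAPRNVRAM
fields and the bdrv_pread() call from the patch (plus its
bdrv_pwrite() counterpart):

    /* Pull the entire NVRAM image into the RAM copy once, at init,
     * so nvram->buf is always authoritative. */
    static int spapr_nvram_load_image(sPAPRNVRAM *nvram)
    {
        nvram->buf = g_malloc0(nvram->size);

        if (nvram->drive) {
            int alen = bdrv_pread(nvram->drive, 0, nvram->buf,
                                  nvram->size);
            if (alen < 0 || (uint32_t)alen != nvram->size) {
                return alen < 0 ? alen : -EIO;
            }
        }
        return 0;
    }

rtas_nvram_fetch would then serve reads straight from nvram->buf, and
rtas_nvram_store would update buf first and mirror the change to the
drive with bdrv_pwrite(). With buf always authoritative, the VMSTATE
descriptor only needs to migrate the buffer.]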

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
