[Qemu-devel] Re: [PATCH 04/22] savevm: do_loadvm(): Always resume the VM

From: Juan Quintela
Subject: [Qemu-devel] Re: [PATCH 04/22] savevm: do_loadvm(): Always resume the VM
Date: Wed, 21 Apr 2010 10:36:29 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)

Luiz Capitulino <address@hidden> wrote:
> On Tue, 20 Apr 2010 23:28:23 +0200
> Juan Quintela <address@hidden> wrote:
>> Luiz Capitulino <address@hidden> wrote:
>> > do_loadvm(), which implements the 'loadvm' Monitor command, pauses
>> > the emulation to load the saved VM, however it will only resume
>> > it if the loading succeeds.
>> >
>> > In other words, if the user issues 'loadvm' and it fails, the
>> > end result will be an error message and a paused VM.
>> >
>> > This seems an undesirable side effect to me because, most of the
>> > time, if a Monitor command fails the best thing we can do is to
>> > leave the VM as it were before the command was executed.
>> >
>> > FIXME: This will try to run a potentially corrupted image, the
>> >        solution is to split load_vmstate() in two and only keep
>> >        the VM paused if qemu_loadvm_state() fails.
>> Any of the other errors in loadvm also requires you not to load the
>> state.
>  Really? Everything that happens before qemu_fopen_bdrv() seems to be
> only looking for the snapshot..

Let's see:

    bs = get_bs_snapshots();
    if (!bs) {
        error_report("No block device supports snapshots");
        return -EINVAL;
    }
//// If we are asked to load a VM and there are no snapshots on any disk,
//// trying to run the image looks like overkill

    /* Flush all IO requests so they don't interfere with the new state.  */

    QTAILQ_FOREACH(dinfo, &drives, next) {
        bs1 = dinfo->bdrv;
        if (bdrv_has_snapshot(bs1)) {

/// We found a device that has snapshots
            ret = bdrv_snapshot_goto(bs1, name);
            if (ret < 0) {
/// ... but it doesn't have a snapshot with the name that we wanted
                switch(ret) {
                case -ENOTSUP:
                    error_report("%sSnapshots not supported on device '%s'",
                                 bs != bs1 ? "Warning: " : "",
                                 bdrv_get_device_name(bs1));
                    break;
                case -ENOENT:
                    error_report("%sCould not find snapshot '%s' on device '%s'",
                                 bs != bs1 ? "Warning: " : "",
                                 name, bdrv_get_device_name(bs1));
                    break;
                default:
                    error_report("%sError %d while activating snapshot on '%s'",
                                 bs != bs1 ? "Warning: " : "",
                                 ret, bdrv_get_device_name(bs1));
                    break;
                }
                /* fatal on snapshot block device */
// I think an unconditional exit with prejudice could be in order here

// Notice that bdrv_snapshot_goto() modifies the disk; this is as bad as
// it can get.  It just opens the disk, opens the snapshot, increases
// its count of users, and makes it available for use after this point
// (i.e. loading state, possibly conflicting with the previously running
// VM, a.k.a. disk corruption).

                if (bs == bs1)
                    return 0;

// This error is as bad as it can get :(  We have to load a vmstate,
// and the disk that should have the memory image doesn't have it.
// This is an error; I just put the wrong number the previous time.
// Notice that this error should be very rare.

As stated, I don't think that trying to run the machine at any point
would make sense.  The only case where it is safe to run it is when the
failure is at get_bs_snapshots(), but even at that point:

<something happens>
$ loadvm other_image
  Error: snapshot "other_image" doesn't exist.

running the previous VM looks like something that should be done
explicitly.  If the error happened after get_bs_snapshots(), we would
need a new flag to just refuse to continue.  The only valid operations
at that point are other loadvm operations, i.e. our state is wrong one
way or another.

>  My understanding is that the loading only happens in qemu_loadvm_state(),
> is this wrong?

As stated in the description, that split doesn't make sense.  In any
case, what we need is the new flag to disallow run operations other
than loadvm.

Later, Juan.
