From: Denis V. Lunev
Subject: Re: [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration
Date: Tue, 8 Sep 2015 14:27:27 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 09/08/2015 02:06 PM, Kevin Wolf wrote:
On 08.09.2015 at 10:00, Denis V. Lunev wrote:
From: Raushaniya Maksudova <address@hidden>

If the disk-deadlines option is enabled for a drive, the completion time of
this drive's requests is controlled. The method is as follows (assume below
that this option is enabled).

Every drive has its own red-black tree for keeping track of its requests.
The expiration time of a request is the key, and the cookie (the id of the
request) is the corresponding node. Assume that every request has 8 seconds
to be completed. If a request was not completed in time for some reason
(server crash or something else), the timer of this drive fires and the
corresponding callback requests to stop the Virtual Machine (VM).

The VM remains stopped until all requests from the disk that caused the VM
to stop are completed. Furthermore, if there are other disks whose requests
are still waiting to be completed, do not start the VM: wait for the
completion of all "late" requests from all disks.

Signed-off-by: Raushaniya Maksudova <address@hidden>
Signed-off-by: Denis V. Lunev <address@hidden>
CC: Stefan Hajnoczi <address@hidden>
CC: Kevin Wolf <address@hidden>
+    disk_deadlines->expired_tree = true;
+    need_vmstop = !atomic_fetch_inc(&num_requests_vmstopped);
+    pthread_mutex_unlock(&disk_deadlines->mtx_tree);
+
+    if (need_vmstop) {
+        qemu_system_vmstop_request_prepare();
+        qemu_system_vmstop_request(RUN_STATE_PAUSED);
+    }
+}
What behaviour does this result in? If I understand correctly, this is
an indirect call of do_vm_stop(), which involves a bdrv_drain_all(). In
this case, qemu would completely block (including an unresponsive monitor)
until the request can complete.

Is this what you are seeing with this patch, or why doesn't the
bdrv_drain_all() call cause such effects?

Kevin
Interesting point. Yes, it flushes all requests and most likely
hangs inside, waiting for the requests to complete. But fortunately
this happens after the switch to the paused state, so
the guest becomes paused. That's why I had missed this
fact.
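
For reference, the ordering Den is referring to can be seen in the rough
shape of do_vm_stop() (paraphrased from memory of the QEMU code of that
time; QEMU-internal calls, not meant to compile standalone, and details
may differ):

static int do_vm_stop(RunState state)
{
    int ret = 0;

    if (runstate_is_running()) {
        cpu_disable_ticks();
        pause_all_vcpus();          /* guest CPUs stop here...              */
        runstate_set(state);        /* ...and the run state is already      */
        vm_state_notify(0, state);  /* "paused" before any draining happens */
        qapi_event_send_stop(&error_abort);
    }

    bdrv_drain_all();               /* waits for all in-flight requests,    */
    ret = bdrv_flush_all();         /* taking the main loop (and with it    */
                                    /* the monitor) along                   */
    return ret;
}

So the guest is visibly paused, but the monitor stays stuck inside
bdrv_drain_all() until the late request finally completes, which matches
what Kevin describes above.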

This (could) be considered a problem, but I have no (good)
solution at the moment. I should think a bit more about it.

Nice catch, though!

Den


