On 09/08/2015 12:33 PM, Paolo Bonzini wrote:
On 08/09/2015 10:00, Denis V. Lunev wrote:
How does the given solution work?
If the disk-deadlines option is enabled for a drive, the completion time of
that drive's requests is monitored. The method is as follows (assume below
that this option is enabled).
Every drive has its own red-black tree for keeping track of its requests.
The expiration time of a request is the key, and the cookie (the request id)
is the corresponding node. Assume every request has 8 seconds to be
completed. If a request is not completed in time for some reason (server
crash or something else), the drive's timer fires and the callback requests
that the Virtual Machine (VM) be stopped.
The VM remains stopped until all requests from the disk that caused the stop
are completed. Furthermore, if there are other disks with 'disk-deadlines=on'
whose requests are still waiting to be completed, the VM is not restarted:
it waits for the completion of all "late" requests from all disks.
Furthermore, all requests that caused the VM to stop (or that simply were
not completed in time) can be printed with the "info disk-deadlines" qemu
monitor command.
This topic has come up several times in the past.
I agree that the current behavior is not great, but I am not sure that
timeouts are safe. For example, how is disk-deadlines=on different from
NFS soft mounts? The NFS man page says
NB: A so-called "soft" timeout can cause silent data corruption in
certain cases. As such, use the soft option only when client
responsiveness is more important than data integrity. Using NFS
over TCP or increasing the value of the retrans option may
mitigate some of the risks of using the soft option.
Note how it only says "mitigate", not solve.
Paolo
This solution is far from perfect, as there is still a race window for
request completion anyway. Nevertheless, the number of failures is
reduced by 2-3 orders of magnitude.
The behavior is similar not to soft mounts, which can corrupt the data,
but to hard mounts, which are the default AFAIR. It will not corrupt the
data and will patiently wait for the request to complete.
Without the disk, the guest is not able to serve any requests, so
keeping it running makes little sense.
This approach has been used by Odin in production for years, and with it
we were able to significantly reduce the number of end-user complaints.
We were unable to come up with any reasonable solution without guest
modification or timeout tuning.
Anyway, this code is off by default, storage-agnostic, and self-contained.
Yes, we would be able to maintain it for ourselves out-of-tree, but...
Den