Re: [Qemu-devel] [PATCH 0/9] major rework of drive-mirror


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 0/9] major rework of drive-mirror
Date: Wed, 15 Jun 2016 11:06:47 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 14.06.2016 at 17:25, Denis V. Lunev wrote:
> Block commit of the active image to the backing store on a slow disk
> may never end. For example, a guest with the following loop
> inside
>     while true; do
>         dd bs=1k count=1 if=/dev/zero of=x
>     done
> running on top of slow storage could not complete the operation within a
> reasonable amount of time:
>     virsh blockcommit rhel7 sda --active --shallow
>     virsh qemu-monitor-event
>     virsh qemu-monitor-command rhel7 \
>         '{"execute":"block-job-complete",\
>           "arguments":{"device":"drive-scsi0-0-0-0"} }'
>     virsh qemu-monitor-event
> The completion event is never received.
> 
> This problem cannot be fixed easily with the current architecture. We
> should either prohibit guest writes (which keep making the dirty bitmap
> dirty) or switch to a synchronous scheme.
> 
> This series switches drive-mirror to a synchronous scheme. Actually we
> could do something more intelligent and switch to sync mirroring only
> after the first pass over the bitmap, but that could be added relatively
> easily during the discussion. The most difficult parts are here.
> 
> The set also adds some performance improvements dealing with
> known-to-be-zero areas.

I only read the cover letter and had a quick look at the patch doing the
actual switch, so this is not a real review by any means, but I have a few
general comments anyway:


First of all, let's make sure we're all using the same terminology. In
past discussions about mirror modes, we distinguished active/passive and
synchronous/asynchronous.

* An active mirror mirrors requests immediately when they are made by
  the guest. A passive mirror just remembers that it needs to mirror
  something and does it whenever it wants.

* A synchronous mirror completes the guest request only after the data
  has successfully been written to both the live image and the target.
  An asynchronous one can complete the guest request before the mirror
  I/O has completed.

In these terms, the currently implemented mirror is a passive
asynchronous one. If I understand correctly, what you are doing in this
series is to convert it unconditionally to an active asynchronous one.
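
To make the distinction concrete, here is a small self-contained toy model
(plain C, nothing from the QEMU code base; the buffers and function names
are purely illustrative) of how a guest write would be handled under each
scheme:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define DISK_SIZE 16

    static uint8_t live[DISK_SIZE];    /* stands in for the live image */
    static uint8_t target[DISK_SIZE];  /* stands in for the mirror target */
    static bool dirty[DISK_SIZE];      /* stands in for the dirty bitmap */

    /* Passive: the guest write only dirties the bitmap; the mirror job
     * copies the data over later, whenever it gets around to it.  Since
     * the guest request completes before the target is written, this is
     * also asynchronous. */
    static void passive_guest_write(int ofs, int len, const uint8_t *buf)
    {
        memcpy(live + ofs, buf, len);
        for (int i = 0; i < len; i++) {
            dirty[ofs + i] = true;
        }
    }

    /* One iteration of the passive mirror job over the dirty bitmap. */
    static void passive_mirror_iteration(void)
    {
        for (int i = 0; i < DISK_SIZE; i++) {
            if (dirty[i]) {
                target[i] = live[i];
                dirty[i] = false;
            }
        }
    }

    /* Active: the same request is mirrored immediately.  Because this toy
     * version also waits for the target copy before returning, it is
     * synchronous on top of that; an active asynchronous mirror would
     * issue the target write without waiting for its completion. */
    static void active_sync_guest_write(int ofs, int len, const uint8_t *buf)
    {
        memcpy(live + ofs, buf, len);
        memcpy(target + ofs, buf, len);
    }

    int main(void)
    {
        uint8_t data[4] = { 1, 2, 3, 4 };

        passive_guest_write(0, 4, data);
        printf("passive, before job iteration: target[0] = %d\n", target[0]);
        passive_mirror_iteration();
        printf("passive, after job iteration:  target[0] = %d\n", target[0]);

        active_sync_guest_write(8, 4, data);
        printf("active/sync:                   target[8] = %d\n", target[8]);
        return 0;
    }

A guest that keeps writing can keep the passive job's dirty bitmap populated
forever, which is exactly the convergence problem described in the cover
letter; the active variant cannot fall behind.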


The "unconditionally" part is my first complaint: The active mirror does
potentially a lot more I/O, so it's not clear that you want to use it.
This should be the user's choice. (We always intended to add an active
mirror sooner or later, but so far nobody needed it desperately enough.)


The second big thing is that I don't want to see new users of the
notifiers in I/O functions. Let's see whether we can add a filter
BlockDriver instead. Then we'd add an option to set the filter node-name
in the mirror QMP command so that the management tool is aware of the
node and can refer to it.
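
Just to sketch the direction (this is not an implementation; the driver
name, the MirrorFilterState layout and the simplified prototypes are all
made up for illustration), the write path of such a filter driver could
look roughly like this:

    /* Hypothetical per-node state of the filter; "target" would be the
     * BdrvChild pointing at the mirror target. */
    typedef struct MirrorFilterState {
        BdrvChild *target;
    } MirrorFilterState;

    /* Guest writes arrive here because the filter node sits on top of the
     * source node: forward the data to the source first, then mirror it
     * to the target before completing the request. */
    static int coroutine_fn mirror_filter_co_pwritev(BlockDriverState *bs,
        uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
    {
        MirrorFilterState *s = bs->opaque;
        int ret;

        ret = bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
        if (ret < 0) {
            return ret;
        }
        return bdrv_co_pwritev(s->target, offset, bytes, qiov, flags);
    }

    static BlockDriver bdrv_mirror_filter = {
        .format_name     = "mirror-filter",   /* made-up name */
        .instance_size   = sizeof(MirrorFilterState),
        .bdrv_co_pwritev = mirror_filter_co_pwritev,
        /* reads would be passed straight through to bs->file */
    };

The mirror QMP command would then gain something like a user-specified
node-name for this filter node, so that the management tool sees the extra
node in query-named-block-nodes and knows that it belongs to the mirror job.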

If we don't do this now, we'll have to introduce it later and can't be
sure that the management tool knows about it. This would complicate
things quite a bit because we would have to make sure that the added
node stays invisible to the management tool.


I think these two things are the big architectural questions. The rest
is hopefully more or less implementation details.

Kevin


