
Re: [Qemu-block] [PATCH 0/9] major rework of drive-mirror

From: Denis V. Lunev
Subject: Re: [Qemu-block] [PATCH 0/9] major rework of drive-mirror
Date: Wed, 15 Jun 2016 12:34:13 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1

On 06/15/2016 12:06 PM, Kevin Wolf wrote:
Am 14.06.2016 um 17:25 hat Denis V. Lunev geschrieben:
Block commit of the active image to the backing store on a slow disk
could never end. For example with the guest with the following loop
     while true; do
         dd bs=1k count=1 if=/dev/zero of=x
     done
running above slow storage could not complete the operation within a
reasonable amount of time:
     virsh blockcommit rhel7 sda --active --shallow
     virsh qemu-monitor-event
     virsh qemu-monitor-command rhel7 \
           '{ "execute": "block-job-complete",
              "arguments": { "device": "drive-scsi0-0-0-0" } }'
     virsh qemu-monitor-event
Completion event is never received.

This problem could not be fixed easily with the current architecture. We
should either prohibit guest writes (which keep making the dirty bitmap
dirty) or switch to the synchronous scheme.

This series switches the drive mirror to the synchronous scheme. Actually
we could do something more intelligent and switch to sync mirroring only
after the first pass over the bitmap, though that could be added relatively
easily during the discussion. The most difficult parts are here.

The set also adds some performance improvements dealing with
known-to-be-zero areas.
I only read the cover letter and had a quick look at the patch doing the
actual switch, so this is by far not a real review, but I have a few
general comments anyway:

First of all, let's make sure we're all using the same terminology. In
past discussions about mirror modes, we distinguished active/passive and
synchronous/asynchronous mirroring:

* An active mirror mirrors requests immediately when they are made by
   the guest. A passive mirror just remembers that it needs to mirror
   something and does it whenever it wants.

* A synchronous mirror completes the guest request only after the data
   has successfully been written to both the live image and the target.
   An asynchronous one can complete the guest request before the mirror
   I/O has completed.

In these terms, the currently implemented mirror is a passive
asynchronous one. If I understand correctly, what you are doing in this
series is to convert it unconditionally to an active synchronous one.

The "unconditionally" part is my first complaint: The active mirror does
potentially a lot more I/O, so it's not clear that you want to use it.
This should be the user's choice. (We always intended to add an active
mirror sooner or later, but so far nobody needed it desperately enough.)
this sounds reasonable

The second big thing is that I don't want to see new users of the
notifiers in I/O functions. Let's try if we can't add a filter
BlockDriver instead. Then we'd add an option to set the filter node-name
in the mirror QMP command so that the management tool is aware of the
node and can refer to it.
this will be much more difficult to implement, in my experience. Can you
share more details on why the notifiers are bad?

If we don't do this now, we'll have to introduce it later and can't be
sure that the management tool knows about it. This would complicate
things quite a bit because we would have to make sure that the added
node stays invisible to the management tool.

I think these two things are the big architectural questions. The rest
is hopefully more or less implementation details.
I completely agree with you.

We have the following choices:
1. implement a parameter to use 'active'/'passive' mode from the very beginning
2. switch to 'active' mode upon receiving the "block-job-complete" command unconditionally
3. switch to 'active' mode upon receiving the "block-job-complete" command with a proper parameter
4. switch to 'active' mode after a timeout (I personally do not like this option)

I think that choices 1 and 3 do not contradict each other and
could be implemented together.
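For choice 1, the mode could surface as one extra optional parameter on the mirror QMP command, in the style of the virsh invocations earlier in the thread. The parameter name and value below are purely hypothetical placeholders, not an existing interface:

```
virsh qemu-monitor-command rhel7 \
      '{ "execute": "drive-mirror",
         "arguments": { "device": "drive-scsi0-0-0-0",
                        "target": "/path/to/mirror.qcow2",
                        "sync": "top",
                        "copy-mode": "write-blocking" } }'
```

Choice 3 would then be the same knob on "block-job-complete" instead, so a management tool could start passive and go active only for convergence.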

