qemu-devel

Re: [PATCH 2/2] util/qemu-sockets: make keep-alive enabled by default


From: Markus Armbruster
Subject: Re: [PATCH 2/2] util/qemu-sockets: make keep-alive enabled by default
Date: Thu, 09 Jul 2020 13:40:19 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)

"Denis V. Lunev" <den@openvz.org> writes:

> On 7/9/20 11:29 AM, Daniel P. Berrangé wrote:
>> On Wed, Jul 08, 2020 at 10:15:39PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>> Keep-alive won't hurt, let's try to enable it even if not requested by
>>> user.
>> Keep-alive intentionally breaks TCP connections earlier than normal
>> in face of transient networking problems.
>>
>> The question is more about which type of pain is more desirable. A
>> stall in the network connection (for a potentially very long time),
>> or an intentionally broken socket.
>>
>> I'm not at all convinced it is a good idea to intentionally break
>> /all/ QEMU sockets in the face of transient problems, even if the
>> problems last for 2 hours or more. 
>>
>> I could see keep-alives being ok on some QEMU socket. For example
>> VNC/SPICE clients, as there is no downside to proactively culling
>> them as they can trivially reconnect. Migration too is quite
>> reasonable to use keep alives, as you generally want migration to
>> run to completion in a short amount of time, and aborting migration
>> needs to be safe no matter what.
>>
>> Breaking chardevs or block devices or network devices that use
>> QEMU sockets though will be disruptive. The only solution once
>> those backends have a dead socket is going to be to kill QEMU
>> and cold-boot the VM again.
>
> Nope, and this is exactly what we are trying to achieve.
>
> Let us assume that QEMU NBD is connected to the
> outside world, e.g. to some HA service running in
> another virtual machine. Once that far-away VM
> dies, it is restarted on some other host
> with the same IP.
>
> QEMU NBD is able to reconnect to that same
> endpoint, and the process is transparent to the guest.
>
> This is the workflow we are trying to improve.
>
> Anyway, sitting on a dead socket is simply not
> productive. This is like NFS hard and
> soft mounts: in the hypervisor world, hard mounts
> (the default before this patch) lead to various
> undetectable deadlocks, which is why we are proposing
> "soft" behaviour as the default.
>
> It should also be noted that this is more consistent:
> today we may either get an error when writing
> to the dead socket OR hang forever, so the
> problem is already possible in the current state.
> With the new settings we would observe
> the problem consistently.
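
For context, the "2 hours or more" mentioned above matches the Linux keep-alive
defaults: an idle connection is only probed after tcp_keepalive_time (7200 s),
then up to tcp_keepalive_probes (9) probes are sent tcp_keepalive_intvl (75 s)
apart, so a dead peer is detected after roughly 7200 + 9 * 75 s. A minimal
sketch of enabling and tuning keep-alive on a plain POSIX socket follows; it is
illustrative only, not the code from util/qemu-sockets, and the timer values
are made-up examples:

    /* Illustrative only: enable and tune TCP keep-alive on a connected
     * socket fd.  Not QEMU's actual util/qemu-sockets code; the timer
     * values below are example numbers. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static int enable_keepalive(int fd)
    {
        int on = 1;     /* turn keep-alive probing on */
        int idle = 30;  /* start probing after 30 s of idleness */
        int intvl = 10; /* send a probe every 10 s */
        int cnt = 5;    /* give up (ETIMEDOUT) after 5 unanswered probes */

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
            return -1;
        }
    #ifdef __linux__
        /* Without these, the kernel defaults apply: 7200 s idle time,
         * 75 s interval, 9 probes -- i.e. the "2 hours or more" above. */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
    #endif
        return 0;
    }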

Daniel's point remains valid: keep-alive makes sense only for sockets
where we can recover from connection breakage.  When graceful recovery
is impossible, we shouldn't aggressively break unresponsive connections,
throwing away the chance (however slim) of them becoming responsive
again.
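
To make the distinction concrete: a client that can recover, such as the NBD
reconnect case described above, treats a keep-alive timeout like any other
connection failure and simply reconnects, while a backend with no reconnect
logic is left holding a dead socket. A rough sketch of the recoverable case,
using hypothetical connect_to_server() and process_one_request() helpers
rather than QEMU's real NBD client code:

    /* Hypothetical reconnecting client loop; connect_to_server() and
     * process_one_request() are stand-ins, not real QEMU functions. */
    #include <unistd.h>

    int connect_to_server(void);      /* returns a connected fd, or -1 */
    int process_one_request(int fd);  /* returns 0 on success, -1 on error */

    static void serve_with_reconnect(void)
    {
        int fd = connect_to_server();

        for (;;) {
            if (fd >= 0 && process_one_request(fd) == 0) {
                continue;
            }
            /* A keep-alive-broken socket surfaces here as a failed
             * read/write (e.g. ETIMEDOUT), exactly like any other
             * broken connection, so the same recovery path applies. */
            if (fd >= 0) {
                close(fd);
            }
            sleep(1);
            fd = connect_to_server();
        }
    }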



