From: Michael Roth
Subject: Re: [Qemu-devel] [RFC][PATCH v3 07/21] virtproxy: add read handler for communication channel
Date: Wed, 17 Nov 2010 15:43:59 -0600
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.2.12) Gecko/20101027 Thunderbird/3.1.6

On 11/16/2010 05:17 PM, Anthony Liguori wrote:
> Except, in virtproxy, the fact that Alice cannot talk to Joe blocks Mary
> from talking to Bob, which creates a deadlock.
>
> To be honest, there's no simple solution. This is a classic queuing
> problem. You need some form of congestion control to fix this. That
> means virtproxy needs to be able to say to Alice that Joe is not ready
> to communicate right now and then let her know when she can resume
> communication. That frees up the shared resource for Mary and Bob to use.
>
> The fundamental problem here is that virtproxy is too generic. It's
> trying to tackle too hard a problem. This isn't a problem with
> virtio-serial because virtio-serial has a dedicated queue for each port,
> which allows each port to have its own back-off mechanism.

So, after a series of half-written rebuttals and working over the proposed scenarios with the solutions I'd had in mind, I think I've finally reached the same conclusion.

The potential for deadlocks can be traced to the use, or re-implementation, of qemu-char.c's send_all(). My thought had been that since we already assume there that these send_all() calls will eventually complete, we can make similar assumptions in virtproxy. And if that assumption is not acceptable, the fix would be a general one that benefits all chardevs rather than something virtproxy-specific: namely, an async/non-blocking alternative to send_all().
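
Something along these lines is what I had pictured: a rough sketch of a send_all() alternative that uses qemu_set_fd_handler() to retry when the fd becomes writable instead of blocking. The AsyncSend/send_all_async names and the completion-callback interface are made up for illustration, and details like preserving an existing read handler on the fd are elided:

#include "qemu-common.h"
#include "qemu-char.h"
#include <errno.h>
#include <string.h>
#include <unistd.h>

typedef void (SendCompleteFunc)(void *opaque, int ret);

typedef struct AsyncSend {
    int fd;
    uint8_t *buf;
    size_t len;
    size_t pos;
    SendCompleteFunc *complete;
    void *opaque;
} AsyncSend;

static void async_send_handler(void *opaque)
{
    AsyncSend *s = opaque;
    ssize_t ret;

    ret = write(s->fd, s->buf + s->pos, s->len - s->pos);
    if (ret < 0) {
        if (errno == EAGAIN || errno == EINTR) {
            return; /* we'll be called again when the fd is writable */
        }
        qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
        s->complete(s->opaque, -errno);
        goto out;
    }

    s->pos += ret;
    if (s->pos < s->len) {
        return; /* partial write, wait for the next writable event */
    }

    /* done: stop watching for writability and notify the caller */
    qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
    s->complete(s->opaque, 0);
out:
    qemu_free(s->buf);
    qemu_free(s);
}

/* queue buf for sending and return immediately; complete() is invoked
 * from the main loop once everything is written (or on error) */
static void send_all_async(int fd, const void *buf, size_t len,
                           SendCompleteFunc *complete, void *opaque)
{
    AsyncSend *s = qemu_mallocz(sizeof(*s));

    s->fd = fd;
    s->buf = qemu_malloc(len);
    memcpy(s->buf, buf, len);
    s->len = len;
    s->complete = complete;
    s->opaque = opaque;

    qemu_set_fd_handler(fd, NULL, async_send_handler, s);
}

The caller could then queue further writes behind the completion callback, or use it to throttle its producer.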

But you're right...whereas with normal chardevs we have the option of throttling/polling for adequately-sized buffers at the device/virtio level, with virtproxy we'd need to do this for each individual communication stream, which would require additional control packets. That, or we'd have to start dropping packets bound for a process when the corresponding write handler's queue was exhausted, which is not something we're forced to resort to when each stream has a dedicated channel.

Probably a bit too much beyond the scope of virtagent... virtproxy was supposed to make things *easier* :)


> You can eliminate this problem by doing the following:
>
> 1) Have virtagent use two virtio-serial ports.


How bad is this from an end-user/deployment perspective? My first thought would be a shortcut invocation for virtio-serial guests:

./qemu -enable-virtagent

which would be equivalent to:

./qemu \
  -chardev virtagent-client \
  -chardev virtagent-server \
  -device virtio-serial \
  -device virtserialport,chardev=virtagent-client,name=org.qemu.virtagent-client \
  -device virtserialport,chardev=virtagent-server,name=org.qemu.virtagent-server

And for alternative channels they'd need to use the verbose invocation, which I think works out well: we can't guess a good index for, say, isa-serial, and even if we could, we'd still need some way to convey it to the user so they can start the agent accordingly.

And implementing virtagent-* as a simple chardev wrapper around -chardev socket lets us set up default paths for the client/server sockets QEMU uses to send/receive RPCs, as well as default chardev IDs, and gives us a place to invoke the virtagent init routines. Potentially we can also key into chardev events to infer when the guest has connected to the other end.
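
E.g., for virtio-serial the shortcut might expand to something like the following; the socket paths and the server,nowait flags here are just hypothetical defaults I'm using for illustration:

./qemu \
  -chardev socket,id=virtagent-client,path=/var/run/virtagent-client.sock,server,nowait \
  -chardev socket,id=virtagent-server,path=/var/run/virtagent-server.sock,server,nowait \
  -device virtio-serial \
  -device virtserialport,chardev=virtagent-client,name=org.qemu.virtagent-client \
  -device virtserialport,chardev=virtagent-server,name=org.qemu.virtagent-server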

Guest agent invocation would then be, for virtio-serial guests:

./virtagent

which would be equivalent to:

./virtagent \
  --client virtio-serial:/dev/virtio-ports/org.qemu.virtagent-client \
  --server virtio-serial:/dev/virtio-ports/org.qemu.virtagent-server

And for alternative channels, e.g. isa-serial, they'd have to do the manual invocation and point to the proper serial ports.

It's flexible, and the virtio-serial case is dead simple.

The major drawback is the potential scarcity of isa-serial ports: 2 out of 4 might be too much to ask for. And with a ubiquitous agent, I think sticking with isa-serial to be safe might be a common deployment strategy in a lot of environments.

> 2) Have virtagent multiplex on its own when given a single port. Yes,
> the problem still presents itself, but you've limited the communication
> scope, which means you can practically avoid any deadlocks. You only have
> two peers in the channel: qemu and virtagent. Their communication
> involves the following:
>
> QEMU->virtagent RPC
> - QEMU wants to send an RPC request. Until this is entirely completed,
> it will never allow another request to be sent
> - virtagent is waiting to receive an RPC request, it gets a packet and
> sees that it's a request
> - virtagent processes the request, and sends back a response
> - QEMU receives response, processes
>
> virtagent->QEMU RPC
> - Same as above with roles reversed
>
> The only thing you need to handle is if QEMU tries to send a request to
> virtagent while virtagent simultaneously tries to send QEMU a request.
> This is simple to handle, though, because you only allow one RPC request
> to be queued on either end at a given time. This is really the key to
> success: by the nature of the communication, we can demultiplex into
> finite buffers.

I'll look into this a bit more, but I'm hesitant to revisit a multiplexing solution at this late a stage unless requiring 2 isa-serial ports seems exceedingly prohibitive. If the design seems simple enough I'll try to work it into the next RFC, but I think 1) might be more feasible at this point. And we can still potentially work in 2) in a later release.
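
For reference, the single-port framing I'd picture for 2) is something like the sketch below. The struct layout, field sizes, and the va_process_* helpers are all made up for illustration; this is just the shape of the state machine, not existing virtagent code:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum VAPacketType {
    VA_PKT_REQUEST  = 1,
    VA_PKT_RESPONSE = 2,
} VAPacketType;

typedef struct VAPacketHeader {
    uint8_t  type; /* VAPacketType */
    uint32_t len;  /* payload length in bytes */
} VAPacketHeader;

typedef struct VAState {
    bool request_outstanding; /* at most one of our requests in flight */
} VAState;

/* hypothetical RPC plumbing, declared here just for the sketch */
void va_process_request(VAState *s, const uint8_t *payload, uint32_t len);
void va_process_response(VAState *s, const uint8_t *payload, uint32_t len);

/* Demultiplexing stays trivial because each side allows only one
 * outstanding request per direction: any RESPONSE we see must answer
 * our single in-flight REQUEST, so buffering stays finite. */
static void va_handle_packet(VAState *s, const VAPacketHeader *hdr,
                             const uint8_t *payload)
{
    switch (hdr->type) {
    case VA_PKT_REQUEST:
        /* peer's request: process it and (eventually) send a RESPONSE */
        va_process_request(s, payload, hdr->len);
        break;
    case VA_PKT_RESPONSE:
        assert(s->request_outstanding);
        s->request_outstanding = false;
        va_process_response(s, payload, hdr->len);
        break;
    }
}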


> Regards,
>
> Anthony Liguori

Thanks!

-Mike


