Re: [Qemu-devel] [RFC][PATCH v3 07/21] virtproxy: add read handler for communication channel


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC][PATCH v3 07/21] virtproxy: add read handler for communication channel
Date: Tue, 16 Nov 2010 17:17:42 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101027 Lightning/1.0b1 Thunderbird/3.0.10

On 11/15/2010 07:16 PM, Michael Roth wrote:
Handle data coming in over the channel as VPPackets: Process control
messages and forward data from remote client/server connections to the
appropriate server/client FD on our end. We also provide here a helper
function to process a stream of packets from the channel.

Signed-off-by: Michael Roth <address@hidden>
---
  virtproxy.c |   96 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  virtproxy.h |    3 ++
  2 files changed, 99 insertions(+), 0 deletions(-)

diff --git a/virtproxy.c b/virtproxy.c
index 770b57b..091a223 100644
--- a/virtproxy.c
+++ b/virtproxy.c
@@ -33,6 +33,7 @@
  #define VP_SERVICE_ID_LEN 32    /* max length of service id string */
  #define VP_PKT_DATA_LEN 1024    /* max proxied bytes per VPPacket */
  #define VP_CONN_DATA_LEN 1024   /* max bytes conns can send at a time */
+#define VP_CHAN_DATA_LEN 4096   /* max bytes channel can send at a time */
  #define VP_MAGIC 0x1F374059

  /* listening fd, one for each service we're forwarding to remote end */
@@ -152,6 +153,8 @@ static QemuOptsList vp_socket_opts = {
      },
  };

+static void vp_channel_read(void *opaque);
+
  static int vp_channel_send_all(VPDriver *drv, uint8_t *buf, int count)
  {
      int ret;
@@ -263,3 +266,96 @@ static void vp_channel_accept(void *opaque)
      /* don't accept any more connections until channel_fd is closed */
      vp_set_fd_handler(drv->listen_fd, NULL, NULL, NULL);
  }
+
+/* process a stream of packets coming in from the channel */
+int vp_handle_packet_buf(VPDriver *drv, const void *buf, int count)
+{
+    VPPacket pkt;
+    int ret, buf_offset;
+    char *pkt_ptr;
+    const char *buf_ptr;
+
+    if (drv->buflen + count >= sizeof(VPPacket)) {
+        TRACE("initial packet, drv->buflen: %d", drv->buflen);
+        pkt_ptr = (char *)&pkt;
+        memcpy(pkt_ptr, drv->buf, drv->buflen);
+        pkt_ptr += drv->buflen;
+        memcpy(pkt_ptr, buf, sizeof(VPPacket) - drv->buflen);
+        /* handle first packet */
+        ret = vp_handle_packet(drv, &pkt);
+        if (ret != 0) {
+            LOG("error handling packet");
+        }
+        /* handle the rest of the buffer */
+        buf_offset = sizeof(VPPacket) - drv->buflen;
+        drv->buflen = 0;
+        buf_ptr = buf + buf_offset;
+        count -= buf_offset;
+        while (count > 0) {
+            if (count >= sizeof(VPPacket)) {
+                /* handle full packet */
+                TRACE("additional packet, drv->buflen: %d", drv->buflen);
+                memcpy((void *)&pkt, buf_ptr, sizeof(VPPacket));
+                ret = vp_handle_packet(drv, &pkt);
+                if (ret != 0) {
+                    LOG("error handling packet");
+                }
+                count -= sizeof(VPPacket);
+                buf_ptr += sizeof(VPPacket);
+            } else {
+                /* buffer the remainder */
+                TRACE("buffering packet, drv->buflen: %d", drv->buflen);
+                memcpy(drv->buf, buf_ptr, count);
+                drv->buflen = count;
+                break;
+            }
+        }
+    } else {
+        /* haven't got a full VPPacket yet, buffer for later */
+        TRACE("buffering packet, drv->buflen: %d", drv->buflen);
+        memcpy(drv->buf + drv->buflen, buf, count);
+        drv->buflen += count;
+    }
+    return 0;
+}
+
+/* read handler for communication channel
+ *
+ * de-multiplexes data coming in over the channel. for control messages
+ * we process them here, for data destined for a service or client we
+ * send it to the appropriate FD.
+ */
+static void vp_channel_read(void *opaque)
+{
+    VPDriver *drv = opaque;
+    int count, ret;
+    char buf[VP_CHAN_DATA_LEN];
+
+    TRACE("called with opaque: %p", drv);
+
+    count = read(drv->channel_fd, buf, sizeof(buf));
+
+    if (count == -1) {

I think you notice this later in your series, but this is the first indication that something is fundamentally wrong.

In a tool like this, you should be checking for EAGAIN everywhere and handling it gracefully, but instead you assume you have synchronous fds everywhere. This creates the following scenario:

In the host we have process Alice and process Mary. They both communicate with process Bob and process Joe on the guest respectively through virtproxy.

Bob and Joe send packets to virtproxy, and virtproxy combines the packets into a single channel which it then sends to QEMU for QEMU to decode and send back out to Alice and Mary. Classic mux/demux.

Bob's communication with Alice is totally independent of Mary's communication with Joe.

However, because you multiplex synchronously, if virtproxy tries to send a packet from Mary to Joe and the socket buffer between virtproxy and Joe is full, then virtproxy blocks. This means that if Alice tries to send a message to Bob, the fact that Joe is not responding blocks their communication, creating an artificial dependency between the two communication channels.

It's not as simple as going async and queueing, because you can't queue in memory indefinitely; at some point you simply have to stop reading packets.

Now imagine that Joe's socket buffer is full because he's talking to Bob through some other mechanism. Joe being blocked on Bob is no problem, because Bob has nothing to do with Mary and Joe's communication stream. Eventually, Bob will get Joe what he needs, and that will unblock Joe and let Mary's communication with Joe continue.

Now imagine that Bob is blocking Joe because he's waiting on a message from Alice. This should be okay: Alice's communication with Bob *should* be independent of anything else, and as soon as Alice sends the message to Bob, Bob can talk to Joe, and Joe can talk to Mary.

Except, in virtproxy, the fact that Mary cannot talk to Joe blocks Alice from talking to Bob, which creates a deadlock.
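At a minimum, though, every fd here needs to be non-blocking, with EAGAIN handled at each read/write site. Something like this untested sketch, where vp_queue_data() is a hypothetical helper (not something in your series) that would buffer the unsent tail and flush it later from a write handler:

#include <errno.h>
#include <stdint.h>
#include <unistd.h>

/* hypothetical: buffers the unsent tail and re-arms a write handler
 * to flush it once the fd drains */
static void vp_queue_data(int fd, const uint8_t *buf, int count);

static int vp_write_nonblock(int fd, const uint8_t *buf, int count)
{
    int total = 0;

    while (total < count) {
        ssize_t ret = write(fd, buf + total, count - total);
        if (ret == -1) {
            if (errno == EINTR) {
                continue;       /* interrupted, just retry */
            }
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                /* fd is full: queue the remainder rather than
                 * stalling the shared channel */
                vp_queue_data(fd, buf + total, count - total);
                return total;
            }
            return -1;          /* hard error */
        }
        total += ret;
    }
    return total;
}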

To be honest, there's no simple solution. This is a classic queueing problem. You need some form of congestion control to fix this. That means virtproxy needs to be able to tell Mary that Joe is not ready to communicate right now, and then let her know when she can resume. That frees up the shared resource for Alice and Bob to use.
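Concretely, that could mean per-connection flow-control packets over the same channel. Purely illustrative, none of these opcodes exist in your series:

#include <stdint.h>

/* illustrative only: per-connection flow control virtproxy could grow */
typedef enum VPFlowOp {
    VP_FLOW_XOFF = 1,   /* destination fd is full: stop reading from
                         * the matching local fd for this connection */
    VP_FLOW_XON  = 2,   /* destination fd drained: resume reading */
} VPFlowOp;

typedef struct VPFlowMsg {
    uint32_t conn_id;   /* which proxied connection this applies to */
    uint32_t op;        /* VP_FLOW_XOFF or VP_FLOW_XON */
} VPFlowMsg;

On XOFF, the far end would drop just that connection's read handler (vp_set_fd_handler(conn_fd, NULL, NULL, NULL), as you already do for the listen fd), so Mary's stalled stream stops consuming the channel while Alice and Bob keep flowing; XON re-installs it.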

The fundamental problem here is that virtproxy is too generic. It's trying to tackle too hard a problem. This isn't a problem with virtio-serial, because virtio-serial has a dedicated queue for each port, which allows each port to have its own back-off mechanism.

You can eliminate this problem by doing the following:

1) Have virtagent use two virtio-serial ports.

2) Have virtagent multiplex on its own when given a single port. Yes, the problem still presents itself, but you've limited the communication scope, which means you can practically avoid any deadlocks. You only have two peers in the channel: QEMU and virtagent. Their communication involves the following:

QEMU->virtagent RPC
- QEMU wants to send an RPC request. Until this is entirely completed, it will never allow another request to be sent.
- virtagent is waiting to receive an RPC request; it gets a packet and sees that it's a request
- virtagent processes the request, and sends back a response
- QEMU receives the response and processes it

virtagent->QEMU RPC
- Same as above with roles reversed

The only thing you need to handle is the case where QEMU tries to send a request to virtagent while virtagent simultaneously tries to send QEMU a request. This is simple to handle, though, because you only ever allow one RPC request to be queued on either end at a given time. This is really the key to success: by the nature of the communication, we can demultiplex into finite buffers.
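In code terms, the state either end needs is trivially finite. A rough sketch (names are made up for illustration; VPPacket as in your patch):

#include <stdbool.h>

typedef struct VARpcEndpoint {
    bool request_outstanding;   /* we sent a request, no response yet */
    bool peer_request_queued;   /* peer's request arrived mid-RPC */
    VPPacket peer_request;      /* the single queued peer request */
} VARpcEndpoint;

/* hypothetical helpers */
static bool va_pkt_is_response(const VPPacket *pkt);
static void va_process_response(const VPPacket *pkt);
static void va_process_request(const VPPacket *pkt);

static void va_demux(VARpcEndpoint *ep, const VPPacket *pkt)
{
    if (va_pkt_is_response(pkt)) {
        ep->request_outstanding = false;
        va_process_response(pkt);
    } else if (ep->request_outstanding) {
        /* simultaneous requests: queue exactly one, never more */
        ep->peer_request = *pkt;
        ep->peer_request_queued = true;
    } else {
        va_process_request(pkt);
    }
}

All buffering is bounded by construction, which is exactly what the generic virtproxy mux can't guarantee.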

Regards,

Anthony Liguori


