From: Michael Roth
Subject: [Qemu-devel] (no subject)
Date: Wed, 10 Nov 2010 19:30:55 -0600

From: Michael Roth <address@hidden>
Subject: [RFC][PATCH v3 00/11] virtagent: host/guest RPC communication agent

This set of patches is meant to be applied on top of the recently submitted 
Virtproxy v2 patchset. It can also be obtained at:

git://repo.or.cz/qemu/mdroth.git virtproxy_v2

OVERVIEW:

There are a wide range of use cases motivating the need for a guest agent of 
some sort to extend the functionality/usability/control offered by QEMU. Some 
examples include graceful guest shutdown/reboot and notifications thereof, 
copy/paste syncing between host/guest, guest statistics gathering, file access, 
etc.

Ideally these would all be served by a single, easily extensible agent that 
can be deployed in a wide range of guests. Virtagent is an XMLRPC server 
integrated into the Virtproxy guest daemon and aimed at providing this type of 
functionality.
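
To make the "XMLRPC server" idea concrete, here is a minimal sketch of how a 
getdmesg-style method could be registered using the xmlrpc-c library. This is 
not code from the patchset; the va_getdmesg, va_register_methods, and 
MAX_DMESG_LEN names are invented for illustration:

    /* Illustrative sketch only, assuming xmlrpc-c; not the patchset's code. */
    #include <stdio.h>
    #include <xmlrpc-c/base.h>
    #include <xmlrpc-c/server.h>

    #define MAX_DMESG_LEN 16384  /* made-up cap for the example */

    static xmlrpc_value *va_getdmesg(xmlrpc_env *env, xmlrpc_value *params,
                                     void *user_data)
    {
        char buf[MAX_DMESG_LEN + 1];
        size_t len;
        FILE *pipe = popen("dmesg", "r");

        if (!pipe) {
            xmlrpc_env_set_fault(env, -1, "unable to read kernel log");
            return NULL;
        }
        len = fread(buf, 1, MAX_DMESG_LEN, pipe);
        buf[len] = '\0';
        pclose(pipe);

        /* hand the log back to the caller as a single string value */
        return xmlrpc_build_value(env, "s", buf);
    }

    static void va_register_methods(xmlrpc_env *env, xmlrpc_registry *registry)
    {
        /* expose the handler under the RPC name "getdmesg" */
        xmlrpc_registry_add_method(env, registry, NULL, "getdmesg",
                                   va_getdmesg, NULL);
    }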

CHANGES IN V3:

 - Integrated virtagent invocation into virtproxy chardev. Usage examples below.
 - Consolidated RPC server/client setup into a pair of init routines
 - Fixed buffer overflow in agent_viewfile() and various memory leaks

CHANGES IN V2:

 - All RPC communication is now done using asynchronous/non-blocking read/write 
handlers
 - The previously fork()'d RPC server loop is now integrated into the 
qemu-vp/virtproxy I/O loop
 - Cleanups/suggestions from previous RFC

DESIGN:

There are actually 2 RPC servers:

1) a server in the guest integrated into qemu-vp, the Virtproxy guest daemon, 
which handles RPC requests from QEMU
2) a server in the host, integrated into the virtproxy chardev, to handle RPC 
requests sent by the guest agent (mainly for handling asynchronous events 
reported by the agent).

At the Virtagent level, communication is done via standard RPCs (HTTP between 
host and guest). Virtproxy transparently handles transport over a network or 
ISA/virtio serial channel, allowing the agent to be deployed on older guests 
which may not support virtio-serial.

Currently there are only 2 RPCs implemented for the guest server (getfile and 
getdmesg), and 0 for the host. Additional RPCs can be added fairly easily, but 
are dependent on feedback from here and elsewhere. ping/status, shutdown, and 
reboot are likely candidates (although the latter 2 will likely require 
asynchronous notifications to the host RPC server to implement reliably).
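
For a sense of the RPC model from the caller's side (again, not code from the 
patchset), a synchronous getdmesg call via xmlrpc-c's client API might look 
like the sketch below. The URL is a placeholder; in the real setup requests 
travel over the virtproxy channel rather than a plain HTTP/TCP connection:

    /* Illustrative sketch only: calling "getdmesg" with xmlrpc-c's
     * synchronous client API. The endpoint URL is a stand-in. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xmlrpc-c/base.h>
    #include <xmlrpc-c/client.h>

    int main(void)
    {
        xmlrpc_env env;
        xmlrpc_value *result;
        const char *dmesg_text;

        xmlrpc_env_init(&env);
        xmlrpc_client_init2(&env, XMLRPC_CLIENT_NO_FLAGS,
                            "getdmesg-demo", "0.1", NULL, 0);

        /* "()" means the method is invoked with an empty parameter list */
        result = xmlrpc_client_call(&env, "http://localhost:8080/RPC2",
                                    "getdmesg", "()");
        if (env.fault_occurred) {
            fprintf(stderr, "RPC failed: %s\n", env.fault_string);
            return 1;
        }

        /* the result is a single string containing the kernel log */
        xmlrpc_read_string(&env, result, &dmesg_text);
        printf("%s\n", dmesg_text);

        free((void *)dmesg_text);
        xmlrpc_DECREF(result);
        xmlrpc_env_clean(&env);
        xmlrpc_client_cleanup();
        return 0;
    }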

EXAMPLE USAGE:

The commandline options are a little convoluted right now; this will be addressed 
in later revisions.

 - Configure guest agent to talk to host via virtio-serial
    # start guest with virtio-serial/virtproxy/virtagent, for example (RHEL6rc1):
    qemu \
    -chardev virtproxy,id=test0,virtagent=on \
    -device virtio-serial \
    -device virtserialport,chardev=test0,name=virtagent0 \
    -monitor stdio
    ...
    # in the guest:
    ./qemu-vp -c virtserial-open:/dev/virtio-ports/virtagent0:- -g
    ...
    # monitor commands
    (qemu) agent_viewdmesg
    [139311.710326] wlan0: deauthenticating from 00:30:bd:f7:12:d5 by local choice (reason=3)
    [139323.469857] wlan0: deauthenticating from 00:21:29:cd:41:ee by local choice (reason=3)
    ...
    [257683.375646] wlan0: authenticated
    [257683.375684] wlan0: associate with AP 00:30:bd:f7:12:d5 (try 1)
    [257683.377932] wlan0: RX AssocResp from 00:30:bd:f7:12:d5 (capab=0x411 status=0 aid=4)
    [257683.377940] wlan0: associated
    
    (qemu) agent_viewfile /proc/meminfo
    MemTotal:        3985488 kB
    MemFree:          400524 kB
    Buffers:          220556 kB
    Cached:          2073160 kB
    SwapCached:            0 kB
    ...
    Hugepagesize:       2048 kB
    DirectMap4k:        8896 kB
    DirectMap2M:     4110336 kB

KNOWN ISSUES/PLANS:
 - the client socket that QEMU connects to for sending RPCs is a hardcoded 
filepath. This is unacceptable, as the socket is channel/process-specific and 
things will break when multiple guests are started.
 - capability negotiation will be needed to handle version/architecture 
differences.
 - proper channel negotiation is needed to avoid hung monitors and such when a 
guest reboots or the guest agent is stopped for whatever reason. Additionally, 
a timeout may need to be imposed on the amount of time the HTTP read handler 
can block the monitor.
 - additional host-to-guest RPCs as well as asynchronous notifications via 
guest-to-host RPCs for events such as shutdown/reboot/agent up/agent down




