Re: [Qemu-devel] top(1) utility implementation in QEMU


From: Fam Zheng
Subject: Re: [Qemu-devel] top(1) utility implementation in QEMU
Date: Thu, 29 Sep 2016 10:45:30 +0800
User-agent: Mutt/1.7.0 (2016-08-17)

On Mon, 09/26 17:28, Daniel P. Berrange wrote:
> On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
> > Hi All,
> > 
> > Summary of the discussion and the different approaches we had on IRC
> > regarding a top(1) tool in QEMU:
> > 
> > Implement unique naming for all event loop resources.  Sometimes a
> > string literal can be used but other times the unique name needs to be
> > generated at runtime (e.g. filename for an fd).
> > 
> > Approach 1)
> > For a built-in QMP implementation:
> > We have callbacks from fds, BHs and timers, so every time one of them is
> > registered we add it to the list (what we see through QMP), and when it
> > is unregistered we remove it from the list.
> > Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
> > will remove the fd from the list.
> > 
> > QMP API:
> > set-event-loop-profiling enable=on/off [interval=seconds] [iothread=name],
> > which emits a QMP event with [{name, counter, time_elapsed}]
> > 
> > Pros:
> > It works on all systems.
> > Cons:
> > Information present inside glib is exposed only via SystemTap tracing -
> > it will not be available via QMP.
> > For example: I/O in chardevs, network I/O, etc.
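
To make the bookkeeping concrete, here is a rough standalone sketch of what
the per-resource accounting in approach 1 could look like (all names below
are made up for illustration; none of them are existing QEMU APIs):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct LoopResource {
    char *name;                /* unique name, e.g. "fd:/tmp/vm.sock" */
    uint64_t fire_count;       /* how many times the callback ran */
    uint64_t busy_ns;          /* total time spent in the callback */
    struct LoopResource *next;
} LoopResource;

static LoopResource *resources;

/* Called when an fd/BH/timer is registered. */
static LoopResource *resource_register(const char *name)
{
    LoopResource *r = calloc(1, sizeof(*r));

    r->name = strdup(name);
    r->next = resources;
    resources = r;
    return r;
}

/* Called by the dispatch code around each callback invocation. */
static void resource_account(LoopResource *r, uint64_t elapsed_ns)
{
    r->fire_count++;
    r->busy_ns += elapsed_ns;
}

/* Called when the resource is unregistered, e.g. from
 * aio_set_fd_handler(fd, NULL, NULL, NULL). */
static void resource_unregister(LoopResource *r)
{
    LoopResource **p;

    for (p = &resources; *p; p = &(*p)->next) {
        if (*p == r) {
            *p = r->next;
            free(r->name);
            free(r);
            return;
        }
    }
}

The QMP side would then just walk this list and emit the
[{name, counter, time_elapsed}] event at the requested interval.
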
> 
> 
> There are other downsides to the QMP approach:
> 
>  - Emitting data via QMP will change the behaviour of the system
>    itself, since QMP will trigger usage of the main event loop
>    which is the thing being traced. The degree of disturbance
>    will depend on the interval for emitting events

Yes, but compared to a guest that is busy enough to be analyzed with qemu-top,
I don't think this can be a high degree, even if it's at a few events per second.

> 
>  - If the interval is small and you're monitoring more than one
>    guest at a time, then the overhead of QMP could start to get
>    quite significant across the host as a whole. This was
>    mentioned at the summit w.r.t. the existing I/O stats exposed by
>    QEMU for block / net device backends.

qemu-top is supposed to run only in the foreground while a human is watching,
so I'm not concerned about the system-wide overall overhead.

> 
>  - The 'top' tool does not actually have direct access to
>    QMP for any libvirt guests and we're unlikely to want to
>    expose such QMP events via libvirt in any kind of supported
>    API, as they're very use-case specific in design. So at best
>    the app would have to use libvirt QMP passthrough which is
>    acceptable for developer / test environments, but not
>    something that's satisfactory for production deployments.

Just another idea: my original thought on how to send statistics to 'qemu-top'
was a specialized channel, like a socket with a minimal protocol (e.g. a
mini-QMP with only whitelisted commands, or an event-only QMP, or simply an
ad-hoc format).
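
For instance, the ad-hoc format could be as dumb as one text line per resource
per interval, written to a UNIX socket that qemu-top connects to. A rough
sketch (again, invented names; nothing of this exists today):

#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

/* Emit one "name counter busy_ns" line per resource on each tick.  fd is
 * a socket that qemu-top connected to; best effort only, a slow reader
 * must not be allowed to block the event loop. */
static void stats_emit_line(int fd, const char *name,
                            uint64_t counter, uint64_t busy_ns)
{
    char buf[256];
    int len = snprintf(buf, sizeof(buf), "%s %" PRIu64 " %" PRIu64 "\n",
                       name, counter, busy_ns);

    if (len > 0 && len < (int)sizeof(buf)) {
        (void)write(fd, buf, len);
    }
}

Parsing that in qemu-top is trivial, and it keeps the whole thing outside of
QMP and the monitor.
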

> 
> > Approach 2)
> > Using Trace:
> > Add a trace event for each type of event loop resource (timer, fd, BH,
> > etc.) in order to see when a resource fires.
> > Write a top(1)-like SystemTap script to get the data from the trace backend.
> > 
> > Pros:
> > No performance overhead using trace
> 
> Nothing is zero overhead, but more specifically it would avoid
> the problem of the "top" tool data transport interfering with
> the very data it is trying to measure from the event loop.
> 
> It also makes it easier to pull in data from other sources. For example
> you don't need to extend QMP for each new bit of internal state/data
> that the top tool wants access to. You can get access to data that
> QEMU doesn't have, such as in glib, or even in the kernel.

I'm optimistic that this can be done with SystemTap only: once the trace
events are there, the script shouldn't be complicated at all, and it will be
useful anyway because of the glib advantage. Probably something worth doing
anyway?
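
To illustrate what I mean by "once the trace events are there": the QEMU side
would only need a pair of events around the dispatch of each resource, roughly
like this (the trace_* names below are made up; in QEMU they would be
generated from new entries in the trace-events file):

#include <stdint.h>
#include <time.h>

/* Stand-ins for the generated trace points. */
static void trace_event_loop_dispatch_begin(const char *name)
{
    (void)name;
}

static void trace_event_loop_dispatch_end(const char *name, uint64_t ns)
{
    (void)name;
    (void)ns;
}

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Wrap each fd/BH/timer callback so a SystemTap script can aggregate
 * per-name fire counts and busy time. */
static void dispatch_resource(const char *name,
                              void (*cb)(void *opaque), void *opaque)
{
    uint64_t start = now_ns();

    trace_event_loop_dispatch_begin(name);
    cb(opaque);
    trace_event_loop_dispatch_end(name, now_ns() - start);
}

The SystemTap script then only has to probe those two events and keep
per-name counters, which is why I don't expect it to be complicated.
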

> 
> > 
> > Cons:
> > The data available from tracing depends on the trace backend that QEMU
> > is configured with.
> > It depends on the availability of SystemTap and is backend-specific.
> > 
> > Approach 3)
> > Use Trace and extract trace backend data through QMP

Like Daniel, I don't think this makes much sense.

> > 
> > Pros:
> > No performance overhead using trace
> 
> Not sure why you're claiming that - anything that feeds trace
> data over QMP is going to have a potentially significant effect
> as it'll send traffic through the event loop, which is what is
> being analysed.
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> 

Fam


