From: Laszlo Ersek
Subject: Re: [Qemu-devel] [PATCH 1/2] virtio-serial: report frontend connection state via monitor
Date: Fri, 30 May 2014 13:44:38 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 05/29/14 22:57, Eric Blake wrote:
> On 05/29/2014 02:36 PM, Laszlo Ersek wrote:
> 
>> Emitting the events unconditionally shouldn't hurt anything, I think. I
>> added the property for two reasons:
>>
>> - I was sure someone would ask for it. :)
> 
> Someone still might :)  But it may be a premature optimization.
> 
>>
>> - The property is device-level. The example I gave in the blurb uses
>> -global with qemu:arg for simplicity, but the intent is that libvirt set
>> it only for the one (or few) virtio-serial port(s) where it actually
>> intends to communicate with the guest agent(s). Libvirt can set up
>> several virtio-serial ports (to be read & written by other programs on
>> the host side via their respective chardevs (*)), and if libvirt doesn't
>> connect to those, then qemu shouldn't spam it with uninteresting events.
>> (If libvirt does set the property for more than one virtio-serial port,
>> then it can distinguish the ports by the ids carried in the events.)
> 
> Yes, libvirt would be matching the id carried in the event message
> before deciding whether to act or ignore an event, particularly if the
> events are unconditional.
> 
>>
>> (*) In fact I regularly use this feature: it lets me hack on qga and
>> test it from the host side (with "socat" or otherwise), without libvirt
>> "interfering" with the traffic, but nonetheless setting up the unix
>> domain socket for me, which is nice.
> 
> You do make a good point that turning events on or off per-device does
> some filtering and thus reduces the workload of libvirt - but a
> micro-optimization may not be worth the complexity since the event
> doesn't happen often in the common case (in two senses - how common is
> it for someone to plug in a channel with libvirt other than the monitor
> or guest agent; and how common is it for a guest to open/close a virtio
> serial device).

OK, I'll drop the property (unless someone asks for it before v2).

>>
>>>
>>> Also, since these events are triggered in relation to guest activity, I
>>> think they need to be rate-limited (a guest that repeatedly opens and
>>> closes the device as fast as possible should not cause an event storm).
>>
>> I did notice some throttling code in the event emission code... Let me see.
>>
>> set_guest_connected() [hw/char/virtio-console.c]
>>   monitor_protocol_event() [monitor.c]
>>     monitor_protocol_event_queue()
>>
>> Ah okay. So rate limiting is indeed an attribute of the event. I didn't
>> know that. Apparently, the rates are configured statically, in
>> monitor_protocol_event_init() [monitor.c]. Any idea what limit I should set?
> 
> Probably set it to default to the same rate limit as we have for the
> memballoon event, since that is another example of a guest-triggered
> virtio event.  Libvirt has code to tweak the
> guest-stats-polling-interval QOM property when it wants a throttling
> period faster or slower than the default (which I think is 1 second?).

Hm, regarding balloon, those are two separate things.

(a) The balloon stats polling interval is changed with 'qom-set
guest-stats-polling-interval', as you say; see
"docs/virtio-balloon-stats.txt".

However, those stats are not reported as monitor events; "[t]o retrieve
those stats, clients have to query the guest-stats property". The
document names 'qom-get guest-stats'. (I'm omitting the path argument.)
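
For concreteness, the QMP exchange described there looks roughly like
this (the "/machine/peripheral/balloon0" path is only an example; the
actual path depends on how the device was created):

  -> { "execute": "qom-set",
       "arguments": { "path": "/machine/peripheral/balloon0",
                      "property": "guest-stats-polling-interval",
                      "value": 2 } }
  <- { "return": {} }

  -> { "execute": "qom-get",
       "arguments": { "path": "/machine/peripheral/balloon0",
                      "property": "guest-stats" } }
  <- { "return": { "stats": { "stat-free-memory": 845942784, ... },
                   "last-update": 1401466639 } }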

(b) The balloon-related event that is throttled in
monitor_protocol_event_init() [monitor.c], i.e. QEVENT_BALLOON_CHANGE, is
for something else:

virtio_config_writeb() [hw/virtio/virtio.c]
  virtio_balloon_set_config() [hw/virtio/virtio-balloon.c]
                              via VirtioDeviceClass.set_config
    qemu_balloon_changed() [balloon.c]
      monitor_protocol_event(QEVENT_BALLOON_CHANGE) [monitor.c]
        ...

That is, QEVENT_BALLOON_CHANGE is emitted when the guest changes the
virtio-balloon device's configuration. ("Configuration" in the virtio
sense; see the virtio_balloon_config.actual field -- "Number of pages
we've actually got in balloon".) IOW, QEVENT_BALLOON_CHANGE is emitted
when the guest moves some pages to or from the balloon (== returns or
takes pages to/from the host == increases or decreases "actual" to match
"num_pages").

Anyway, we don't need too many balloon details here; I'm just saying
that QEVENT_BALLOON_CHANGE, which is throttled in
monitor_protocol_event_init(), is separate from the QOM property.

The rate limit of QEVENT_BALLOON_CHANGE is static (a compile-time
constant); it is 1 second (1000 msecs). All other throttled events have
the same limit (see monitor_protocol_event_init()), so I think I should
use that limit for the VSERPORT events as well.
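
Concretely, I'd expect the change to be a one- or two-liner along these
lines (a sketch only; the QEVENT_VSERPORT_* names are placeholders until
we settle the one-event-vs-two-events question below):

  static void monitor_protocol_event_init(void)
  {
      qemu_mutex_init(&monitor_event_state_lock);
      /* Limit guest-triggerable events to 1 per second */
      monitor_protocol_event_throttle(QEVENT_BALLOON_CHANGE, 1000);
      /* ... other existing throttle calls elided ... */
      monitor_protocol_event_throttle(QEVENT_VSERPORT_OPEN, 1000);
      monitor_protocol_event_throttle(QEVENT_VSERPORT_CLOSE, 1000);
  }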

> For this particular event, libvirt will probably want to expose to
> management whether the agent is responsive or not (especially since
> libvirt is adding even more agent commands such as virDomainFSFreeze),
> so just like memballoon, libvirt will probably also expose a way to
> tweak the throttling period on the off chance that the default wasn't
> good enough.  But as long as the knob is there, qemu doesn't really have
> to worry about much else.

Hence, that knob isn't there; not even for balloon. (The balloon knob
that *is* there is for something else, not events -- statistics
collection that you can query with a separate QMP command.)

>>> Do we _really_ need two separate events?  Why not just one event
>>> VSERPORT_CHANGE, along with an additional data member 'open':'bool' that
>>> is true when the guest opened the port, and false when the guest closed
>>> the port?
>>
>> That's a valid remark entirely, and I did contemplate the question, but
>> other events seem to exist for CONNECT and DISCONNECT cases separately,
>> already:
>>
>> - SPICE_CONNECTED, SPICE_DISCONNECTED -- even documented together
>> - VNC_CONNECTED, VNC_DISCONNECTED -- here too the accompanying data
>>   structure seems to be identical between connect & disconnect
>>
>> I just wanted to follow that pattern, but I don't insist.
> 
> Anyone else have an opinion?
> 
> You made me think more about it.  One event per state change doesn't
> scale - the moment you add more states, you have to add more events; but
> until management apps write new event handlers, the new state is
> effectively invisible.  On the other hand, a single event with
> information about the latest state is nicer, in that the existing
> management event handler will automatically start seeing the new state
> (of course, it implies that the event handlers have to be written to
> gracefully handle the addition of a new state value that was not
> documented at the time the management app wrote the handler).  So based
> on _that_ argument, I'm now leaning 60:40 towards using 1 event.

OTOH, I believe that the 'virsh qemu-monitor-event' command that you
recommended before will have a harder time filtering out events that may
be "unwanted" in a particular case. With --regex you can provide an
extended regex that uses alternation, but if the event name is the same,
you can't tell the events apart by name alone. Grepping the libvirtd
debug messages is also easier with distinct event names.
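
For example (event names hypothetical, "f20" standing in for the domain
name), with two distinct events both of these work:

  virsh qemu-monitor-event f20 --loop --event VSERPORT_OPEN
  virsh qemu-monitor-event f20 --loop --regex --event 'VSERPORT_(OPEN|CLOSE)'

whereas with a single merged event you can only match the shared name,
and have to post-process the event data to separate open from close.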

> 
> (Or put another way, I think the VNC_{DIS,}CONNECTED events are not
> going to scale well if VNC introduces a third state, and that SPICE_ was
> just blindly copying VNC_ without thinking about the ramifications)

Oh, those third events already exist :) They're called VNC_INITIALIZED
and SPICE_INITIALIZED.

SPICE_INITIALIZED carries more fields than SPICE_CONNECTED and
SPICE_DISCONNECTED.

For VNC, VNC_DISCONNECTED and VNC_INITIALIZED seem to carry the same
data structure.

In general I don't think that scaling is a serious issue here. New
states are not a quantitative but a qualitative thing; a new state means
new code / new semantics, so new states shouldn't be introduced at a
rate that causes a scaling problem.

I'm leaning 60:40 against using 1 event :)

If the above didn't convince you that two events are (slightly) better,
then I can easily merge them into one.
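
To make the two candidate shapes concrete (field names follow your
suggestion; the "channel0" id is just an example):

  (1) two events:

      <- { "event": "VSERPORT_OPEN",  "data": { "id": "channel0" }, ... }
      <- { "event": "VSERPORT_CLOSE", "data": { "id": "channel0" }, ... }

  (2) one event with an 'open' bool:

      <- { "event": "VSERPORT_CHANGE",
           "data": { "id": "channel0", "open": true }, ... }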

> 
>> Anyway, it's of course easier to drop the property, so if you think that
>> controlling the event on the level of the individual virtio-serial port
>> is overkill, I can remove it.
> 
> You may want to wait for other reviewers to chime in, but in my mind
> simpler is better.

OK. Let's see what others think. Thank you very much for your remarks,
I'll address them in v2!

Laszlo
