Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event


From: Luiz Capitulino
Subject: Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event
Date: Mon, 27 May 2013 09:10:11 -0400

On Mon, 27 May 2013 17:34:25 +0800
Amos Kong <address@hidden> wrote:

> On Fri, May 24, 2013 at 08:51:36AM -0400, Luiz Capitulino wrote:
> > On Fri, 24 May 2013 15:10:16 +0300
> > "Michael S. Tsirkin" <address@hidden> wrote:
> > 
> > > On Thu, May 23, 2013 at 01:26:33PM -0400, Luiz Capitulino wrote:
> > > > On Thu, 23 May 2013 20:18:34 +0300
> > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > 
> > > > > On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
> > > > > > On Thu, 16 May 2013 18:17:23 +0300
> > > > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > > > 
> > > > > > > > The existing throttling approach ensures that if the event
> > > > > > > > includes the latest guest information, then the host doesn't
> > > > > > > > even have to do a query, and is guaranteed that reacting to
> > > > > > > > the final event will always see the most recent request.  But
> > > > > > > > most importantly, if the existing throttling works, why do we
> > > > > > > > have to invent a one-off approach for this event instead of
> > > > > > > > reusing existing code?
> > > > > > 
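For context, the existing event throttling referred to above behaves
roughly like the sketch below (illustrative names and timer API, not
QEMU's actual monitor code): within the rate-limit window the newest
payload replaces any pending one, so the event that finally fires always
carries the most recent state.

    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative sketch of per-event rate limiting; not QEMU's code. */
    struct throttle {
        int64_t rate_ms;       /* minimum interval between emissions */
        int64_t last_emit_ms;  /* last time this event hit the wire */
        char *pending;         /* newest payload queued inside the window */
    };

    void emit_now(const char *json);      /* assumed: sends to the client */
    void arm_timer(int64_t deadline_ms);  /* assumed: later flushes 'pending' */

    void throttled_emit(struct throttle *t, char *json, int64_t now_ms)
    {
        if (now_ms - t->last_emit_ms >= t->rate_ms) {
            emit_now(json);               /* outside the window: send now */
            free(json);
            t->last_emit_ms = now_ms;
        } else {
            /* Inside the window: drop the stale payload and keep only the
             * newest, so the delayed event reflects the latest state. */
            free(t->pending);
            t->pending = json;
            arm_timer(t->last_emit_ms + t->rate_ms);
        }
    }
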
> > > > > > Sorry to restart this week-old discussion, but I'm now
> > > > > > reviewing the patch in question, and I dislike how we're
> > > > > > coupling the event and the query command.
> > > > > > 
> > > > > > > Because of the 1st issue above. A large delay because we
> > > > > > 
> > > > > > Has this been measured? How long is this large delay?
> > > > > > 
> > > > > > Also, is it impossible for management to issue query-rx-filter
> > > > > > at a reasonable rate that would also cause the same problems?
> > > > > > IOW, how can we be sure we're fixing anything without trying it
> > > > > > in a real use-case scenario?
> > > > > 
> > > > > Play with priorities and you can make management arbitrarily slow.
> > > > > It's just not sane to assume any timing guarantees for tasks
> > > > > running on Linux.
> > > > 
> > > > Would you mind elaborating? I'm not sure I understand how this answers
> > > > my questions.
> > > 
> > > Maybe I don't understand the questions.
> > > You are asking why the usual throttling isn't sufficient?
> > > This was discussed in this thread already.
> > > That's because it would introduce a huge delay if the guest
> > > changes the mac too often. People don't expect that
> > > changing a mac is a thing they should do slowly.
> 
> > You meant shouldn't?
> > 
> > If I got it correctly, all you want to avoid is calling qobject_from_jsonf()
> > and monitor_protocol_event() in the mac change path, because this will
> > slow down the guest. Did I get it right?
> 
> No.
> 
> We use the QMP event to notify management about the mac changing.
> 
> In this thread, we _wrongly_ considered using the QMP throttling approach
> to delay the event in order to avoid the flooding.
> 
>   eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
> 
> Now we have a solution (using a flag to turn the notification on/off) to
> avoid the flooding: only emit the event if there is no un-read event.
> 
> If we do want to emit the event (the flag is on), we want the event sent
> ASAP (so event_throttle isn't needed).

Unfortunately this doesn't answer my question. I did understand why you're
not using the event throttle API (it's because you don't want to slow down
the guest, not the QMP client).
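
If I read the patch right, the scheme is edge-triggered, roughly like the
sketch below; the names are made up for illustration and are not the
patch's actual identifiers. The change path fires at most one cheap event,
and it is the query that re-arms it:

    #include <stdbool.h>

    /* Illustrative sketch of the flag-based scheme; names are invented. */
    struct nic_state {
        bool event_pending;  /* true while an event is un-read by management */
        /* ... mac table / rx-filter state ... */
    };

    void emit_rx_filter_changed(struct nic_state *nic);   /* assumed helper */
    void reply_with_filter_state(struct nic_state *nic);  /* assumed helper */

    /* Called from the guest-triggered mac/rx-filter change path. */
    void rx_filter_changed(struct nic_state *nic)
    {
        if (!nic->event_pending) {
            emit_rx_filter_changed(nic);  /* at most one event until drained */
            nic->event_pending = true;
        }
        /* Otherwise stay silent: management hasn't queried yet, and its
         * eventual query will return the latest state anyway. */
    }

    /* QMP query-rx-filter handler. */
    void query_rx_filter(struct nic_state *nic)
    {
        reply_with_filter_state(nic);
        nic->event_pending = false;  /* re-arm: the next change fires again */
    }

Note that it is the query handler that re-enables the event; that is
precisely the coupling I'm questioning below.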

My point is whether coupling the event with the query command is really
justified or even if it really fixes the problem. Two points:

 1. Coupling them is bad design and will probably backfire, as we plan
    for a better API for events, where events can be disabled.

 2. Can you actually show that the problem exists, so that we ensure this
    is not premature optimization? It might be a good idea to have this in
    the commit log.

> > (which is to couple the event with the query command) is
> > appropriate. We're in user-space already; many things could slow
> > the guest down apart from the event generation.
> > 
> > Two questions:
> > 
> >  1. Do we know how slow the guest gets (or how many packets are actually
> >     dropped) if the mac is changed too often *and* the event is always sent?
> 
> We always disable the interface first, then change the macaddr.
> But we now have a patch that allows the guest to change the macaddr of
> virtio-net while the interface is running.
> 
> | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
> | Author: Jiri Pirko <address@hidden>
> | Date:   Tue Dec 11 15:33:56 2012 -0500
> | 
> |     [virt] virtio_net: allow to change mac when iface is running
> 
> >  2. Does this solution consider what happens if the QMP client does
> >     respond promptly to the event by issuing the query-rx-filter
> >     command?
> 
> We assume that the QMP client (management) cares about the mac-change
> event, and will query the latest rx-filter state and sync it to the
> macvtap device (a possible management-side handler is sketched at the
> end of this message).
> 
> 1) If the QMP client responds promptly to the event: that's what we expected :)

Won't this slow down the guest? If not, why?

> 
> 2) If the QMP client doesn't respond promptly to the event: packets might
>    be dropped. If we change the mac while the interface is running, we
>    can accept trivial packet loss.
> 
> For the second condition, we need to test in a real environment once
> libvirt finishes the work of processing events.
> 
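
For what it's worth, the management side of this contract could look
roughly like the hypothetical handler below. Only query-rx-filter and the
NIC_RX_FILTER_CHANGED event come from this thread; the handler name and
the connected, capability-negotiated QMP socket qmp_fd are assumptions
made for illustration.

    #include <stdio.h>   /* dprintf() (POSIX.1-2008) */

    /* Hypothetical management-side reaction to NIC_RX_FILTER_CHANGED. */
    void on_rx_filter_changed(int qmp_fd, const char *dev)
    {
        /* The query both fetches the latest mac/rx-mode and re-arms the
         * event flag in QEMU, so later changes produce a fresh event. */
        dprintf(qmp_fd,
                "{ \"execute\": \"query-rx-filter\","
                " \"arguments\": { \"name\": \"%s\" } }\n", dev);
        /* ... read the JSON reply, then program the macvtap device to
         * match (mac address, promiscuous/multicast mode, etc.) ... */
    }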



