From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH v4 01/15] util: introduce gsource event abstraction
Date: Fri, 19 Apr 2013 13:59:06 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Apr 19, 2013 at 02:52:08PM +0800, liu ping fan wrote:
> On Thu, Apr 18, 2013 at 10:01 PM, Stefan Hajnoczi <address@hidden> wrote:
> > On Wed, Apr 17, 2013 at 04:39:10PM +0800, Liu Ping Fan wrote:
> >> +static gboolean prepare(GSource *src, gint *time)
> >> +{
> >> +    EventGSource *nsrc = (EventGSource *)src;
> >> +    int events = 0;
> >> +
> >> +    if (!nsrc->readable && !nsrc->writable) {
> >> +        return false;
> >> +    }
> >> +    if (nsrc->readable && nsrc->readable(nsrc->opaque)) {
> >> +        events |= G_IO_IN;
> >> +    }
> >> +    if ((nsrc->writable) && nsrc->writable(nsrc->opaque)) {
> >> +        events |= G_IO_OUT;
> >> +    }
> >
> > G_IO_ERR, G_IO_HUP, G_IO_PRI?
> >
> > Here is the select(2) to GIOCondition mapping:
> > rfds -> G_IO_IN | G_IO_HUP | G_IO_ERR
> > wfds -> G_IO_OUT | G_IO_ERR
> > xfds -> G_IO_PRI
> >
> Does G_IO_PRI only apply to the read direction?

Yes.

> > In other words, we're missing events by just using G_IO_IN and G_IO_OUT.
> > Whether that matters depends on EventGSource users.  For sockets it can
> > matter.
> >
> I think you mean we should just prepare all of them and let dispatch
> decide how to handle them, right?

The user must decide which events to monitor.  Otherwise the event loop
may run at 100% CPU due to events that are monitored but not handled by
the user.
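
To make that concrete, here is a minimal sketch (not the patch itself) of a
prepare() that applies the select(2) mapping above while only polling for what
the caller asked to monitor.  The gfd and user_events members are hypothetical
names used for this sketch, not fields from the patch:

#include <glib.h>

typedef struct EventGSource {
    GSource source;
    GPollFD gfd;                        /* fd registered with the main loop */
    GIOCondition user_events;           /* conditions the caller wants to see */
    gboolean (*readable)(void *opaque);
    gboolean (*writable)(void *opaque);
    void *opaque;
} EventGSource;

static gboolean prepare(GSource *src, gint *timeout)
{
    EventGSource *nsrc = (EventGSource *)src;
    GIOCondition events = 0;

    *timeout = -1;

    if (nsrc->readable && nsrc->readable(nsrc->opaque)) {
        /* readers also need to see hang-up and error */
        events |= G_IO_IN | G_IO_HUP | G_IO_ERR;
    }
    if (nsrc->writable && nsrc->writable(nsrc->opaque)) {
        events |= G_IO_OUT | G_IO_ERR;
    }

    /* only poll for conditions the caller will actually handle,
     * otherwise the loop can spin at 100% CPU */
    nsrc->gfd.events = events & nsrc->user_events;

    /* FALSE: nothing ready yet, let the main loop poll the fd */
    return FALSE;
}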

> >> +void event_source_release(EventGSource *src)
> >> +{
> >> +    g_source_destroy(&src->source);
> >
> > Leaks src.
> >
> All of the memory used by EventGSource is allocated by g_source_new(),
> so g_source_destroy() can reclaim all of it.

Okay, then the bug is in events_source_release(), which calls g_free(src)
after g_source_destroy(&src->source).
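
For what it's worth, a minimal sketch of a release path that follows the usual
GLib ownership rules (assuming the struct embeds its GSource as 'source', as in
the quoted code):

void events_source_release(EventsGSource *src)
{
    /* detach the source from whatever GMainContext it is attached to;
     * this does not free it */
    g_source_destroy(&src->source);

    /* drop the reference taken by g_source_new(); GLib frees the whole
     * allocation, including the EventsGSource tail, when the refcount
     * reaches zero, so g_free(src) must not be called on it */
    g_source_unref(&src->source);
}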

> >> +EventsGSource *events_source_new(GSourceFuncs *funcs, GSourceFunc dispatch_cb, void *opaque)
> >> +{
> >> +    EventsGSource *src = (EventsGSource *)g_source_new(funcs, sizeof(EventsGSource));
> >> +
> >> +    /* 8bits size at initial */
> >> +    src->bmp_sz = 8;
> >> +    src->alloc_bmp = g_malloc0(src->bmp_sz >> 3);
> >
> > This is unportable.  alloc_bmp is an unsigned long, but you are
> > allocating just one byte!
> >
> I had thought of relying on bmp_sz to keep the bit operations on
> alloc_bmp correct. And if EventsGSource->pollfds is allocated with 64
> instances at initialization, it costs too much.  I can fix it with more
> careful code when alloc_bmp's size grows.
> 
> > Please drop the bitmap approach and use a doubly-linked list or another
> > glib container type of your choice.  It needs 3 operations: add, remove,
> > and iterate.
> >
> But in the case of slirp, owing to connections and disconnections on
> the network, slirp's sockets can change quickly and dynamically.
> The bitmap approach is something like a slab allocator, while the glib
> container types lack such support (maybe it could use two GArrays,
> inuse[] and free[]).

Doubly-linked list insertion and removal are O(1).

The linked list nodes can be allocated with g_slice_alloc(), which is
efficient.

Iterating linked lists isn't cache-friendly but this is premature
optimization.  I bet the userspace TCP - pulling packets apart - is more
of a CPU bottleneck than a doubly-linked list of fds.

Please use existing data structures instead of writing them from scratch
unless there is a real need (e.g. profiling shows it matters).
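
As an illustration only, here is a minimal sketch of list-based bookkeeping for
the slirp fds using GList and the slice allocator.  The SlirpFd record and the
function names are made up for this sketch, not taken from the patch:

#include <glib.h>

typedef struct SlirpFd {
    GPollFD pfd;            /* fd plus requested/returned conditions */
    GList *link;            /* back-pointer to this entry's list node */
} SlirpFd;

typedef struct EventsGSource {
    GSource source;
    GList *pollfds;         /* list of SlirpFd *, one per open socket */
    void *opaque;
} EventsGSource;

/* add: O(1); the record comes from the slice allocator */
static SlirpFd *events_source_add_fd(EventsGSource *src, int fd)
{
    SlirpFd *entry = g_slice_new0(SlirpFd);

    entry->pfd.fd = fd;
    src->pollfds = g_list_prepend(src->pollfds, entry);
    entry->link = src->pollfds;   /* prepend puts the new node at the head */
    return entry;
}

/* remove: O(1), because the entry remembers its own list node */
static void events_source_remove_fd(EventsGSource *src, SlirpFd *entry)
{
    src->pollfds = g_list_delete_link(src->pollfds, entry->link);
    g_slice_free(SlirpFd, entry);
}

/* iterate: e.g. from prepare()/check(), visiting every tracked fd */
static void events_source_foreach(EventsGSource *src,
                                  void (*fn)(SlirpFd *entry, void *opaque),
                                  void *opaque)
{
    GList *l;

    for (l = src->pollfds; l; l = l->next) {
        fn((SlirpFd *)l->data, opaque);
    }
}

GList itself allocates its nodes from the slice allocator, so this stays cheap
even when slirp opens and closes sockets quickly.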


