From: David Gibson
Subject: Re: [Qemu-devel] [PATCH 2/3] hw/net/spapr_llan: Fix receive buffer handling for better performance
Date: Sun, 20 Mar 2016 15:21:24 +1100
User-agent: Mutt/1.5.24 (2015-08-30)

On Fri, Mar 18, 2016 at 08:56:56AM +0100, Thomas Huth wrote:
> On 17.03.2016 23:33, David Gibson wrote:
> > On Thu, Mar 17, 2016 at 04:15:38PM +0100, Thomas Huth wrote:
> >> On 17.03.2016 08:30, Thomas Huth wrote:
> >>> On 17.03.2016 07:23, David Gibson wrote:
> >>>> On Wed, Mar 16, 2016 at 01:16:50PM +0100, Thomas Huth wrote:
> >>>>>
> >>>>> This patch introduces an alternate way of handling the receive
> >>>>> buffers of the spapr-vlan device, resulting in much better
> >>>>> receive performance for the guest.
> >> [...]
> >>>>> +/**
> >>>>> + * Enqueuing receive buffer by adding it to one of our receive buffer pools
> >>>>> + */
> >>>>> +static target_long spapr_vlan_add_rxbuf_to_pool(VIOsPAPRVLANDevice *dev,
> >>>>> +                                                target_ulong buf)
> >>>>> +{
> >>>>> +    int size = VLAN_BD_LEN(buf);
> >>>>> +    int pool;
> >>>>> +
> >>>>> +    pool = spapr_vlan_get_rx_pool_id(dev, size);
> >>>>> +
> >>>>> +    /* No matching pool found? Try to create a new one */
> >>>>> +    if (pool < 0) {
> >>>>> +        for (pool = RX_MAX_POOLS - 1; pool >= 0; pool--) {
> >>>>
> >>>> I don't think this loop actually accomplishes anything.  Either the
> >>>> last slot is free, in which case you use it, then sort into place, or
> >>>> it's not, in which case you've hit the maximum number of buffer pools.
> >>>
> >>> Oh, you're right. Well spotted! I'll rework my patch to do it without
> >>> that loop.
> >>
> >> Wait, no, there is a case where this loop really is required:
> >>
> >> 1) All pools are in use and filled with at least one BD.
> >> 2) A user in the guest suddenly decides to change the buffer size of
> >>    one of the pools via the guest's /sys fs.
> >> 3) The guest driver tries to add buffers with a new size that does
> >>    not match the size of any of the pools on the host.
> >> 4) Once the host pool that contained the BDs of the now-unused size
> >>    runs empty, we should recycle that pool for the buffers with the
> >>    new size instead. Since that buffer pool might not be at the end
> >>    of the list, we've got to scan all buffer pools here to make sure
> >>    we find it.
> >>
> >> So I think the for-loop should stay as it is.
> > 
> > Ah, good point.  I think I was assuming that the pools got sorted when
> > one was emptied as well, but they're not and I suspect it's not a good
> > idea to do so.
> > 
> > Hmm.. I wonder if there's a brief way of explaining the above to put
> > in the comment.
> 
> Something like:
> 
>     /*
>      * If the guest used all pools, but changed the size of one pool
>      * in between, we might need to recycle that pool here (if it has
>      * already been emptied). Thus we need to scan all buffer pools
>      * here, not only the last one (which has the highest probability
>      * of being empty).
>      */
> 
> ?
> 
> Or is that too verbose already?

Eh, it's written, might as well throw it in.
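
For reference, a minimal sketch of how the allocation path discussed
above could end up looking with the agreed comment folded in. The pool
bookkeeping here (rx_pool[], count, bufsize, RX_POOL_MAX_BDS, returning
H_RESOURCE on failure) is an illustrative assumption based on this
thread, not code quoted from the patch:

static target_long spapr_vlan_add_rxbuf_to_pool(VIOsPAPRVLANDevice *dev,
                                                target_ulong buf)
{
    int size = VLAN_BD_LEN(buf);
    int pool;

    pool = spapr_vlan_get_rx_pool_id(dev, size);

    /*
     * No matching pool found? Try to create a new one.
     *
     * If the guest used all pools, but changed the size of one pool
     * in between, we might need to recycle that pool here (if it has
     * already been emptied). Thus we need to scan all buffer pools
     * here, not only the last one (which has the highest probability
     * of being empty).
     */
    if (pool < 0) {
        for (pool = RX_MAX_POOLS - 1; pool >= 0; pool--) {
            if (dev->rx_pool[pool]->count == 0) {
                /* Empty pool found: recycle it for the new buffer size */
                dev->rx_pool[pool]->bufsize = size;
                break;
            }
        }
        if (pool < 0) {
            /* All pools occupied and none empty: out of resources */
            return H_RESOURCE;
        }
    }

    if (dev->rx_pool[pool]->count >= RX_POOL_MAX_BDS) {
        return H_RESOURCE;
    }

    dev->rx_pool[pool]->bds[dev->rx_pool[pool]->count++] = buf;

    return 0;
}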


-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
