Re: [Qemu-devel] [PATCH] net: QEMU_NET_PACKET_FLAG_MORE introduced


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH] net: QEMU_NET_PACKET_FLAG_MORE introduced
Date: Mon, 9 Dec 2013 15:55:00 +0200

On Mon, Dec 09, 2013 at 01:42:30PM +0100, Stefan Hajnoczi wrote:
> On Mon, Dec 09, 2013 at 01:14:31PM +0200, Michael S. Tsirkin wrote:
> > On Mon, Dec 09, 2013 at 11:55:57AM +0100, Vincenzo Maffione wrote:
> > > If you don't think adding the new flag support for virtio-net is a good
> > > idea (though TAP performance is not affected in every case), we could
> > > also make it optional.
> > > 
> > > 
> > > Cheers
> > >   Vincenzo
> > > 
> > 
> > I think it's too early to say whether this patch is beneficial for
> > netmap, too.  It looks like something that trades off latency
> > for throughput, and this is a decision the endpoint (VM) should
> > make, not the network (host).
> > So you should measure with offloads on before drawing conclusions about it.
> 
> Just to check my understanding, we're talking about the following kind
> of batching:
> 
>   int num_packets = peek_available_packets(device);
>   while (num_packets-- > 0) {
>       int flags = MORE;
>       if (num_packets == 0) {
>           flags = NONE;
>       }
>       qemu_net_send_packet(..., flags);
>   }
> 
> In other words, this only batches up a single burst of packets.  It
> doesn't introduce timers or blocking calls.

Yes.

> So the effect of batching should be relatively small on latency.  In
> fact, it's almost like sendmmsg(2)/recvmmsg(2) but using a
> one-packet-at-a-time interface.
> 
> Does this sound right?
> 
> Stefan
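
For comparison, the sendmmsg(2) batching Stefan refers to looks roughly
like this at the syscall level: one system call submits the whole burst.
This is only a sketch; the socket and per-packet iovecs are assumed to be
set up elsewhere, and error handling is omitted.

    /* Submit a burst of up to 64 packets in one sendmmsg(2) call.
     * iovs[i] points at the buffer for packet i; sock is a connected
     * datagram socket.  Returns the number of packets actually sent. */
    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_burst(int sock, struct iovec *iovs, unsigned int n)
    {
        struct mmsghdr msgs[64];

        if (n > 64) {
            n = 64;
        }
        for (unsigned int i = 0; i < n; i++) {
            memset(&msgs[i], 0, sizeof(msgs[i]));
            msgs[i].msg_hdr.msg_iov = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        /* One syscall for the whole burst instead of n sendmsg() calls. */
        return sendmmsg(sock, msgs, n, 0);
    }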

Why would it be small?  Consider a queue of 256 packets:
you are sending out a single short packet, followed by a burst
of 255 larger ones.  The short packet is not transmitted until
QEMU has finished processing all 255 larger packets.
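
Concretely: the flag defers the flush until the last packet of the
burst.  A minimal sketch of the receive side (enqueue() and
flush_queue() are hypothetical helpers, not actual QEMU APIs; only the
flag name comes from the patch):

    /* Hypothetical receive path: buffer each packet, and only deliver
     * the whole batch once a packet arrives without FLAG_MORE. */
    static void receive_packet(struct Packet *pkt, int flags)
    {
        enqueue(pkt);                      /* buffer, do not deliver yet */
        if (!(flags & QEMU_NET_PACKET_FLAG_MORE)) {
            flush_queue();                 /* end of burst: deliver all */
        }
    }

With a 256-packet burst, the short packet enqueued first sits in the
queue until the 256th packet arrives without the flag and triggers the
flush.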

-- 
MST


