
Re: [Qemu-devel] [PATCH] Tap: fix vcpu long time io blocking on tap


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH] Tap: fix vcpu long time io blocking on tap
Date: Thu, 17 Jul 2014 13:36:51 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0

On 07/17/2014 11:43 AM, Wangkai (Kevin,C) wrote:
>
>> -----Original Message-----
>> From: Stefan Hajnoczi [mailto:address@hidden]
>> Sent: Tuesday, July 15, 2014 11:00 PM
>> To: Wangkai (Kevin,C)
>> Cc: Stefan Hajnoczi; Lee yang; address@hidden;
>> address@hidden
>> Subject: Re: [Qemu-devel] [PATCH] Tap: fix vcpu long time io blocking
>> on tap
>>
>> On Mon, Jul 14, 2014 at 10:44:58AM +0000, Wangkai (Kevin,C) wrote:
>>> Here is the network in detail:
>>>
>>> +--------------------------------------------+
>>> | The host adds tap1 and eth10 to bridge br1 |                     +--------+
>>> | +------------+                             |                     | send   |
>>> | |   VM  eth1-+-tap1 --- bridge --- eth10 --+---------------------+ packets|
>>> | +------------+                             |                     |        |
>>> +--------------------------------------------+                     +--------+
>>>
>>> QEMU starts the VM with virtio, using a tap interface; the options are:
>>> -net nic,vlan=101,model=virtio \
>>> -net tap,vlan=101,ifname=tap1,script=no,downscript=no
>> Use the newer -netdev/-device syntax to get offload support and
>> slightly better performance:
>>
>> -netdev tap,id=tap0,ifname=tap1,script=no,downscript=no \
>> -device virtio-net-pci,netdev=tap0
>>
>>> And tap1 and eth10 are added to bridge br1 on the host:
>>> brctl addif br1 tap1
>>> brctl addif br1 eth10
>>>
>>> total recv 505387 time 2000925 us:
>>> means a single call to tap_send() handled 505,387 packets; the packet
>>> payload was 300 bytes, and the time spent in tap_send() was 2,000,925
>>> microseconds, measured by recording timestamps at the start and end of
>>> tap_send().
>>> We were just testing the performance of the VM.
>> That is 150 MB of incoming packets in a single tap_send().  Network rx
>> queues are maybe a few thousand packets, so I wonder what is going on
>> here.
>>
>> Maybe more packets are arriving while QEMU is reading them and we keep
>> looping.  That's strange though because the virtio-net rx virtqueue
>> should fill up (it only has 256 entries).
>>
>> Can you investigate more and find out exactly what is going on?  It's
>> not clear yet whether adding a budget is the solution or just hides a
>> deeper problem.
>>
>> Stefan
> [Wangkai (Kevin,C)] 
>
> Hi Stefan,
>
> I think I have found the problem: why the 256-entry virtqueue cannot
> stop packet reception.
>
> I started an SMP guest with 2 cores: one core was pending on I/O, and
> the other core was receiving the packets, with QEMU filling the
> virtqueue while the guest kernel moved the packets out of the queue and
> processed them.
>
> They were racing: the virtqueue only fills up, and the receive loop only
> finishes, when the guest has received enough packets and is receiving
> them more slowly than QEMU is delivering them.
I hit a similar issue in the past: when using pktgen to inject packets
from the tap into the guest directly, the guest was slow to respond to
any other I/O event.

This is similar to the tx path, where we have tx_burst and tx_timer to
solve it. For rx, we could probably impose a limit in tap_send() as this
patch does, since e1000 may have the same issue.

More generally, we need some mechanism to guarantee fairness between
devices, to prevent one from starving the others.
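
To make the idea concrete, a budgeted receive loop could look roughly
like the sketch below. This is only an illustration of the approach, not
the actual patch: TAP_SEND_BUDGET and the exact helper signatures are
assumptions modelled on net/tap.c.

/* Sketch: cap the number of packets handled per tap_send() call. */
#define TAP_SEND_BUDGET 256   /* assumed per-call packet budget */

static void tap_send(void *opaque)
{
    TAPState *s = opaque;
    int budget = TAP_SEND_BUDGET;

    while (budget-- > 0 && qemu_can_send_packet(&s->nc)) {
        int size = tap_read_packet(s->fd, s->buf, sizeof(s->buf));
        if (size <= 0) {
            break;            /* tap fd drained, nothing left to read */
        }
        qemu_send_packet(&s->nc, s->buf, size);
    }
    /* Return to the main loop even if more packets are pending: the
     * tap fd stays readable, so tap_send() runs again later, and other
     * file descriptors get a chance to be serviced in between. */
}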
>
> And I have tried the -netdev/-device syntax to start the guest again,
> and got very little improvement.
>
> Regards
> Wangkai
>



