From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
Date: Mon, 18 Aug 2014 17:14:10 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.0

On 08/18/2014 04:32 PM, zhanghailiang wrote:
> On 2014/8/18 14:55, Jason Wang wrote:
>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>> For all NICs (except virtio-net) emulated by QEMU,
>>> such as e1000, rtl8139, pcnet and ne2k_pci,
>>> QEMU can still receive packets when the VM is not running.
>>> If this happens in *migration's* last PAUSE VM stage,
>>> the new dirty RAM related to those packets will be missed,
>>> and this will lead to a serious network fault in the VM.
>>>
>>> To avoid this, we forbid receiving packets in the generic net code
>>> when the VM is not running. Also, when the runstate changes back to
>>> running, we definitely need to flush queues to get packets flowing
>>> again.
>>
>> You probably need a better title since it does not cover this change.
>>>
>
> Hmm, you are right, I will modify it, thanks. :)
>
>>> Here we implement this in the net layer:
>>> (1) Check the VM runstate in qemu_can_send_packet.
>>> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>>> which will listen for VM runstate changes.
>>> (3) Register a handler function for VM state changes.
>>> When the VM changes back to running, we flush all queues in the
>>> callback function.
>>> (4) Remove the VM state check in virtio_net_can_receive.
>>>
>>> Signed-off-by: zhanghailiang <address@hidden>
>>> ---
>>>   hw/net/virtio-net.c |  4 ----
>>>   include/net/net.h   |  2 ++
>>>   net/net.c           | 32 ++++++++++++++++++++++++++++++++
>>>   3 files changed, 34 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index 268eff9..287d762 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>>>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>       VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>>>
>>> -    if (!vdev->vm_running) {
>>> -        return 0;
>>> -    }
>>> -
>>>       if (nc->queue_index >= n->curr_queues) {
>>>           return 0;
>>>       }
>>> diff --git a/include/net/net.h b/include/net/net.h
>>> index ed594f9..a294277 100644
>>> --- a/include/net/net.h
>>> +++ b/include/net/net.h
>>> @@ -8,6 +8,7 @@
>>>   #include "net/queue.h"
>>>   #include "migration/vmstate.h"
>>>   #include "qapi-types.h"
>>> +#include "sysemu/sysemu.h"
>>>
>>>   #define MAX_QUEUE_NUM 1024
>>>
>>> @@ -96,6 +97,7 @@ typedef struct NICState {
>>>       NICConf *conf;
>>>       void *opaque;
>>>       bool peer_deleted;
>>> +    VMChangeStateEntry *vmstate;
>>>   } NICState;
>>>
>>>   NetClientState *qemu_find_netdev(const char *id);
>>> diff --git a/net/net.c b/net/net.c
>>> index 6d930ea..21f0d48 100644
>>> --- a/net/net.c
>>> +++ b/net/net.c
>>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>>       return nc;
>>>   }
>>>
>>> +static void nic_vmstate_change_handler(void *opaque,
>>> +                                       int running,
>>> +                                       RunState state)
>>> +{
>>> +    NICState *nic = opaque;
>>> +    NetClientState *nc;
>>> +    int i, queues;
>>> +
>>> +    if (!running) {
>>> +        return;
>>> +    }
>>> +
>>> +    queues = MAX(1, nic->conf->peers.queues);
>>> +    for (i = 0; i < queues; i++) {
>>> +        nc = &nic->ncs[i];
>>> +        if (nc->receive_disabled
>>> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>>> +            continue;
>>> +        }
>>> +        qemu_flush_queued_packets(nc);
>>
>> How about simply purging the receive queue during stop? If that is
>> OK, there's no need to introduce an extra vmstate change handler.
>>
>
> I don't know whether it is OK to purge the received packets; that was
> suggested by Stefan Hajnoczi, and I am waiting for his opinion. :)
>
> I think we still need the extra vmstate change handler. Without the
> change handler, we don't know when the VM is about to stop, i.e. when
> to call qemu_purge_queued_packets.
>

Or you can do it in do_vm_stop().
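Roughly like this (just a sketch; qemu_purge_all_queued_packets() is a
hypothetical helper, but net_clients and qemu_purge_queued_packets()
already live in net/net.c):

    /* net/net.c: drop every packet still queued for delivery */
    void qemu_purge_all_queued_packets(void)
    {
        NetClientState *nc;

        QTAILQ_FOREACH(nc, &net_clients, next) {
            qemu_purge_queued_packets(nc);
        }
    }

Then do_vm_stop() just calls it after pausing the vcpus, and no extra
per-NIC handler is needed.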
>>> +    }
>>> +}
>>> +
>>>   NICState *qemu_new_nic(NetClientInfo *info,
>>>                          NICConf *conf,
>>>                          const char *model,
>>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>>       nic->ncs = (void *)nic + info->size;
>>>       nic->conf = conf;
>>>       nic->opaque = opaque;
>>> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>>> +                                                    nic);
>>>
>>
>> Does this depend on other vm state change handlers being called
>> first? I mean, virtio has its own vmstate change handler, which seems
>> to be called after this one. Is this an issue?
>>
>
> Yes, it is. The VM state check in virtio-net is unnecessary;
> actually it would prevent the flushing, which is why we do step (4),
> "Remove the VM state check in virtio_net_can_receive".

How about other handlers (especially the kvm/xen specific ones)? If
not, vm_start() looks like a safer place, since all handlers have
already been called by then.
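Something like this (a sketch only; net_flush_all_queues() is a
hypothetical helper, while vm_state_notify() is what vm_start() already
uses to invoke the registered handlers):

    /* hypothetical helper exported from net/net.c */
    void net_flush_all_queues(void)
    {
        NetClientState *nc;

        QTAILQ_FOREACH(nc, &net_clients, next) {
            qemu_flush_queued_packets(nc);
        }
    }

vm_start() would call net_flush_all_queues() right after
vm_state_notify(1, RUN_STATE_RUNNING), so every change state handler
(virtio, vhost, kvm, xen) has already run by the time we flush.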
>
> Besides, i think it is OK to do common things in vmstate_change handler
> of generic net layer and do private things in their own vmstate_change
> handlers. :)

This is true only if there's no dependency. Virtio has a generic
vmstate change handler; a subtle change introduced by your patch is
that even if vhost is enabled, QEMU will still process packets during
vm start, since qemu_flush_queued_packets() can run before vhost_net is
started (the virtio vmstate change handler is called after this one).
So probably we need to do only the purging, which eliminates the
processing during vm start.
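That is, roughly (a purge-only sketch based on the handler in this
patch; same names, only the body changes):

    static void nic_vmstate_change_handler(void *opaque,
                                           int running,
                                           RunState state)
    {
        NICState *nic = opaque;
        int i, queues;

        if (running) {
            /* don't flush here: handlers registered before this one
             * (e.g. vhost_net start) may not have run yet */
            return;
        }

        queues = MAX(1, nic->conf->peers.queues);
        for (i = 0; i < queues; i++) {
            qemu_purge_queued_packets(&nic->ncs[i]);
        }
    }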
>
>>>       for (i = 0; i < queues; i++) {
>>>           qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>>           qemu_free_net_client(nc);
>>>       }
>>>
>>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>>       g_free(nic);
>>>   }
>>>
>>> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>>>
>>>   int qemu_can_send_packet(NetClientState *sender)
>>>   {
>>> +    int vmstat = runstate_is_running();
>>> +
>>> +    if (!vmstat) {
>>> +        return 0;
>>> +    }
>>> +
>>>       if (!sender->peer) {
>>>           return 1;
>>>       }
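For reference, this is the check that backends consult before pushing
packets toward the guest; paraphrasing net/tap.c from memory (treat it
as a sketch, not the exact code):

    /* backend side: only read from the tap fd when the peer,
     * i.e. the emulated NIC, can accept a packet */
    static int tap_can_send(void *opaque)
    {
        TAPState *s = opaque;

        return qemu_can_send_packet(&s->nc);
    }

So failing the runstate check here stops delivery for every backend at
once.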