

From: si-wei liu
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v3 0/5] Support for datapath switching during live migration
Date: Tue, 8 Jan 2019 20:55:35 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0



On 1/7/2019 6:25 PM, Michael S. Tsirkin wrote:
On Mon, Jan 07, 2019 at 05:45:22PM -0800, si-wei liu wrote:
On 01/07/2019 03:32 PM, Michael S. Tsirkin wrote:
On Mon, Jan 07, 2019 at 05:29:39PM -0500, Venu Busireddy wrote:
Implement the infrastructure to support datapath switching during live
migration involving SR-IOV devices.

1. This patch set is based on the current VIRTIO_NET_F_STANDBY feature
     bit and MAC address device pairing.

2. This set of events will be consumed by userspace management software
     to orchestrate all the hot plug and datapath switching activities
     (see the example exchange sketched after this list). This scheme
     requires the fewest QEMU modifications while allowing userspace
     software to build its own intelligence to control the whole process
     of SR-IOV live migration.

3. While the hidden device model (viz. coupled device model) is still
     being explored for automatic hot plugging (QEMU) and automatic datapath
     switching (host-kernel), this series provides a supplemental set
     of interfaces if management software wants to drive the SR-IOV live
     migration on its own. It should not conflict with the hidden device
     model but just offers simplicity of implementation.
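
For illustration only, the expected exchange on the migration source would be
plain QMP along these lines (the event payload fields and device ids below are
placeholders, not the exact schema added by this series):

    -> { "execute": "device_del", "arguments": { "id": "vf0" } }
    <- { "return": {} }
    <- { "event": "FAILOVER_PRIMARY_CHANGED",
         "data": { "device": "vf0", "enabled": false },
         "timestamp": { "seconds": 1546915200, "microseconds": 0 } }
    -> { "execute": "migrate", "arguments": { "uri": "tcp:dst-host:4444" } }
    <- { "return": {} }

That is, QEMU only reports what the guest did; all hot plug and datapath
switching policy stays in the management software.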


Si-Wei Liu (2):
    vfio-pci: Add FAILOVER_PRIMARY_CHANGED event to shorten downtime during failover
    pci: query command extension to check the bus master enabling status of the failover-primary device

Sridhar Samudrala (1):
    virtio_net: Add VIRTIO_NET_F_STANDBY feature bit.

Venu Busireddy (2):
    virtio_net: Add support for "Data Path Switching" during Live Migration.
    virtio_net: Add a query command for FAILOVER_STANDBY_CHANGED event.

---
Changes in v3:
    Fix issues with coding style in patch 3/5.

Changes in v2:
    Added a query command for FAILOVER_STANDBY_CHANGED event.
    Added a query command for FAILOVER_PRIMARY_CHANGED event.
Hmm it looks like all feedback I sent e.g. here:
https://patchwork.kernel.org/patch/10721571/
got ignored.

To summarize I suggest reworking the series adding a new command along
the lines of (naming is up to you):

query-pci-master - this returns status for a device
                   and enables a *single* event after
                   it changes

and then removing all status data from the event,
just notify about the change and *only once*.
Upon the event, management does query-pci-master
and acts accordingly.
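
Purely as an illustration of that shape (the command name is from the
suggestion above; the argument and field names are made up here, not a schema
from this series):

    -> { "execute": "query-pci-master",
         "arguments": { "id": "vf0" } }
    <- { "return": { "device": "vf0", "master": true } }

    <- { "event": "FAILOVER_PRIMARY_CHANGED",
         "data": { "device": "vf0" },
         "timestamp": { "seconds": 1546915200, "microseconds": 0 } }

with no further FAILOVER_PRIMARY_CHANGED events until the next
query-pci-master re-arms the notification.
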
Why removing all status data from the event?
To make sure users do not forget to call query-pci-master to
re-enable more events.
IMO the FAILOVER_PRIMARY_CHANGED event is on the performance path; it is overkill to enforce a round-trip query for every event in normal situations.
It does not hurt to keep them, as the FAILOVER_PRIMARY_CHANGED event is in general pretty low-frequency.
A malicious guest can make it as frequent as it wants to.
OTOH there is no way to limit.
Wouldn't throttling the event rate (say, limiting it to no more than 1 event per second) be a way to limit it (as opposed to trying to control guest behavior)? The other similar events that apply rate limiting don't suppress emission until the next query at all; doing so would only cause more events to be missed. As stated in the earlier example, we should give the guest NIC a chance to flush queued packets even if the ending state is the same between two events.
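
Concretely, with rate limiting plus per-event state (the "enabled" field below
is illustrative only), a quick disabled->enabled->disabled transition still
surfaces both edges to management, one throttle period apart:

    <- { "event": "FAILOVER_PRIMARY_CHANGED",
         "data": { "device": "vf0", "enabled": true },
         "timestamp": { "seconds": 1546915200, "microseconds": 0 } }
    <- { "event": "FAILOVER_PRIMARY_CHANGED",
         "data": { "device": "vf0", "enabled": false },
         "timestamp": { "seconds": 1546915201, "microseconds": 0 } }

whereas suppressing emission until the next query could collapse that pair
into nothing, and the guest would never get the window to flush its queued
packets.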

As can be seen, other similar low-frequency QMP events do carry data.

As this event relates to datapath switching, coalescing events has real
implications: packets might never get a chance to go out, since nothing would
happen at all across a quick transition like disabled->enabled->disabled. I
would rather let at least a few packets be sent over the wire than none. And
who knows how fast management can react and consume these events?

Thanks,
-Siwei
OK if it's so important for latency let's include data in the event.
Please add comments explaining that you must always re-run query
afterwards to make sure it's stable and re-enable more events.
I can add comments describing why we need to carry data in the event, and apply rate limiting to the events. But I don't follow why emission must be suppressed until the next query.
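
For instance, the schema documentation could say something along these lines
(wording only a suggestion; the exact text would go with the event definition
in qapi/net.json):

    ##
    # @FAILOVER_PRIMARY_CHANGED:
    #
    # Emitted when the guest changes the bus master enable state of the
    # failover primary device.  The event carries the state at emission
    # time and is rate-limited, so management should follow up with the
    # corresponding query command to confirm the current, stable state.
    ##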


Thanks,
-Siwei



        





   hmp.c                          |   5 +++
   hw/acpi/pcihp.c                |  27 +++++++++++
   hw/net/virtio-net.c            |  42 +++++++++++++++++
   hw/pci/pci.c                   |   5 +++
   hw/vfio/pci.c                  |  60 +++++++++++++++++++++++++
   hw/vfio/pci.h                  |   1 +
   include/hw/pci/pci.h           |   1 +
   include/hw/virtio/virtio-net.h |   1 +
   include/net/net.h              |   2 +
   net/net.c                      |  61 +++++++++++++++++++++++++
   qapi/misc.json                 |   5 ++-
   qapi/net.json                  | 100 +++++++++++++++++++++++++++++++++++++++++
   12 files changed, 309 insertions(+), 1 deletion(-)



