qemu-devel

Re: [PATCH v3 00/18] Support Multifd for RDMA migration


From: Dr. David Alan Gilbert
Subject: Re: [PATCH v3 00/18] Support Multifd for RDMA migration
Date: Fri, 18 Dec 2020 20:01:42 +0000
User-agent: Mutt/1.14.6 (2020-07-11)

* Zheng Chuan (zhengchuan@huawei.com) wrote:
> Hi, Dave.
> 
> Since qemu 6.0 is open and some patches of this series have been reviewed,
> might you have time to continue reviewing the rest of them?

Yes, apologies for not getting further; I'll need to attack it again in
the new year;  it's quite hard, since I know the RDMA code, but not the
multifd code that well, and Juan knows the multifd code but not the RDMA
code that well; and it's quite a large series.

Dave

> On 2020/10/25 10:29, Zheng Chuan wrote:
> > 
> > 
> > On 2020/10/24 3:02, Dr. David Alan Gilbert wrote:
> >> * Zheng Chuan (zhengchuan@huawei.com) wrote:
> >>>
> >>>
> >>> On 2020/10/21 17:25, Zhanghailiang wrote:
> >>>> Hi zhengchuan,
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: zhengchuan
> >>>>> Sent: Saturday, October 17, 2020 12:26 PM
> >>>>> To: quintela@redhat.com; dgilbert@redhat.com
> >>>>> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
> >>>>> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
> >>>>> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
> >>>>> fengzhimin1@huawei.com; qemu-devel@nongnu.org
> >>>>> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
> >>>>>
> >>>>> Now I am continuing the work to support multifd for RDMA migration,
> >>>>> based on my colleague Zhiming's work :)
> >>>>>
> >>>>> The previous RFC patches are listed below:
> >>>>> v1:
> >>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
> >>>>> v2:
> >>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
> >>>>>
> >>>>> As described in the previous RFC, RDMA bandwidth is not fully utilized
> >>>>> on 25 Gigabit and faster NICs because RDMA migration uses a single
> >>>>> channel. This patch series adds multifd support for RDMA migration,
> >>>>> built on the multifd framework.
> >>>>>
> >>>>> The comparison between original and multifd RDMA migration was
> >>>>> re-tested for v3.
> >>>>> The VM specifications for migration are as follows (a launch line
> >>>>> matching them is sketched after this list):
> >>>>> - the VM uses 4k pages;
> >>>>> - the number of VCPUs is 4;
> >>>>> - the total memory is 16 GB;
> >>>>> - the 'mempress' tool is used to stress the VM (mempress 8000 500);
> >>>>> - a 25 Gigabit network card is used to migrate;
> >>>>>
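For reference, a QEMU launch line matching that spec might look roughly like
the following; the machine type, accelerator, disk path and device options are
illustrative guesses, not taken from the series:

    qemu-system-x86_64 -machine q35,accel=kvm \
        -smp 4 -m 16G \
        -drive file=/path/to/vm.qcow2,if=virtio \
        -monitor stdio

'mempress 8000 500' is then run inside the guest to keep memory dirty during
the migration.
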
> >>>>> For the original RDMA and MultiRDMA migration, the total migration
> >>>>> times of the VM are as follows:
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>> |             | NOT rdma-pin-all | rdma-pin-all |
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>> | origin RDMA |       26 s       |     29 s     |
> >>>>> -------------------------------------------------
> >>>>> |  MultiRDMA  |       16 s       |     17 s     |
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>>
> >>>>> Test the multifd RDMA migration like this:
> >>>>> virsh migrate --live --multiFd --migrateuri
> >>>>
> >>>> There is no '--multiFd' option in virsh; it seems we added this private
> >>>> option for internal usage. It would be better to provide the testing
> >>>> method using qemu commands (a rough qemu-side equivalent is sketched
> >>>> after the full command below).
> >>>>
> >>>>
> >>> Hi, Hailiang
> >>> Yes, it should be, will update in V4.
> >>>
> >>> Also, Ping.
> >>>
> >>> Dave, Juan.
> >>>
> >>> Any suggestions or comments about this series? We hope this feature can
> >>> catch up with qemu 5.2.
> >>
> >> It's a bit close; I'm not sure if I'll have time to review it on Monday
> >> before the pull.
> >>
> >> Dave
> >>
> > Yes, it is.
> > Then we may wait for the next merge window after a full review :)
> > 
> >>>> Thanks.
> >>>>
> >>>>> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
> >>>>> qemu+tcp://192.168.1.100/system --verbose
> >>>>>
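Following up on Hailiang's suggestion above, a qemu-side equivalent of that
virsh invocation might look roughly like this via the HMP monitor; the
addresses, port and channel count are illustrative, and multifd over RDMA of
course requires this series applied:

    # destination QEMU:
    qemu-system-x86_64 ... -incoming rdma:0.0.0.0:4444

    # source QEMU, in the HMP monitor:
    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 4
    (qemu) migrate -d rdma:192.168.1.100:4444
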
> >>>>> v2 -> v3:
> >>>>>     create multifd ops for both tcp and rdma (see the sketch after
> >>>>>     this list)
> >>>>>     do not export rdma internals, to keep the multifd code clean
> >>>>>     fix a build issue for non-rdma builds
> >>>>>     fix some code style issues and buggy code
> >>>>>
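To make the first changelog item concrete, here is a minimal standalone sketch
of per-transport multifd ops, with the generic code calling through a table.
All names and signatures are illustrative stand-ins, not the series' actual
code (which works with MultiFDSendParams/MultiFDRecvParams from
migration/multifd.h):

    #include <stdbool.h>
    #include <stdio.h>

    /* opaque per-channel state; stand-in for MultiFDSendParams etc. */
    typedef struct MultiFDChannel { int id; } MultiFDChannel;

    /* one ops table per transport; generic multifd code sees only this */
    typedef struct MultiFDOps {
        int (*send_setup)(MultiFDChannel *c);  /* Tx thread init */
        int (*recv_setup)(MultiFDChannel *c);  /* Rx thread init */
    } MultiFDOps;

    static int tcp_send_setup(MultiFDChannel *c)
    { printf("channel %d: tcp Tx setup\n", c->id); return 0; }
    static int tcp_recv_setup(MultiFDChannel *c)
    { printf("channel %d: tcp Rx setup\n", c->id); return 0; }
    static int rdma_send_setup(MultiFDChannel *c)
    { printf("channel %d: rdma Tx setup\n", c->id); return 0; }
    static int rdma_recv_setup(MultiFDChannel *c)
    { printf("channel %d: rdma Rx setup\n", c->id); return 0; }

    static const MultiFDOps multifd_tcp_ops  = { tcp_send_setup,  tcp_recv_setup };
    static const MultiFDOps multifd_rdma_ops = { rdma_send_setup, rdma_recv_setup };

    int main(void)
    {
        /* stand-in for a migrate_use_rdma()-style check at setup time */
        bool use_rdma = true;
        const MultiFDOps *ops = use_rdma ? &multifd_rdma_ops : &multifd_tcp_ops;
        MultiFDChannel c = { .id = 0 };
        return ops->send_setup(&c);
    }

Keeping the rdma details behind such a table, rather than exporting them into
multifd.c, is what the second changelog item appears to be about.
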
> >>>>> Chuan Zheng (18):
> >>>>>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
> >>>>>   migration/rdma: judge whether or not the RDMA is used for migration
> >>>>>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
> >>>>>   migration/rdma: add multifd_setup_ops for rdma
> >>>>>   migration/rdma: do not need sync main for rdma
> >>>>>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
> >>>>>   migration/rdma: add rdma field into multifd send/recv param
> >>>>>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
> >>>>>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
> >>>>>   migration/rdma: Create the multifd recv channels for RDMA
> >>>>>   migration/rdma: record host_port for multifd RDMA
> >>>>>   migration/rdma: Create the multifd send channels for RDMA
> >>>>>   migration/rdma: Add the function for dynamic page registration
> >>>>>   migration/rdma: register memory for multifd RDMA channels
> >>>>>   migration/rdma: only register the memory for multifd channels
> >>>>>   migration/rdma: add rdma_channel into Migrationstate field
> >>>>>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
> >>>>>     mode
> >>>>>   migration/rdma: RDMA cleanup for multifd migration
> >>>>>
> >>>>>  migration/migration.c |  24 +++
> >>>>>  migration/migration.h |  11 ++
> >>>>>  migration/multifd.c   |  97 +++++++++-
> >>>>>  migration/multifd.h   |  24 +++
> >>>>>  migration/qemu-file.c |   5 +
> >>>>>  migration/qemu-file.h |   1 +
> >>>>>  migration/rdma.c      | 503 +++++++++++++++++++++++++++++++++++++++++++++++++-
> >>>>>  7 files changed, 653 insertions(+), 12 deletions(-)
> >>>>>
> >>>>> --
> >>>>> 1.8.3.1
> >>>>
> >>>> .
> >>>>
> >>>
> >>> -- 
> >>> Regards.
> >>> Chuan
> >>>
> > 
> 
> -- 
> Regards.
> Chuan
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



