Re: [PATCH v2 1/2] migration: Fix rdma migration failed
From: Peter Xu
Subject: Re: [PATCH v2 1/2] migration: Fix rdma migration failed
Date: Fri, 6 Oct 2023 11:52:10 -0400
On Tue, Oct 03, 2023 at 08:57:07PM +0200, Juan Quintela wrote:
> commit c638f66121ce30063fbf68c3eab4d7429cf2b209
> Author: Juan Quintela <quintela@redhat.com>
> Date: Tue Oct 3 20:53:38 2023 +0200
>
> migration: Non multifd migration don't care about multifd flushes
>
> RDMA was having trouble because
> migrate_multifd_flush_after_each_section() can only be true or false,
> but we don't want to send any flush when we are not in multifd
> migration.
>
> CC: Fabiano Rosas <farosas@suse.de>
> Reported-by: Li Zhijian <lizhijian@fujitsu.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> diff --git a/migration/ram.c b/migration/ram.c
> index e4bfd39f08..716cef6425 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1387,7 +1387,8 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
>          pss->page = 0;
>          pss->block = QLIST_NEXT_RCU(pss->block, next);
>          if (!pss->block) {
> -            if (!migrate_multifd_flush_after_each_section()) {
> +            if (migrate_multifd() &&
> +                !migrate_multifd_flush_after_each_section()) {
>                  QEMUFile *f = rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
>                  int ret = multifd_send_sync_main(f);
>                  if (ret < 0) {
> @@ -3064,7 +3065,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>          return ret;
>      }
>
> -    if (!migrate_multifd_flush_after_each_section()) {
> +    if (migrate_multifd() && !migrate_multifd_flush_after_each_section()) {
>          qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
>      }
>
> @@ -3176,7 +3177,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>  out:
>      if (ret >= 0
>          && migration_is_setup_or_active(migrate_get_current()->state)) {
> -        if (migrate_multifd_flush_after_each_section()) {
> +        if (migrate_multifd() && migrate_multifd_flush_after_each_section()) {
>              ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
>              if (ret < 0) {
>                  return ret;
> @@ -3253,7 +3254,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>          return ret;
>      }
>
> -    if (!migrate_multifd_flush_after_each_section()) {
> +    if (migrate_multifd() && !migrate_multifd_flush_after_each_section()) {
>          qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
>      }
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> @@ -3760,7 +3761,7 @@ int ram_load_postcopy(QEMUFile *f, int channel)
>              break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
> -            if (migrate_multifd_flush_after_each_section()) {
> +            if (migrate_multifd() && migrate_multifd_flush_after_each_section()) {
>                  multifd_recv_sync_main();
>              }
>              break;
> @@ -4038,7 +4039,8 @@ static int ram_load_precopy(QEMUFile *f)
>              break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
> -            if (migrate_multifd_flush_after_each_section()) {
> +            if (migrate_multifd() &&
> +                migrate_multifd_flush_after_each_section()) {
>                  multifd_recv_sync_main();
>              }
>              break;
Reviewed-by: Peter Xu <peterx@redhat.com>
Did you forget to send this out formally? Even though f1de309792d6656e landed
(which, IMHO, it shouldn't have..), IIUC rdma is still broken..
Thanks,
--
Peter Xu