qemu-block
From: Leonardo Bras Soares Passos
Subject: Re: [PATCH v9 7/7] multifd: Implement zero copy write in multifd migration (multifd-zero-copy)
Date: Tue, 26 Apr 2022 19:58:09 -0300

Hello Peter, thanks for helping!

On Tue, Apr 26, 2022 at 1:02 PM Peter Xu <peterx@redhat.com> wrote:
>
> Leo,
>
> This patch looks mostly good to me, a few nitpicks below.
>
> On Mon, Apr 25, 2022 at 06:50:56PM -0300, Leonardo Bras wrote:
[...]
> >      }
> > +
> > +    /*
> > +     * When using zero-copy, it's necessary to flush after each iteration to
> > +     * make sure pages from earlier iterations don't end up replacing newer
> > +     * pages.
> > +     */
> > +    flush_zero_copy = migrate_use_zero_copy_send();
>
> Would you mind inline it if it's only used once?

It's not obvious in the diff, but this is used in a loop below, so I
introduced the variable to avoid calling migrate_use_zero_copy_send() once
per multifd channel.
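For readers following outside the QEMU tree, the hoisting described above is the usual pattern of querying an invariant once before a fan-out loop. A minimal standalone sketch (the function and counter names here are made up for illustration, not QEMU symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for migrate_use_zero_copy_send(); the counter lets us
 * verify it is queried once, not once per channel. */
static int zero_copy_queries;

static bool use_zero_copy_send(void)
{
    zero_copy_queries++;
    return true;
}

/* Hypothetical per-channel flush pass; returns channels flushed. */
static int flush_all_channels(int num_channels)
{
    int flushed = 0;
    /* Query the capability once, outside the loop ... */
    bool flush_zero_copy = use_zero_copy_send();

    for (int i = 0; i < num_channels; i++) {
        if (flush_zero_copy) {
            /* ... then apply the cached result to every channel. */
            flushed++;
        }
    }
    return flushed;
}
```

With eight channels, `flush_all_channels(8)` flushes all eight while leaving `zero_copy_queries` at 1.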

>
> It's great to have that comment, but IMHO it could be more explicit, even
> marking a TODO showing that maybe we could do better in the future:
>
>   /*
>    * When using zero-copy, it's necessary to flush the pages before any of
>    * the pages can be sent again, so we'll make sure the new version of the
>    * pages will always arrive _later_ than the old pages.
>    *
>    * Currently we achieve this by flushing the zero-copy requested writes
>    * per ram iteration, but in the future we could potentially optimize it
>    * to be less frequent, e.g. only after we finished one whole scanning of
>    * all the dirty bitmaps.
>    */
>

Thanks! I will insert that in the next version.

The thing is, I was under the impression that an iteration was equivalent
to a whole scan of all the dirty bitmaps. I see now that may not be the
case.
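For readers, the ordering hazard that per-iteration flushing guards against can be shown with a toy model of asynchronous completions (this is purely illustrative C, not QEMU code; the names are invented):

```c
#define NPAGES 4

/* Toy model: zero-copy sends complete asynchronously; dest[] records
 * whichever completion lands last for each page. */
static int dest[NPAGES];
static int pending_page[NPAGES * 2], pending_ver[NPAGES * 2];
static int npending;

static void async_send(int page, int ver)
{
    pending_page[npending] = page;
    pending_ver[npending] = ver;
    npending++;
}

/* Drain completions in reverse order: a worst case where the oldest
 * queued write lands last. */
static void flush(void)
{
    for (int i = npending - 1; i >= 0; i--) {
        dest[pending_page[i]] = pending_ver[i];
    }
    npending = 0;
}

/* Two iterations touching the same page, with a flush between them:
 * iteration 1's write can never land after iteration 2's. */
static int final_version_with_flush(void)
{
    async_send(0, 1);   /* iteration 1 */
    flush();            /* per-iteration flush */
    async_send(0, 2);   /* iteration 2: newer copy of the same page */
    flush();
    return dest[0];
}

/* Without the intermediate flush, the stale write may win. */
static int final_version_without_flush(void)
{
    npending = 0;
    async_send(0, 1);
    async_send(0, 2);
    flush();            /* old write completes last: stale data wins */
    return dest[0];
}
```

With the flush the destination ends at version 2; without it, the worst-case completion order leaves the stale version 1 in place, which is exactly what the comment in the patch is warning about.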

[...]
> > @@ -688,10 +708,9 @@ static void *multifd_send_thread(void *opaque)
> >                  p->iov[0].iov_base = p->packet;
> >              }
> >
> > -            ret = qio_channel_writev_all(p->c, p->iov + iov_offset,
> > -                                         p->iovs_num - iov_offset,
> > -                                         &local_err);
> > -
> > +            ret = qio_channel_writev_full_all(p->c, p->iov + iov_offset,
> > +                                              p->iovs_num - iov_offset, NULL,
> > +                                              0, p->write_flags, &local_err);
>
> I kind of agree with Dan in previous patch - this iov_offset is confusing,
> better drop it.

Sure, fixed for v10.
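As background for readers: the write_flags passed to qio_channel_writev_full_all() ultimately select the Linux MSG_ZEROCOPY path in the socket channel. A minimal standalone sketch of that kernel interface, outside of QEMU (send_maybe_zerocopy and run_loopback_demo are invented names; the fallback behavior on unsupported kernels is an assumption of this demo, not QEMU's policy):

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

/* Try a zero-copy send, falling back to an ordinary (copying) send
 * when the kernel or socket type doesn't support MSG_ZEROCOPY. */
static ssize_t send_maybe_zerocopy(int fd, const void *buf, size_t len)
{
    int one = 1;

    if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) == 0) {
        ssize_t ret = send(fd, buf, len, MSG_ZEROCOPY);
        if (ret >= 0) {
            /* The pages stay pinned; a real sender must later drain
             * completion notifications from the socket's MSG_ERRQUEUE
             * before reusing the buffer -- that drain is the "flush"
             * discussed above. */
            return ret;
        }
    }
    return send(fd, buf, len, 0);
}

/* Loopback self-test: returns 0 if the bytes round-tripped intact. */
static int run_loopback_demo(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    socklen_t alen = sizeof(addr);
    char msg[] = "page data", rcv[sizeof(msg)];
    size_t got = 0;
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int cfd = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (lfd < 0 || cfd < 0 ||
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lfd, 1) < 0 ||
        getsockname(lfd, (struct sockaddr *)&addr, &alen) < 0 ||
        connect(cfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        return -1;
    }
    int sfd = accept(lfd, NULL, NULL);
    if (sfd < 0 || send_maybe_zerocopy(cfd, msg, sizeof(msg)) != sizeof(msg)) {
        return -1;
    }
    while (got < sizeof(msg)) {
        ssize_t n = read(sfd, rcv + got, sizeof(rcv) - got);
        if (n <= 0) {
            return -1;
        }
        got += n;
    }
    close(cfd);
    close(sfd);
    close(lfd);
    return memcmp(rcv, msg, sizeof(msg)) == 0 ? 0 : -1;
}
```

MSG_ZEROCOPY needs a TCP (or UDP) socket, which is why the demo uses a loopback connection rather than a socketpair.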

>
[...]
> --
> Peter Xu
>

Best regards,
Leo



