qemu-block

From: Daniel P. Berrangé
Subject: Re: [PATCH 9/9] qemu-file: Account for rate_limit usage on qemu_fflush()
Date: Fri, 5 May 2023 08:19:04 +0100
User-agent: Mutt/2.2.9 (2022-11-12)

On Thu, May 04, 2023 at 07:22:25PM +0200, Juan Quintela wrote:
> Daniel P. Berrangé <berrange@redhat.com> wrote:
> > On Thu, May 04, 2023 at 01:38:41PM +0200, Juan Quintela wrote:
> >> That is the moment we know we have transferred something.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/qemu-file.c | 7 +++----
> >>  1 file changed, 3 insertions(+), 4 deletions(-)
> >> 
> >> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> >> index ddebfac847..309b4c56f4 100644
> >> --- a/migration/qemu-file.c
> >> +++ b/migration/qemu-file.c
> >> @@ -300,7 +300,9 @@ void qemu_fflush(QEMUFile *f)
> >>                                     &local_error) < 0) {
> >>              qemu_file_set_error_obj(f, -EIO, local_error);
> >>          } else {
> >> -            f->total_transferred += iov_size(f->iov, f->iovcnt);
> >> +            uint64_t size = iov_size(f->iov, f->iovcnt);
> >> +            qemu_file_acct_rate_limit(f, size);
> >> +            f->total_transferred += size;
> >>          }
> >>  
> >>          qemu_iovec_release_ram(f);
> >> @@ -527,7 +529,6 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
> >>          return;
> >>      }
> >>  
> >> -    f->rate_limit_used += size;
> >>      add_to_iovec(f, buf, size, may_free);
> >>  }
> >>  
> >> @@ -545,7 +546,6 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
> >>              l = size;
> >>          }
> >>          memcpy(f->buf + f->buf_index, buf, l);
> >> -        f->rate_limit_used += l;
> >>          add_buf_to_iovec(f, l);
> >>          if (qemu_file_get_error(f)) {
> >>              break;
> >> @@ -562,7 +562,6 @@ void qemu_put_byte(QEMUFile *f, int v)
> >>      }
> >>  
> >>      f->buf[f->buf_index] = v;
> >> -    f->rate_limit_used++;
> >>      add_buf_to_iovec(f, 1);
> >>  }
> >
> > This introduces a slight behavioural change.
> 
> Yeap.
> 
> See the answer to Peter.  But three things came to mind:
> 
> a - the size of the buffer is small (between 32KB and 256KB, depending
>     on how you count it), so we are going to call qemu_fflush() really
>     soon anyway.
> 
> b - We are using this value to calculate how much we can send through
>     the wire.  Here we are saying how much we have accepted to send.
> 
> c - When using multifd, the number of bytes that we send through the
>     qemu file is even smaller: the migration-test multifd test sends
>     300MB of data through the multifd channels and only around 300KB
>     on the qemu_file channel.
> 
> 
> >
> > By accounting for rate limit in the qemu_put functions, we ensure
> > that we stop growing the iovec when rate limiting activates.
> >
> > If we only apply the rate limit in the flush function, that will
> > let f->iov continue to accumulate buffers even while we have
> > rate limited the actual transfer.
> 
> 256KB maximum.  Our accounting has bigger errors than that.
> 
> 
> > This makes me uneasy - it feels like a bad idea to continue to
> > accumulate buffers if we're not ready to send them.
> 
> I still think that the change is correct.  But as you and Peter have
> concerns about it, I will think a bit more about it.

If Peter's calculations are correct, then I don't have any objection,
as that's a small overhead.
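For the archive, here is a minimal standalone sketch of the behavioural
difference under discussion. The struct and function names are illustrative
only, not the real QEMUFile code: with put-time accounting the limiter
trips as soon as data is queued, while with flush-time accounting it can
undercount by at most one buffer's worth until the next flush.

/* Illustrative sketch only - not the actual QEMU migration code. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define IO_BUF_SIZE (256 * 1024)   /* upper bound Juan mentions above */

typedef struct {
    uint64_t rate_limit_used;   /* bytes charged against the limit */
    uint64_t rate_limit_max;    /* bytes allowed this period */
    uint64_t buffered_bytes;    /* queued but not yet flushed */
} SketchFile;

/* Pre-patch behaviour: charge the limit as soon as data is queued. */
static void put_buffer_old(SketchFile *f, uint64_t size)
{
    f->rate_limit_used += size;   /* accounted immediately */
    f->buffered_bytes += size;
}

/* Post-patch behaviour: only queue; accounting happens at flush time. */
static void put_buffer_new(SketchFile *f, uint64_t size)
{
    f->buffered_bytes += size;    /* not yet charged */
}

static void fflush_new(SketchFile *f)
{
    f->rate_limit_used += f->buffered_bytes;  /* charged in one go */
    f->buffered_bytes = 0;
}

static bool rate_limit_exceeded(const SketchFile *f)
{
    return f->rate_limit_used >= f->rate_limit_max;
}

int main(void)
{
    SketchFile f_old = { .rate_limit_max = 100 * 1024 };
    SketchFile f_new = { .rate_limit_max = 100 * 1024 };

    /* Queue two 90KB buffers under each scheme. */
    put_buffer_old(&f_old, 90 * 1024);
    put_buffer_old(&f_old, 90 * 1024);
    put_buffer_new(&f_new, 90 * 1024);
    put_buffer_new(&f_new, 90 * 1024);

    /* Old scheme: the limiter already sees 180KB and trips. */
    printf("old, before flush: %d\n", rate_limit_exceeded(&f_old)); /* 1 */
    /* New scheme: nothing charged yet, so the limiter stays quiet... */
    printf("new, before flush: %d\n", rate_limit_exceeded(&f_new)); /* 0 */
    /* ...until the flush charges the whole iovec at once. */
    fflush_new(&f_new);
    printf("new, after flush:  %d\n", rate_limit_exceeded(&f_new)); /* 1 */
    return 0;
}

So the error is bounded by the buffer size (at most IO_BUF_SIZE per
flush), which matches Juan's point (a) above.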

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|