From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH v3 04/13] nbd: convert to using I/O channels for actual socket I/O
Date: Tue, 19 Jan 2016 16:52:41 +0000
User-agent: Mutt/1.5.24 (2015-08-30)

On Tue, Jan 19, 2016 at 05:08:12PM +0100, Paolo Bonzini wrote:
> 
> 
> On 19/01/2016 14:09, Daniel P. Berrange wrote:
> > + * This will update @iov so that its head is advanced
> > + * by @skip bytes. To do this, zero or more complete
> > + * elements of @iov will be skipped over. The new head
> > + * of @iov will then have its base & len updated to
> > + * skip the remaining number of bytes. @oldiov will
> > + * hold the original data from the new head of @iov.
> > + */
> > +static void nbd_skip_iov(size_t skip,
> > +                         struct iovec **iov,
> > +                         int *niov,
> > +                         struct iovec *oldiov)
> >  {
> > -    size_t offset = 0;
> > -    int err;
> > -
> > -    if (qemu_in_coroutine()) {
> > -        if (do_read) {
> > -            return qemu_co_recv(fd, buffer, size);
> > +    ssize_t done = 0;
> > +    size_t i;
> > +    for (i = 0; i < *niov; i++) {
> > +        if ((*iov)[i].iov_len <= skip) {
> > +            done += (*iov)[i].iov_len;
> > +            skip -= (*iov)[i].iov_len;
> >          } else {
> > -            return qemu_co_send(fd, buffer, size);
> > +            done += skip;
> > +            oldiov->iov_base = (*iov)[i].iov_base;
> > +            oldiov->iov_len = (*iov)[i].iov_len;
> > +            (*iov)[i].iov_len -= skip;
> > +            (*iov)[i].iov_base += skip;
> > +            *iov = *iov + i;
> > +            *niov = *niov - i;
> > +            break;
> >          }
> >      }
> 
> Can you use iov_discard_front instead?

I didn't know that existed - I'll investigate whether it is
possible to use it.
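From a quick look at include/qemu/iov.h it walks the array in much the
same way; the one piece it doesn't give me is the saved copy of the
original head element, but that problem goes away if the caller works
on a local copy of the iovec array instead. Something like this,
untested, just to convince myself it would fit:

    /* Work on a copy of the caller's iovec array, since
     * iov_discard_front() modifies the elements in place */
    struct iovec *local_iov = g_new(struct iovec, niov);
    struct iovec *local_iov_head = local_iov;
    unsigned int nlocal_iov = niov;

    memcpy(local_iov, iov, sizeof(struct iovec) * niov);
    iov_discard_front(&local_iov, &nlocal_iov, skip);

    /* ...do the actual I/O against local_iov / nlocal_iov... */

    g_free(local_iov_head);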

> > +                /* XXX see about using qio_channel_yield() in
> > +                 * future if we can use normal watches instead
> > +                 * of the AIO handlers */
> > +                qemu_coroutine_yield();
> 
> This breaks dataplane.  The simplest solution is to change
> qio_channel_yield() to just
> 
>     qio_channel_aio_yield(ioc, qemu_get_aio_context(), condition);
> 
> where qio_channel_aio_yield(QIOChannel*, AioContext *, int) is
> implemented by the backends.  The implementation could then use
> aio_set_fd_handler, and it would be almost trivial because it would
> reuse common code just like the one in io/channel-watch.c.
> 
> Websock is the only complicated case.  I think it's fine if you simply
> must not use qio_channel_{,aio_}yield() for websock channels.
> 
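For the socket backend at least that looks like it would just be the
usual pattern of registering an fd handler that re-enters the waiting
coroutine, yielding, and then removing the handler again. Roughly as
below - untested, function names are just placeholders, and the
aio_set_fd_handler() arguments are from memory:

    static void qio_channel_aio_yield_enter(void *opaque)
    {
        Coroutine *co = opaque;
        qemu_coroutine_enter(co, NULL);
    }

    static void qio_channel_socket_aio_yield(QIOChannel *ioc,
                                             AioContext *ctx,
                                             GIOCondition condition)
    {
        QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);

        /* Wake this coroutine up when the socket becomes ready */
        aio_set_fd_handler(ctx, sioc->fd, false,
                           (condition & G_IO_IN) ?
                               qio_channel_aio_yield_enter : NULL,
                           (condition & G_IO_OUT) ?
                               qio_channel_aio_yield_enter : NULL,
                           qemu_coroutine_self());
        qemu_coroutine_yield();
        aio_set_fd_handler(ctx, sioc->fd, false, NULL, NULL, NULL);
    }
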
> That said, I'm not sure you can even rely on nbd_negotiate_continue to
> run in the main I/O thread.  So, another possibility is to add an
> alternative watch API similar to AioContext's:
> 
>     qio_channel_set_handler(QIOChannel*, AioContext*, void(*in)(void*),
>                             void(*out)(void*), void *opaque)
> 
> whose implementation would again be simple for backends other than
> websock, and probably trivial for backends other than buffer and
> websock.  Again, it wouldn't be a problem to skip buffer and websock for
> the time being (read, until someone actually needs it).

I'll have a look at this and see what I can do. I assume it shouldn't
block merging of this patch, since the current call to
qemu_coroutine_yield() should be fine - it is just my comment's
suggestion that would not work. So I'll investigate what you suggest
here wrt AioContext & QIOChannel in a later followup patch.
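For backends other than buffer and websock I'd expect the set_handler
variant to be little more than a direct mapping onto
aio_set_fd_handler() as well, e.g. for the socket backend (again an
untested sketch with placeholder names):

    static void qio_channel_socket_set_handler(QIOChannel *ioc,
                                                AioContext *ctx,
                                                void (*in)(void *opaque),
                                                void (*out)(void *opaque),
                                                void *opaque)
    {
        QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);

        /* Passing NULL for in/out removes the respective handler */
        aio_set_fd_handler(ctx, sioc->fd, false, in, out, opaque);
    }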

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


