Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Date: Fri, 15 Sep 2017 15:32:39 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

On Fri, Sep 15, 2017 at 03:29:46PM +0100, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrange (address@hidden) wrote:
> > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > * Daniel P. Berrange (address@hidden) wrote:
> > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Daniel P. Berrange (address@hidden) wrote:
> > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > buffering I think. Now, should oob commands still be processed
> > > > > > > > > > even if the queue is full? If so, the thread can't be suspended.
> > > > > > > > > 
> > > > > > > > > I agree.
> > > > > > > > > 
> > > > > > > > > Memory usage must be bounded.  The number of requests is less
> > > > > > > > > important than the amount of memory consumed by them.
> > > > > > > > > 
> > > > > > > > > Existing QMP clients that send multiple QMP commands without
> > > > > > > > > waiting for replies need to rethink their strategy because OOB
> > > > > > > > > commands cannot be processed if queued non-OOB commands consume
> > > > > > > > > too much memory.
> > > > > > > > 
> > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is
> > > > > > > > valid, as Markus pointed out as well in previous discussions (in
> > > > > > > > the "Flow Control" section of that long reply).  Hopefully this
> > > > > > > > series basically works from a design perspective; then I'll add
> > > > > > > > this flow control in the next version.
> > > > > > > > 
> > > > > > > > Regarding what we should do if the limit is reached: Markus
> > > > > > > > provided a few options, but the one I prefer most is that we don't
> > > > > > > > respond, but send an event showing that a command was dropped.
> > > > > > > > However, I would like it not to be queued, but sent as a direct
> > > > > > > > reply (after all, it's an event, and we should not need to care
> > > > > > > > much about its ordering).  Then we can get rid of babysitting
> > > > > > > > those "to be failed" requests asap, while we don't lose anything
> > > > > > > > IMHO.
> > > > > > > > 
> > > > > > > > I think I have also missed at least a unit test for this new
> > > > > > > > interface.  Again, I'll add it once the whole idea is proven
> > > > > > > > solid.  Thanks,
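
(A minimal Python sketch of the drop-and-notify behaviour described above,
assuming a socket-like connection object. The event name "COMMAND_DROPPED",
its fields, and the fixed queue limit are assumptions made for illustration
only, not an interface defined by this series.)

import json
from collections import deque

QUEUE_LIMIT = 10          # assumed fixed limit, as floated later in this thread
request_queue = deque()   # queued non-OOB requests awaiting the monitor

def emit_event(conn, name, data):
    # Events are written straight to the socket rather than queued, so the
    # "dropped" notification goes out as soon as the overflow is detected.
    conn.sendall(json.dumps({"event": name, "data": data}).encode() + b"\n")

def enqueue_request(conn, req):
    # Queue a parsed request, or drop it and notify the client immediately.
    if len(request_queue) >= QUEUE_LIMIT:
        emit_event(conn, "COMMAND_DROPPED",
                   {"id": req.get("id"), "reason": "queue-full"})
        return False
    request_queue.append(req)
    return True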
> > > > > > > 
> > > > > > > Another solution: the server reports available receive buffer space
> > > > > > > to the client.  The server only guarantees immediate OOB processing
> > > > > > > when the client stays within the receive buffer size.
> > > > > > > 
> > > > > > > Clients wishing to take advantage of OOB must query the receive
> > > > > > > buffer size and make sure to leave enough room.
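
(A client-side Python sketch of the buffer-budget idea above: query the
server's receive buffer size once, then keep the total size of unanswered
requests below it so OOB commands are never stalled. The command name
"query-receive-buffer" and the qmp helper object with execute()/send()
methods are assumptions for illustration; no such QMP command exists in
this series.)

import json

class OobAwareClient:
    def __init__(self, qmp):
        self.qmp = qmp
        # One-off query of how much the server is willing to buffer (bytes).
        self.budget = qmp.execute("query-receive-buffer")["size"]
        self.pending = {}     # command id -> size of that request on the wire
        self.in_flight = 0    # total bytes of requests not yet answered

    def send(self, cmd):
        payload = json.dumps(cmd).encode()
        if self.in_flight + len(payload) > self.budget:
            raise RuntimeError("would overflow the server's receive buffer; "
                               "wait for outstanding replies first")
        self.pending[cmd["id"]] = len(payload)
        self.in_flight += len(payload)
        self.qmp.send(payload)

    def reply_received(self, reply):
        # Free up budget once the server has answered the matching request.
        self.in_flight -= self.pending.pop(reply["id"], 0)

As noted in the reply below, the queried budget is only a snapshot, so a
scheme like this remains racy unless the server also reports changes.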
> > > > > > 
> > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > and of course it is inherently racy.
> > > > > > 
> > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > of the last queued command, the client will know which (if any) of
> > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > it restarts reading.
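
(A rough illustration, in the same Python register, of the two events
suggested above; the event names, field names, and the example ID are
invented purely for illustration.)

# Shape the server might emit when it stops reading because the queue is full.
QUEUE_FULL_EVENT = {
    "event": "COMMAND_QUEUE_FULL",
    "data": {"last-queued-id": "req-42"},   # last command that was accepted
}

# Shape it might emit when it starts reading from the wire again.
QUEUE_READY_EVENT = {
    "event": "COMMAND_QUEUE_READY",
    "data": {},
}

def on_event(client, ev):
    # Commands issued after "last-queued-id" are known to be delayed until
    # the "ready" event arrives and reading resumes.
    if ev["event"] == "COMMAND_QUEUE_FULL":
        client.paused_after = ev["data"]["last-queued-id"]
    elif ev["event"] == "COMMAND_QUEUE_READY":
        client.paused_after = None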
> > > > > 
> > > > > Hmm and now we're implementing flow control!
> > > > > 
> > > > > a) What exactly is the current semantics/buffer sizes?
> > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > waiting for the response to the previous command?
> > > > > c) Would one queue entry for each class of commands/channel work?
> > > > >    (Where a class of commands is currently 'normal' and 'oob'.)
> > > > 
> > > > I do wonder if we need to worry about request limiting at all from the
> > > > client side.  For non-OOB commands, clients will wait for a reply before
> > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > 
> > > > OOB commands are supposed to be things which can be handled quickly
> > > > without blocking, so even if a client sent several commands at once
> > > > without waiting for replies, they're going to be processed quickly,
> > > > so whether we temporarily block reading off the wire is a minor
> > > > detail.
> > > 
> > > Let's just define it so that it can't - you send an OOB command and wait
> > > for its response before sending another on that channel.
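
(A small Python sketch of that client-side discipline: at most one
outstanding OOB command per channel. The conn object and its execute()
signature are assumptions for illustration, not an existing QMP client API.)

import threading

class OneOutstandingOob:
    def __init__(self, conn):
        self.conn = conn
        self._lock = threading.Lock()  # serialises OOB commands on this channel

    def execute_oob(self, name, args=None):
        # The lock is held until the reply arrives, so the next OOB command
        # cannot be sent before the previous one has been answered.
        with self._lock:
            return self.conn.execute(name, args, oob=True)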
> > > 
> > > > IOW, I think we could just have a fixed 10-command queue, and apps could
> > > > just pretend that there's an infinite queue and nothing bad would happen
> > > > from the app's POV.
> > > 
> > > Can you justify 10 as opposed to 1?
> > 
> > Semantically I don't think it makes a difference if the OOB commands are
> > being processed sequentially by their thread. A >1 length queue would only
> > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > requests, as then that could block reading of OOB commands. 
> 
> But can't we just tell the app not to?

Yes, a sensible app would not do that. So this feels like mostly a documentation
problem.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


