Re: [Qemu-devel] [PATCH] QMP: Spec: Private Extensions support
Mon, 22 Feb 2010 14:06:58 +0100
Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)
Anthony Liguori <address@hidden> writes:
> On 02/19/2010 07:04 AM, Markus Armbruster wrote:
>> Anthony Liguori <address@hidden> writes:
>>> We need a bit more than just this. Here's my suggestion:
>> I think this is much more restrictive than necessary. Unnecessarily
>> restrictive rules are more likely to be ignored, and we don't want that.
>> Details below.
> I disagree. The point of QMP is to allow third party tools to manage
> qemu in a robust way. Compatibility is a key part of this.
> Downstream QMP extension is a bad thing. It means management tools
> have to have a lot of special logic to deal with different
> downstreams. I believe we need to strongly discourage downstreams
> from making protocol changes while leaving them enough wiggle room to
> make the changes when they have no other choice.
Wearing my upstream hat: Yes, downstream extensions are undesirable.
Yes, it makes sense to discourage them. And yes, downstreams will be
occasionally compelled to add extensions anyway.
The only truly normative bit in the whole text is our promise to reserve
the "__" prefix for vendors. The rest is merely advice. We have no
power whatsoever to make downstream take it. Upstream has to lead by
argument and example.
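The "__" reservation is the one normative piece: upstream promises never to use names in that space, so downstream names can never collide with upstream ones. A minimal sketch of what the reservation buys (the helper names and the example downstream command name are illustrative, not taken from the spec text):

```python
# Hypothetical helpers illustrating the "__" reservation: because upstream
# never introduces names starting with "__", a downstream can safely claim
# e.g. "__com.redhat_drive-mirror" without risk of a future clash.
def is_vendor_name(name: str) -> bool:
    """True if a QMP name lies in the space reserved for downstreams."""
    return name.startswith("__")

def is_upstream_safe(name: str) -> bool:
    """True if upstream may introduce this name without breaking the promise."""
    return not name.startswith("__")
```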
Putting advice forward in the form of arbitrary "rules" is a good way to
ensure it's ignored. Advice needs a rationale. The stronger the
recommendation, the more convincing its rationale needs to be.
We already established rules for protocol evolution: we won't change
existing commands, arguments, and so forth; we won't add new mandatory
arguments; we may add optional arguments, new errors, ... If these
rules are good enough for us, why aren't they good enough for
downstream? I don't think we can expect downstream to follow rules we
don't intend to follow ourselves.
What follows is merely elaborating on these maxims.
>>> 6. Downstream modification of QMP
>>> We strongly recommend that downstream consumers of QEMU do _not_
>>> modify the behaviour of any QMP command or introduce new QMP commands.
>>> This is important to allow management tools to support both upstream and
>>> downstream versions of QMP without special logic.
>>> However, we recognize that it is sometimes impossible for downstreams to
>>> avoid modifying QMP. If this happens, the following rules should be
>> Bad line break.
>>> followed to attempt to preserve long term compatibility and interoperability.
>> Better move your items 5-6) here, and make them talk about "names", not
>> just commands. Items 1-4) make much more sense with the name space
>> issue settled.
> Good suggestion.
>>> 1) Only introduce new commands. If you want to change an existing command,
>>> leave the old command in place and introduce a new one with the new behaviour.
>> Extending an existing command in a compatible way should be fine. An
>> extension is "compatible" if
>> 1. it adds only optional arguments, and
>> 2. it behaves differently only when new optional arguments are supplied.
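This definition of "compatible" can be sketched as plain wire messages (written here as Python dicts; the handler and the vendor-prefixed `__org.example_speed` argument are illustrative, not real QMP):

```python
import json

# Hypothetical server-side handler for "commit" after a downstream adds an
# optional "__org.example_speed" argument.  Per rule 2 above, the command
# behaves differently *only* when the new optional argument is supplied.
def handle_commit(arguments):
    device = arguments["device"]                   # existing mandatory argument
    speed = arguments.get("__org.example_speed")   # new optional argument
    if speed is None:
        return {"return": {}}                      # unchanged legacy behaviour
    return {"return": {"throttled-to": speed}}

# An old client's request, unaware of the extension, still works unchanged:
old = json.loads('{"execute": "commit", "arguments": {"device": "ide0-hd0"}}')
assert handle_commit(old["arguments"]) == {"return": {}}
```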
> From an implementation perspective, it's very easy to modify an
> existing command, call it a new name, then add a new command with the
> old name that passes only the compatible parameters. So I have a hard
> time believing this is really a burden for downstreams.
> Think about it from an end-to-end perspective. If I'm writing a
> client library, and I expose a function qmp_commit(session, disk,
> force=false), and the QMP protocol introduces a vendor extension to
> qmp_commit(), I now need to change the qmp_commit() interface in my
> library to support that new parameter. Except since this is a vendor
> parameter, now every caller has to have knowledge about dealing with
> different vendors.
> On the other hand, a qmp_rhel_commit() command happily lives in a
> separate namespace. In fact, in a well written client library, it
> could easily be a plugin. The normal library consumer never has to
> think about vendor functions unless they explicitly need a function
> that only lives in a vendor namespace.
With a bit more precision: consider monitor command "commit", available
in QMP (not yet true, but that doesn't matter). Say we have a client
library that provides function qmp_commit(), which is merely a
straightforward marshalling wrapper around the QMP command.
1. The client library and its users continue to work unchanged even
when the server (QEMU) adds vendor-specific optional arguments.
2. The client library may elect to expose vendor-specific arguments in
qmp_commit() to its users.
3. The client library may elect to provide a vendor-specific wrapper
qmp_<vendor>_commit() around QMP command commit that exposes
vendor-specific arguments. In fact, that could easily be a plugin.
4. Regardless of whether the client library does 2., 3. or neither, its
unmodified users continue to work unchanged.
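The layering in points 1-3 can be sketched as follows (all names here, including qmp_commit() and the vendor argument, are illustrative, not a real client library API):

```python
# Core wrapper plus an optional vendor plugin in a separate namespace.
def send(session, execute, arguments):
    # stand-in for the real transport; records what would go on the wire
    session.append({"execute": execute, "arguments": arguments})
    return {"return": {}}

def qmp_commit(session, device, **extra):
    # point 2: callers *may* pass vendor-specific optional arguments,
    # but the core signature never has to change on their account (point 1)
    return send(session, "commit", {"device": device, **extra})

def qmp_example_commit(session, device, speed):
    # point 3: a vendor-specific wrapper in its own namespace; in a well
    # written client library this could easily live in a plugin
    return qmp_commit(session, device, **{"__org.example_speed": speed})
```

Point 4 falls out for free: a caller using only qmp_commit(session, device) never sees the vendor pieces at all.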
I don't think we have convincing rationale for a "rule" here.
>>> 2) Only introduce new asynchronous messages. Do not change an
>>> existing message.
>> Why outlaw adding new data members? What would that break?
> It's the same argument as above. If you add new data members, you
> expose vendor extensions to the core protocol which has to propagate
> all the way down to the end user.
I don't think so.
As everywhere else, new members are "opt-in". Any robust QMP client
already has to ignore members it doesn't know, to have a chance to work
with multiple upstream versions.
In fact, I can't see any qualitative difference between dealing with
multiple upstream versions and dealing with vendor-specific extensions
in different downstream versions.
Back to events. Say I want to provide some additional information with
an existing event. Admittedly contrived example: add a simple error
code to BLOCK_IO_ERROR, to further explain the nature of the error. Two
options: 1. add it to the existing event, and 2. create a new event and
send it in addition to the old one (I find that clumsy). Non-option,
because it breaks clients that know only BLOCK_IO_ERROR: create a new
event and send it instead of BLOCK_IO_ERROR.
Say I'm working upstream. I'd surely do 1.
Say I'm working downstream. Your "rule" quite arbitrarily demands I do
2. instead. I say "arbitrarily", because I find the rationale for it
unconvincing.
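The two options, seen from the server side (the error-code member and the companion event name are illustrative, not real QMP events):

```python
# Contrived errno example: two compatible ways to ship the extra datum.
def emit_option_1(device, errno):
    # option 1: extend the existing event with an opt-in member
    return [{"event": "BLOCK_IO_ERROR",
             "data": {"device": device, "__org.example_errno": errno}}]

def emit_option_2(device, errno):
    # option 2: keep BLOCK_IO_ERROR unchanged and send a companion event
    # *in addition* (never *instead*, which would break old clients)
    return [{"event": "BLOCK_IO_ERROR", "data": {"device": device}},
            {"event": "__ORG.EXAMPLE_IO_ERRNO",
             "data": {"device": device, "errno": errno}}]
```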
>>> 3) Only introduce new error types and only use those error types in
>>> new commands.
>>> New commands can use existing error types, but you should never add a
>>> new error type to an existing command.
>> A client that can't cope with unknown error types will only ever work on
>> a single upstream version. Such clients will go the way of the Dodo
>> real quick.
>> With that established: why is adding new error types bad?
> Same as above. The core API should not behave any differently in a
> downstream. Clients who only use the core API should not have to
> adapt their behaviour for downstreams.
We explicitly require clients to cope with new errors (qmp-spec.txt
section 5 Compatibility Considerations). Why reserve the privilege to
use that flexibility to upstream?
> With respect to ignoring error types, I disagree. There's certainly
> going to be a class (perhaps the majority) of clients that treat
> errors roughly the same. However, there is going to be a class of
> clients that very clearly handle errors in special ways depending on
> the error type.
And even they will need a fallback for unknown errors, to avoid tight
coupling to a specific upstream version.
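Such a fallback might look like this (the special-cased handling and its return values are illustrative; the response shape follows QMP's error object with "class" and "desc" members):

```python
# Even a client that special-cases some error classes needs a generic
# fallback, or it couples itself tightly to one exact server version.
def handle_error(response):
    error_class = response["error"]["class"]
    if error_class == "DeviceNotFound":
        return "retry-with-different-device"       # special-cased error
    # fallback: unknown or new error classes (upstream or downstream)
    # are reported generically instead of crashing the client
    return "report: " + response["error"]["desc"]
```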
>>> 4) Do not introduce any new capabilities. Capabilities directly
>>> affect a client's ability to parse the protocol correctly, and new
>>> capabilities cannot be supported in an upstream-compatible way.
>>> Please work capabilities through upstream first.
>> This is factually incorrect. Clients that don't recognize new
>> capabilities just ignore them, and continue to work as before.
> I think you misinterpreted my statement. Capabilities absolutely affect
> the client's ability to parse the protocol if you enable them. If you
> support downstream specific capabilities, then knowledge of downstream
> needs to be very deep within any client library. That's an
> unnecessary level of complexity IMHO.
Capabilities are "opt-in".
Supporting downstream-specific new capabilities is no different to
supporting new capabilities added by upstream. Why privilege upstream?
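"Opt-in" here means the client enables only capabilities it recognizes from the server's greeting, so an unknown one, vendor or newer-upstream, is simply never switched on. A sketch (the capability names are illustrative; the greeting shape follows the QMP handshake):

```python
# Select which advertised capabilities to enable: the intersection of
# what the server offers and what this client understands.
def choose_capabilities(greeting, supported):
    offered = greeting["QMP"]["capabilities"]
    return [c for c in offered if c in supported]
```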
> Capabilities will be rare. I doubt a downstream would ever reasonably
> need to add a capability that isn't upstream.
I quite agree that new capabilities are the least desirable of all
downstream extensions.
> N.B. backporting upstream capabilities/commands/error messages is not
> what we're talking about here. All of those things are perfectly fine
> without using a vendor namespace. We're specifically talking about
> adding new features that are not present upstream. I think it's
> completely reasonable to suggest that capabilities should be done
> upstream first.
"Upstream first" is sensible policy, and as with all sensible policies,
it's practical policy in most cases (provided you make an honest
effort), but not in all cases.