lilypond-devel

Re: Syntax change proposal:


From: David Kastrup
Subject: Re: Syntax change proposal:
Date: Mon, 16 Jul 2012 10:18:39 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1.50 (gnu/linux)

Graham Percival <address@hidden> writes:

> On Mon, Jul 16, 2012 at 02:02:31AM +0200, David Kastrup wrote:
>> 
>> One really ugly problem is interpreting things like "4.".  Looks like a
>> duration, but then we have
>> input/regression/dynamics-broken-hairpin.ly:  line-width = 4.\cm
>
> I am against making a change like this outwith[1] of GLISS.  It
> could involve a lot of user pain (and documentation-editing
> pain!), so I think it's important to at least pretend[2] to have
> good user consultation beforehand.

I disagree with "a lot of user pain".

> [1] that's the only part of Scottish vocabulary or accent that I've
> been able to pick up, so I'm sticking to it.
>
> [2] the phrase "at least pretend" doesn't mean that we will _only_
> be pretending.
>
>
> As far as the actual proposal goes, I'm generally in favor.

Well, it probably does not help my case that I am putting this proposal
forward in isolation, which does not actually reflect the tradeoffs
that are involved here.

The key point is being able to employ user-definable functions in a wide
variety of situations.  Right now,

var=-.

#(display var)

\layout { var=-. #(display var) }

actually displays

#<Prob: Music C++: Music((articulation-type . staccato)
  (origin . #<location /tmp/xxx.ly:1:5>))
 ((display-methods #<procedure #f (event parser)>)
  (name . ArticulationEvent)
  (types general-music post-event event articulation-event script-event)) >
0.0

meaning that in a toplevel assignment, -. is a staccato articulation,
while inside of a layout block it is the number 0.0.  That is weird
enough on its own, but as I increasingly call Scheme functions for doing
things, the same diverging interpretations apply to their arguments, and
by now we have several music functions/commands, like \accidentalStyle
or \tempo, that make sense in output definitions as well.
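
For instance, \accidentalStyle is meant to be usable inside an output
definition as well, roughly like this (just a sketch, not taken from any
actual file):

\layout {
  \context {
    \Staff
    \accidentalStyle modern
  }
}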

It would seem like an option to switch into music mode while scanning
the arguments of a music function.  However, when to switch back again?
For something like

\displayMusic c4

I can't start interpreting the command before making sure that it is not
followed by further things like a dot making the note value longer.  By
the time I have taken a look at the lookahead token in music mode and
decided that it does not belong there anyway, I am no longer free to
scan the underlying characters in a different mode, possibly generating
an entirely different token.
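
To make the lookahead issue concrete (a minimal sketch):

\displayMusic c4    % cannot act on this yet: a dot might still follow
\displayMusic c4.   % ...and here one does, making it a dotted quarter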

One problem I currently struggle with is supporting something like
\tempo 4. = 200
while trying to recognize 4. in the scanner already, obliterating the
need for parser lookahead.
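
Side by side, the two competing readings of the same "4." (a sketch; the
line-width setting paraphrases the regression test quoted above, and the
reading is the same whether it sits in \paper or \layout):

\tempo 4. = 200                 % "4." read as a duration (dotted quarter)
\paper { line-width = 4.\cm }   % "4." read as the real number 4.0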

The less mode-dependent the interpretation of a character sequence as
tokens is, the fewer surprises the user will get when using music and
Scheme functions across different modes.

Another thing in that area that I am trying to tackle is making event
functions and music functions equivalent, so that things like

c\tweak #'color #'red \p

work just like

x=\tweak #'color #'red \p

{ c\x }

already does, without requiring one to write

c-\tweak #'color #'red \p

Again, this is an artifact of music being only recognizable with
lookahead, requiring premature decisions in some circumstances.

I am able to trick around some of these limitations, but the cost in
parser complexity is rather hefty, and it does not make for code that is
easy to debug or maintain.

While I agree that it does not seem worth artificially introducing
incompatibilities with previous behavior, this change is likely going to
be the least-total-pain alternative for both users and developers.

The problem with turning things like this into a GLISS aspect is that
they emerge as consequences of ongoing incremental work on extending the
lexer/parser, in connection with the limitations of the table-driven
lexer and parser, which place hard constraints on the kinds of constructs
they can and cannot support.  So apart from the general observation "our
syntax is too complicated for our own and the lexer/parser's good", the
concrete changes that are required as consequences of making other parts
of the syntax more consistent are hard to plan ahead.  Usually they
arise after staring for days at parser state tables, figuring out the
answer to "_now_ what happened?", and asking myself "is the reason for
that happening worth the trouble?".

And things like letting -. be the number 0.0 in some contexts (actually,
why not -0.0?) don't strike me as the kind of thing that warrants lots
of pain to support.  Similarly with 4. being interpreted either as a
duration of 3/8 (a dotted quarter: 1/4 + 1/8) or as a real of value 4.0.
Being able to reduce the mode-dependency of at least the tokenization
helps make the language more predictable.

But as we are currently really at the limit of what the parser/lexer can
usefully be made to support, those limits shape the desirability and
also the urgency of some changes, and thus, short of a complete redesign
from the ground up, they are not really plannable ahead as one chunk of
GLISS.

I agree that "let's change the syntax incompatibly, it may be useful" is
not a particularly convincing argument.  Neither is "just trust me, this
will hurt, but be better in the long run".  And "I have worked on
hundreds of related changes with a lot of consequences as one large
package for months with the following new restrictions and
possibilities, let's say yes or no" is also problematic.  And often
"can't we continue to support xxx" has the answer "theoretically
possible at considerable cost to the parser and with hard to understand
resolution of ambiguities".

I am pretty good at parser work, but I am not comfortable writing code
that I am at a loss to explain to others.  And I often write stuff up to
the proof-of-concept stage, and then decide to scrap it because it is
just too shaky.

I don't have an idea how to bring this sort of experience and expertise
into a fair and democratic decision-making process.

Isolated things of the "do we really need this?" category, like this
one, are somewhat possible to argue on "their own merits", but it is
hard to argue for them on behalf of the "great scheme of things" where
they matter as well: the exact impact would only be estimable once an
implementation not relying on the pesky "feature" is available and can
be compared in its complexity to other approaches, which are also far
from simple.

The alternative is to just wrap them in a proposal of a larger change
and say "oh, by the way, this will also have the following 15 little
incompatibilities which were less convenient to work around than
warranted" and risk a big fat "no".

I arrive at some of my opinions mostly after discussing them in depth
with Bison and Flex, and I don't really have much of a clue how to
integrate them into a balanced development process.  And it is not like
Bison and Flex are easily overruled once they have made up their mind
about something.

-- 
David Kastrup



