Re: articulations on EventChord

From: Janek Warchoł
Subject: Re: articulations on EventChord
Date: Wed, 8 Feb 2012 16:00:41 +0100

2012/2/8 David Kastrup <address@hidden>:
> LilyPond's typesetting does not act on music expressions and music
> events.  It acts exclusively on stream events.  It is the act of
> iterators to convert a music expression into a sequence of stream events
> played in time order.
> The EventChord iterator is pretty simple: it just takes its "elements"
> field when its time comes up, turns every member into a StreamEvent and
> plays that through the typesetting process.  The parser currently
> appends all postevents belonging to a chord at the end of "elements",
> and thus they get played at the same point of time as the elements of
> the chord.  Due to this design, you can add per-chord articulations or
> postevents or even assemble chords with a common stem by using parallel
> music providing additional notes/events: the typesetter does not see a
> chord structure or postevents belonging to a chord, it just sees a
> number of events occurring at the same point of time in a Voice context.
> So all one needs to do is let the EventChord iterator play articulations
> after elements, and then adding to articulations in EventChord is
> equivalent to adding them to elements (except in cases where the order
> of events matters).

Ah, so when a music function encounters an EventChord, it won't have
to dig into its elements?  It will simply apply articulations to
whatever it received, be it a NoteEvent or an EventChord, without
worrying about the details?  Seems perfect!
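[A minimal sketch of such a music function, for illustration only: the function name is hypothetical, and the exact `define-music-function` signature and the type of `'articulation-type` (string vs. symbol) depend on the LilyPond version in use:]

```
%% Hypothetical example: append a staccato ArticulationEvent to the
%% 'articulations property of whatever music we are given -- NoteEvent
%% or EventChord alike -- without descending into chord 'elements.
addStaccato =
#(define-music-function (music) (ly:music?)
   (set! (ly:music-property music 'articulations)
         (append (ly:music-property music 'articulations)
                 (list (make-music 'ArticulationEvent
                                   'articulation-type 'staccato))))
   music)

{ \addStaccato c'4 \addStaccato <c' e' g'>4 }
```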

> Feel free to add this information into CG when you find a nice place...

Pushed as 1ba879b62380c43246fa74e7e2b80b6fa0fde754
Many thanks!
