Re: midi articulation

From: Carl Sorensen
Subject: Re: midi articulation
Date: Fri, 25 Mar 2016 02:40:34 +0000
User-agent: Microsoft-MacOutlook/

On 3/24/16 8:14 PM, "Daniel Birns" <address@hidden> wrote:

>Thanks for your kind note.
>Of course, I completely understand the protectiveness. That's how it must
>be. Before I do anything, I'd like to discuss my thoughts on the matter.
>Inside vs. Outside
>OUTSIDE: The Unix idea is to have many small, testable apps that are
>designed to be able to work together. In this view, one should have an
>Engraver (aka lilypond) as a single app that doesn't try to do anything
>else. In addition, the lilypond syntax becomes the key interface, and
>lilypond can be the Engraver as well as the Validator. Midi should be a
>separate app that reads the same source and generates midi.
>Of course, there are other ways to layer this, in the unix model. Perhaps
>there should be a parser which generates an internal format, and then
>lilypond could read that, and the midi generator could also read the
>internal format. 
>Perhaps lilypond already has a suitable internal-format, post-parsed
>format that would be more suitable for a midi-generator.

I believe that LilyPond already has the built-in hooks to create midi
output.  There are performers (similar to engravers) whose job it is to
create the midi.  But I think that we have not spent a lot of time making
sure that the midi created is high quality, in terms of respecting
articulations and other expressive indications.  Hence the shortcomings
you describe below.

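For reference, if I remember right, the articulate.ly script that ships
with LilyPond already post-processes a music expression so that
articulations and dynamics influence the midi output.  A minimal sketch
(untested; the \version number is just whatever stable release you have):

```lilypond
\version "2.18.2"
\include "articulate.ly"

music = \relative c'' {
  c4-. d4-> e2\fermata   % staccato, accent, fermata
}

% Engraved output: the music exactly as written.
\score {
  \music
  \layout { }
}

% Midi output: \articulate shortens staccato notes, stresses accents,
% and lengthens fermatas before the performers see the music.
\score {
  \unfoldRepeats \articulate \music
  \midi { }
}
```

That seems like the natural place to improve articulation handling,
rather than a separate midi application.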
>INSIDE: Generating midi ought to be a great deal easier in lilypond
>because it has already parsed everything and ought to know, at any point
>in time, all the information necessary to generate better midi output.
>Currently, it's missing many things, like note-durations, and so on. I
>wonder why that is? I'm suspecting there's a fundamental reason, and that
>it's because the information is split up among all the various engravers?
>Midi is a big subject, and keeping midi generation inside lilypond may
>generate fears, probably well-founded, of increasing size and complexity.

I don't think there is any particular concern about generating midi.  I
think any concern that exists would likely be about having poor-quality,
unmaintainable code for the midi generators.  Or using hacks rather than
solid architecture.

>Midi Sounds
>I'm no midi expert, but I've used it over the years. My impression is
>that tools like Apple Logic, Sibelius, and so on, provide their own
>sounds, which are unrelated, in particular, to midi, which only gives a
>slight clue to the desired sound. To give a reasonable midi result, I
>believe we must go that route: provide a sound library, and a player. A
>user would then be able to write a lilypond score, and get a reasonable
>audio playback of that score. We could generate midi as we have always
>done, but the midi would have much better articulation and dynamics than
>it currently does.

I really don't think this is the right approach.  LilyPond should not be
developing sound libraries or players.  There already exist high-quality
sound libraries, and full-featured midi players.  Why should LilyPond
reinvent the wheel?  What are the weaknesses and/or limitations of, for
example, Qsynth or Timidity++?

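For instance, something like the following workflow already works today
(the soundfont path is just a placeholder for whatever is installed on
your system, not a recommendation):

```
lilypond song.ly        # writes song.midi, if the score has a \midi block
fluidsynth -ni /usr/share/sounds/sf2/FluidR3_GM.sf2 song.midi -F song.wav
timidity song.midi      # or play it directly
```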
I'm quite on board with recommending a particular midi player and/or sound
font.  But I don't see how creating a new synthesizer is necessary (or
desirable) to improve articulation and dynamics.  While I'm not an expert,
I would expect that the appropriate MIDI commands can be embedded in a
MIDI file created by LilyPond, and we just need to make sure a known good
sound library and player are available.

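To make that concrete: articulation and dynamics can be expressed in a
plain Standard MIDI File with nothing more than note timing and velocity.
Here is a minimal sketch (not LilyPond code, just an illustration of the
file format) that writes two quarter notes, the second one "staccato"
with a stronger velocity:

```python
import struct

def vlq(n):
    """Encode a delta time as a MIDI variable-length quantity."""
    out = bytearray([n & 0x7F])
    n >>= 7
    while n:
        out.insert(0, 0x80 | (n & 0x7F))
        n >>= 7
    return bytes(out)

def note_events(pitch, velocity, ticks_before, ticks_sounding):
    """Note-on after a rest of ticks_before, note-off after ticks_sounding."""
    return (vlq(ticks_before) + bytes([0x90, pitch, velocity]) +
            vlq(ticks_sounding) + bytes([0x80, pitch, 0]))

TPQ = 480  # ticks per quarter note

events = bytearray()
events += note_events(60, 80, 0, TPQ)        # plain quarter, velocity 80
events += note_events(60, 100, 0, TPQ // 2)  # "staccato": half length, louder
events += vlq(TPQ // 2) + bytes([0xFF, 0x2F, 0x00])  # rest, end of track

header = b'MThd' + struct.pack('>IHHH', 6, 0, 1, TPQ)  # format 0, 1 track
track = b'MTrk' + struct.pack('>I', len(events)) + bytes(events)

with open('articulated.mid', 'wb') as f:
    f.write(header + track)
```

So LilyPond only needs to choose better note lengths and velocities when
it writes the file; any existing player can then render them.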
Why do you think we need a new player?


