
Re: [fluid-dev] improving musical timekeeping


From: address@hidden
Subject: Re: [fluid-dev] improving musical timekeeping
Date: Sat, 8 Feb 2020 21:59:18 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.4.2

Thank you for your time. I've written my comments inline.

On 2020-02-08 15:41, Tom M. wrote:
> Here are some thoughts from a software engineer. Not sure what a
> musician would say.
>
> First of all, you are right. A "meaningfully long" attack phase will
> "delay" the note-on and thus shorten the note. My question: what would
> be the use case of such a "meaningfully long" attack? The only use case
> I can think of is playing a crescendo. And for this, one should use the
> Expression CC instead.
I am referring to attacks on the order of 10 ms to a few hundred ms - be it the short attack of a piano note or a drum hit, or the longer attacks of woodwinds, strings, choirs, etc.
> Why don't samplers/synthesizers compensate for that? I think because
> it doesn't work for real-time performance. One would need to delay all
> notes by the length of the attack phase. But what if the musician
> suddenly switches to another instrument which has an even longer
> attack?
My suggested use case was scored (non-interactive, known-in-advance) music - MIDI playback, if you will. Obviously this could not work in an interactive scenario (someone playing a keyboard in real time). But in the age of DAWs and music composition software (I'm coming from MuseScore), playback of scored music is a common use case, which makes me think it might be worth adding this functionality to the underlying synthesizer.
> Even worse: what if the musician uses a custom CC that extends
> the attack phase even more? The only way to correct this would be to
> figure out the longest attack phase that can ever be produced and
> delay all notes and all sound accordingly. But most musicians use
> monitors. They want to hear what they play, when they play it. I think
> they would be very confused if what they're playing is delayed by a
> few seconds.
Let's keep the two use cases separate. For interactive playback this could not work well, I agree. But for rendering scored music (which I feel is quite a common use case), it would.
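
To make the offline idea concrete, here is a rough sketch of the compensation you describe - my own illustration, not existing FluidSynth code; note_t and its attack_s field are made up for the example. It pulls each note-on earlier by its preset's attack time and adds a global pre-delay equal to the longest attack, so the perceived onsets stay on the musical grid and no send time goes negative:

#include <stddef.h>

typedef struct {
    double score_time_s;  /* musical onset as written in the score */
    double attack_s;      /* volume-envelope attack of the note's preset */
    double send_time_s;   /* computed: when to actually send note-on */
} note_t;

/* Shift each note-on earlier by its attack time; a global pre-delay
   equal to the longest attack keeps every send time non-negative. */
static void compensate_attacks(note_t *notes, size_t n)
{
    double max_attack = 0.0;
    for (size_t i = 0; i < n; i++)
        if (notes[i].attack_s > max_attack)
            max_attack = notes[i].attack_s;

    for (size_t i = 0; i < n; i++)
        notes[i].send_time_s = notes[i].score_time_s
                               + max_attack           /* global pre-delay */
                               - notes[i].attack_s;   /* per-note advance */
}

The whole rendering then starts max_attack seconds early - harmless offline, but exactly the monitoring latency that rules this out for live playing.
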
> It is a well-known problem that MIDI sheets only work well with the
> soundbank they have been designed for.
I acknowledge this is the status quo. But are we also saying we are not interested in pushing the boundaries of the technology to improve the situation? Are we going to be frozen into SoundFont 2.04 forever?
> This is due to the various degrees of freedom a soundbank provides
> (volume, ADSR envelope, cut-off filters, custom CC automation). So, if
> you really have instruments with "meaningfully long" attacks, I'm
> afraid you're required to adjust your MIDI sheet(s) manually.
So if I am scoring orchestral or choral music and would like to play it back with reasonable musical timekeeping, you are sentencing me to eternal toil manually adjusting note times, just to arrive at a baseline that seems to me quite possible to achieve automatically, given infrastructure developed to support it. I am not familiar with the customization possibilities you mention - I haven't read the SoundFont specification yet - but my use case is acoustic/orchestral instrument sounds, where the composer intends to add little if any artificial manipulation. Everything should sound as natural as possible instead.
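
For what it's worth, the SoundFont spec stores envelope times in timecents, so the attack length such a compensation pass would need is easy to derive once the generator value is in hand (GEN_VOLENVATTACK is the relevant generator enum in FluidSynth's gen.h, though reading the effective per-voice value would take more plumbing than this sketch shows):

#include <math.h>

/* SoundFont 2.04 envelope times are in timecents:
   seconds = 2^(timecents / 1200), so 0 tc = 1 s and -12000 tc ~ 1 ms. */
static double timecents_to_seconds(double timecents)
{
    return pow(2.0, timecents / 1200.0);
}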

- HuBandiT



