From: Aere Greenway
Subject: Re: [fluid-dev] Is it possible to immigrate FluidSynth to a DSP(ADSP21xxx) or an CORTEX-M4 CPU?
Date: Tue, 19 Mar 2013 10:15:59 -0600
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130308 Thunderbird/17.0.4

My response is perhaps a side-note regarding your question. 

In my experience, the better soundfonts are nearly 60 megabytes in size. The Fluid_R3GM soundfont (my favorite of the free soundfonts) is 142 megabytes.

If you must live within a 32-megabyte limit for soundfonts, the "TimGM6mb" soundfont (distributed with MuseScore) is pretty good, though its French Horn is out of tune on a fair number of notes. At least this 6-megabyte soundfont doesn't sound, overall, like a toy musical instrument.

- Aere

On 03/19/2013 08:17 AM, *simple* wrote:
I plan to port FluidSynth to the Cortex-M4. Could anyone please give me some hints?
 
The Cortex-M4 system I am going to use has 1 MByte of flash code memory, 192 KBytes of internal SRAM, an FPU, a 168 MHz clock, and 32 MBytes of external parallel NOR flash (for the soundfont). Is it possible to port FluidSynth to the Cortex-M4 within these limits (64-voice polyphony)?
 
I've tried using optimized Cortex-M4 assembly to replace FluidSynth's lowpass filter, linear interpolation, reverb, and chorus, and I have also reduced the memory needed down to 240 KBytes. At this point, setting aside the low speed of reading the external flash (soundfont), it looks like the 192 KByte / 168 MHz limits would be OK.
 
However, the low read speed of the external flash seems to be a big problem.

The flash IC has a 90 ns access time, which means you can read 16 bits (one sample point) every 90 ns. To render 64 voices at a 44.1 kHz output rate, each output sample requires reading at least 64 * 2 = 128 sample points (with linear interpolation). That takes 128 * 90 ns = 11.52 us, which is already about 50% of the ~22.68 us sample period at 44.1 kHz. So I am afraid I will have to use DMA to read the audio sample data, so that rendering and transferring sample data can happen at the same time. This would need some more memory (maybe 10 KBytes) for buffering, and increases the complexity. I wonder if DMA is really the only choice here for transferring audio sample data?
 
Does anybody here have similar question/experience? And what's your solution?


_______________________________________________
fluid-dev mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/fluid-dev


-- 
Sincerely,
Aere
