
Re: [fluid-dev] Parallelize rendering using openMP

From: Marcus Weseloh
Subject: Re: [fluid-dev] Parallelize rendering using openMP
Date: Thu, 19 Apr 2018 23:14:13 +0200


2018-04-16 12:31 GMT+02:00 Ceresa Jean-Jacques <address@hidden>:

>At least on my machine and with the setup I use, the effects take up a large proportion of processing time.

If the hardware you use is dedicated to a standalone synthesizer, could you please run the profile commands when you get a chance and report the results (along with the CPU model)?

Here are the profiling results for my embedded system. It's an Allwinner A20 based board (dual-core Cortex-A7 ARM) with a 960 MHz CPU frequency and 1 GB of memory, running Linux 4.14.12 with real-time patches. The whole system has been optimised for low latency, not for high polyphony, so normally Fluidsynth runs with a buffer size of 64, a buffer count of 2, and on only one core. Fluidsynth has been compiled from the dynamic-sample-loading branch with -Denable-floats=1 and the gcc options -O3 and -ffast-math. With this setup, I get the following:
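As a side note on why those buffer settings matter for a low-latency setup: the output buffering delay is roughly buffer size times buffer count divided by the sample rate. A small sketch of that arithmetic, assuming a 44100 Hz sample rate (my assumption; the rate is not stated in this thread):

```python
# Rough audio buffering latency: size * count / sample_rate.
# Sample rate of 44100 Hz is an assumption, not stated in the post.

def buffer_latency_ms(buffer_size, buffer_count, sample_rate=44100):
    """Return the total output buffering delay in milliseconds."""
    return 1000.0 * buffer_size * buffer_count / sample_rate

# The low-latency setup described above: size 64, count 2.
print(round(buffer_latency_ms(64, 2), 1))    # -> 2.9 ms

# The high-polyphony setup further down: size 1024, count 2.
print(round(buffer_latency_ms(1024, 2), 1))  # -> 46.4 ms
```

This illustrates the trade-off in the measurements below: the larger buffer raises the estimated polyphony but at the cost of a much longer buffering delay.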
 nVoices| total(%)|voices(%)| reverb(%)|chorus(%)| voice(%)|estimated maxVoices
     100|   80.134|   73.262|     4.941|    1.931|    0.721|              129

If I were to use two cores with buffer size 64 and buffer count 2, it looks like this:

 nVoices| total(%)|voices(%)| reverb(%)|chorus(%)| voice(%)|estimated maxVoices
     100|   46.175|   39.162|     5.131|    1.882|    0.381|              244

And with buffer size 1024, buffer count 2, and two cores, I get this:

 nVoices| total(%)|voices(%)| reverb(%)|chorus(%)| voice(%)|estimated maxVoices
     100|   36.994|   31.137|     4.290|    1.567|    0.305|              308

So it shows that when it comes to performance optimisation, having good measurements is vital. Reverb and chorus take far fewer CPU cycles than I thought. Thanks for giving us this great profiling interface, JJC!
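For anyone reading along, the "estimated maxVoices" column appears to follow from the other columns: treat everything except the per-voice rendering (reverb, chorus, mixing) as fixed overhead, and divide the remaining CPU budget by the cost of a single voice. This is my reading of the numbers, not a description of the actual FluidSynth implementation:

```python
# Sketch of how the "estimated maxVoices" column seems to be derived
# from the profiling output (my interpretation, not FluidSynth source).

def estimated_max_voices(total_pct, voices_pct, voice_pct):
    overhead_pct = total_pct - voices_pct  # reverb, chorus, mixing, ...
    budget_pct = 100.0 - overhead_pct      # CPU left for voice rendering
    return int(budget_pct / voice_pct)     # whole voices that fit

# Figures from the three runs above:
print(estimated_max_voices(80.134, 73.262, 0.721))  # single core, buf 64  -> 129
print(estimated_max_voices(46.175, 39.162, 0.381))  # two cores, buf 64   -> 244
print(estimated_max_voices(36.994, 31.137, 0.305))  # two cores, buf 1024 -> 308
```

All three results match the reported estimates, which suggests the non-voice overhead is indeed treated as roughly constant with polyphony.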


