
From: Zelphir Kaltstahl
Subject: Re: Re: Limiting parallelism using futures, parallel forms and, fibers (Chris Vine)
Date: Sat, 11 Jan 2020 15:07:42 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.2

On 1/10/20 6:00 PM, address@hidden wrote:
> Message: 1
> Date: Fri, 10 Jan 2020 16:08:25 +0000
> From: Chris Vine <address@hidden>
> To: Zelphir Kaltstahl <address@hidden>
> Cc: Guile User <address@hidden>
> Subject: Re: Limiting parallelism using futures, parallel forms and
>       fibers
> Message-ID: <address@hidden>
> Content-Type: text/plain; charset=US-ASCII
> On Fri, 10 Jan 2020 01:43:18 +0100
> Zelphir Kaltstahl <address@hidden> wrote:
>> Hi Chris!
>> On 1/8/20 12:44 PM, Chris Vine wrote:
>>> On Wed, 8 Jan 2020 08:56:11 +0100
>>> Zelphir Kaltstahl <address@hidden> wrote:
>>> [snip]
>>>> So my questions are:
>>>> - Is there a default / recommended way to limit parallelism for
>>>> recursive calls to parallel forms?
>>>> - Is there a better way than a global counter with locking, to limit the
>>>> number of futures created during recursive calls? I would dislike very
>>>> much to have to do something like global state + mutex.
>>>> - What do you recommend in general to solve this?
>>> I think you have it wrong, and that futures use a global queue and a
>>> global set of worker threads.  I don't see how futures could work
>>> without at least a global set of worker threads.  Have a look at the
>>> futures source code.
>> So far I did not write any parallel code in my project. I am only
>> exploring possibilities, finding out what I should be using. I also
>> could not think of a good way to limit the number of threads to anything
>> less than the (current-processor-count) value, but thought maybe someone
>> knows something I did not see or read about Guile's futures yet : )
> I still don't understand why you think you want to do that, given the
> aims you set out in your earlier posts.  Why do you want to spawn less
> than (processor-count - 1) threads for your algorithm, which is what
> futures would use?
> But if you really want to do that, and you don't want to use a
> stand-alone thread pool, the parallelism keyword of fibers seems to do
> what you want.

Hi Chris!

Yes, the reason may not be obvious. When I started implementing the
algorithm, I looked at the Scikit-learn interface for decision trees and
implemented the keyword arguments which that interface provides. For many
models Scikit-learn also offers a keyword that lets you set the number of
threads. I think they call it "jobs", "tasks", "n_jobs" or "n_tasks" or
something like that. The reason why I think this is useful is the
following:

If you are training a model on a lot of data, this can take a while. You
might still want to be able to use your machine for other purposes while
the model is being trained / fitted to the data. However, if I go for
maximum parallelism in my algorithm, all cores could easily end up busy,
which would impact the ability to do other things while the algorithm is
running. I would like the user to be able to decide how many cores the
algorithm may use. This may be wishful thinking, but perhaps it can be
done, and I don't want to abandon the idea too early. Perhaps I will
abandon it later, if it becomes too difficult for me.

With the fibers keyword argument #:parallelism, the only thing I am not
sure about is this: is the number of threads guaranteed to stay at or
below the specified number if, inside your fibers, you create a new
scheduler using (run-fibers …) and spawn new fibers in there? That would
be the case with my recursive algorithm for the decision tree. With
parallel forms or plain futures it seems that more threads would be
used when recursively creating more expressions to evaluate in parallel.
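A minimal sketch of what I mean (assuming the Fibers library is
installed; the parallelism value 2 is just an example):

```scheme
(use-modules (fibers))

;; All fibers spawned inside this init thunk belong to the same
;; scheduler group, which runs on at most 2 POSIX threads.
(run-fibers
 (lambda ()
   (spawn-fiber (lambda () (display "worker A\n")))
   (spawn-fiber (lambda () (display "worker B\n"))))
 #:parallelism 2)
```

My worry is that a nested (run-fibers …) call inside one of these fibers
would start a fresh scheduler group with its own #:parallelism, so the
overall thread count would no longer be bounded by the outer value.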

Thanks for linking to your async thread pool implementation. It may well
turn out to be the solution to the problem.

Currently I am trying to build a simple thread pool using the
#:parallelism keyword argument of (run-fibers …). What I imagine is the
following:

There is one call to (run-fibers …) which specifies the #:parallelism
keyword, and inside the lambda given to run-fibers I need to spawn all
the fibers that perform the actual work, so that they are limited by the
parallelism of that scheduler. To do that, I imagine the scheduler can
have an input channel, on which it receives work, and an output channel,
on which it gives back the results it got from the fibers it spawned.
For each piece of work received, a new fiber is spawned to perform it.
The fibers library, as far as I know, handles work stealing between the
fibers of one scheduler, and the #:parallelism keyword would ensure that
only that many cores are used.

The problem then is how to make sure that the results output by the
(run-fibers …) lambda are received by, and associated with, whatever
initially sent the work to the "pool". For this I thought about using
some kind of IDs. However, IDs might not work, because if the wrong
thing calls (get-message …) on the output channel for results, the right
thing will never see the result: first come, first served. So then I
thought about sending the "pool" not only the work to be done, but also
a procedure acting as a callback, so that the result is always returned
to the correct place.
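Instead of a callback procedure, a variant I might try is to send a
fresh reply channel along with each piece of work, so the result can
only ever reach the sender. A rough sketch (names like run-pool are made
up by me, and this assumes the Fibers channels API):

```scheme
(use-modules (fibers) (fibers channels))

;; Pool loop: receive (thunk . reply-channel) pairs and spawn a
;; fiber for each piece of work; the fiber sends its result back
;; on the reply channel that came with the request.
(define (run-pool work-channel)
  (let loop ()
    (let* ((request (get-message work-channel))
           (thunk   (car request))
           (reply   (cdr request)))
      (spawn-fiber
       (lambda ()
         (put-message reply (thunk)))))
    (loop)))

(run-fibers
 (lambda ()
   (define work (make-channel))
   (spawn-fiber (lambda () (run-pool work)))
   ;; Client side: submit work together with a private reply channel,
   ;; then block until the result comes back on that channel.
   (let ((reply (make-channel)))
     (put-message work (cons (lambda () (* 6 7)) reply))
     (display (get-message reply))
     (newline)))
 #:parallelism 4)
```

Because each client gets its own reply channel, the "wrong thing calls
(get-message …) first" problem cannot occur.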

I only started writing the first lines of this attempt yesterday night,
so I don't know yet whether such a thing can work, or whether it is a
good idea.

I will also need to take a closer look at your thread pool, so that I
know how to use it. Perhaps using that will be easier than my idea above.


