guile-devel

Re: Concurrent MVars for Guile


From: Mateusz Kowalczyk
Subject: Re: Concurrent MVars for Guile
Date: Sat, 18 Jan 2014 01:05:10 +0000
User-agent: Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20100101 Thunderbird/24.1.0

On 17/01/14 19:31, Mark H Weaver wrote:
> Hi,
> 
> Mateusz Kowalczyk <address@hidden> writes:
> 
>> First of all, let's quickly outline something that Guilers might not be
>> familiar with: MVars rely on an important property of GHC's thread
>> scheduling mechanism (GHC is the main Haskell compiler), namely its
>> fairness policy. GHC uses a round-robin scheduler, which means that all
>> threads will eventually get to run (although not necessarily with equal
>> CPU slices). A book by Simon Marlow, who has for many years been the
>> main brain behind GHC's RTS, says this:
>>
>> “No thread can be blocked indefinitely on an MVar unless another thread
>> holds that MVar indefinitely”.[1]
> 
> "Concurrent Haskell" by Simon Peyton Jones, Andrew Gordon, and Sigbjorn
> Finne, which introduced MVars, states in section 6.3 (Fairness):
> 
>    Assuming that no process holds the MVar indefinitely, it should not
>    be possible for any of the competing processes to be denied access
>    indefinitely.  One way to avoid such indefinite denial would be to
>    specify a FIFO order for processes blocked on an MVar, but that is
>    perhaps too strong.  [...]
> 
> Since our discussion on IRC, I've looked more closely at the
> implementation of Guile's fat mutexes and condition variables, upon
> which my MVars implementation is based.  I will not claim to have fully
> audited the code for correctness, but both of them use FIFO wait queues.
> 
> Here's what experiment shows:
> 
>   scheme@(guile-user)> (use-modules (ice-9 mvars))
>   scheme@(guile-user)> (define mv (new-empty-mvar))
>   scheme@(guile-user)> (define producers
>                          (map (lambda (i)
>                                 (usleep 200000)
>                                 (call-with-new-thread
>                                   (lambda ()
>                                     (let loop ()
>                                       (put-mvar mv i)
>                                       (loop)))))
>                               (iota 10)))
>   scheme@(guile-user)> (map (lambda (_) (take-mvar mv)) (iota 100))
>   $1 = (0 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
>         0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
>         0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
>         0 1 2 3 4 5 6 7 8)
> 
> I confess that I do not yet understand why the first thread was able to
> put the first two values into the MVar, and I intend to investigate
> further.  Nonetheless, it's clear that the producers are being treated
> fairly here.
> 
> What about competing consumers?
> 
>   scheme@(guile-user)> (use-modules (ice-9 mvars) (ice-9 threads))
>   scheme@(guile-user)> (define mv (new-empty-mvar))
>   scheme@(guile-user)> (define m (make-mutex))
>   scheme@(guile-user)> (define consumers
>                          (map (lambda (i)
>                                 (usleep 200000)
>                                 (call-with-new-thread
>                                   (lambda ()
>                                     (let loop ()
>                                       (let ((x (take-mvar mv)))
>                                         (with-mutex m
>                                           (format #t "Thread ~a got ~s\n" i x))
>                                         (loop))))))
>                               (iota 10)))
>   scheme@(guile-user)> (for-each (lambda (x) (put-mvar mv x)) (iota 30))
>   Thread 0 got 0
>   Thread 1 got 1
>   Thread 2 got 2
>   Thread 3 got 3
>   Thread 4 got 4
>   Thread 5 got 5
>   Thread 6 got 6
>   Thread 7 got 7
>   Thread 8 got 8
>   Thread 9 got 9
>   Thread 0 got 10
>   Thread 1 got 11
>   Thread 2 got 12
>   Thread 3 got 13
>   Thread 4 got 14
>   Thread 5 got 15
>   Thread 6 got 16
>   Thread 7 got 17
>   Thread 8 got 18
>   Thread 9 got 19
>   Thread 0 got 20
>   Thread 1 got 21
>   Thread 2 got 22
>   Thread 3 got 23
>   Thread 4 got 24
>   Thread 5 got 25
>   Thread 6 got 26
>   Thread 7 got 27
>   Thread 8 got 28
>   Thread 9 got 29
>   scheme@(guile-user)> 
> 
>> Further down we find out the consequences of this guarantee: “A
>> consequence of the fairness implementation is that, when multiple
>> threads are blocked in takeMVar and another thread does a putMVar, only
>> one of the blocked threads becomes unblocked”.
> 
> And indeed, this is the case in this implementation of MVars.  Similarly
> when multiple threads are blocked in put-mvar and another thread does a
> take-mvar.
> 
> My MVars implementation is based on Guile condition variables.  Each
> MVar has two condition variables: one that gets signaled when the MVar
> becomes full, and another that gets signaled when it becomes empty.
> Threads waiting for an MVar wait on the appropriate condition variable
> in the MVar.  Each Guile condition variable contains a FIFO queue of
> threads waiting on it.  When a condition variable is signaled, one
> thread is popped from this queue and woken up.
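For anyone following along, the design described above can be sketched roughly like this. This is an illustrative reconstruction, not the actual (ice-9 mvars) code; the record layout and the names (`%make-mvar`, `mvar-take!`, `mvar-put!`) are made up for the sketch:

```scheme
;; An MVar sketch: one mutex guarding the state, plus two condition
;; variables -- one signaled when the MVar becomes full, one when it
;; becomes empty.  Illustrative only, not the (ice-9 mvars) code.
(use-modules (ice-9 threads)
             (srfi srfi-9))

(define-record-type <mvar>
  (%make-mvar mutex full-cv empty-cv value full?)
  mvar?
  (mutex mvar-mutex)
  (full-cv mvar-full-cv)    ; signaled when the MVar becomes full
  (empty-cv mvar-empty-cv)  ; signaled when the MVar becomes empty
  (value mvar-value set-mvar-value!)
  (full? mvar-full? set-mvar-full?!))

(define (make-empty-mvar)
  (%make-mvar (make-mutex) (make-condition-variable)
              (make-condition-variable) #f #f))

(define (mvar-take! mv)
  (with-mutex (mvar-mutex mv)
    (let loop ()
      (if (mvar-full? mv)
          (let ((x (mvar-value mv)))
            (set-mvar-full?! mv #f)
            ;; wake exactly one blocked putter
            (signal-condition-variable (mvar-empty-cv mv))
            x)
          (begin
            ;; releases the mutex while blocked, reacquires on wakeup
            (wait-condition-variable (mvar-full-cv mv) (mvar-mutex mv))
            (loop))))))

(define (mvar-put! mv x)
  (with-mutex (mvar-mutex mv)
    (let loop ()
      (if (mvar-full? mv)
          (begin
            (wait-condition-variable (mvar-empty-cv mv) (mvar-mutex mv))
            (loop))
          (begin
            (set-mvar-value! mv x)
            (set-mvar-full?! mv #t)
            ;; wake exactly one blocked taker
            (signal-condition-variable (mvar-full-cv mv)))))))
```

Note the single `signal-condition-variable` on each state change: that is what gives the single-wakeup behaviour quoted from the GHC documentation.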
> 
>> What's Guile's scheduling policy? Can we get a single wakeup and this
>> kind of fairness guarantee?
>>
>> Unfortunately, Mark was not able to tell me what the situation is. If
>> this is to make it into the core language as Mark suggests, I think
>> it's important to first confirm that the foundation on which MVars are
>> built is actually present in Guile. I'm posting to this list in an
>> attempt to find someone who knows how Guile's scheduler works and can
>> answer my questions.
> 
> Guile threads are pthreads, so Guile does not implement its own
> scheduler.  However, the mutexes and condition variables exposed to
> Scheme (and used in this MVars implementation) implement their own FIFO
> wait queues.
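For what it's worth, the FIFO claim can be probed directly from Scheme, in the same spirit as the experiments above. This is only a sketch: the sleeps make the blocking order likely but not guaranteed, so treat the result as suggestive rather than conclusive.

```scheme
;; Probe the FIFO claim for condition variables: five threads block on
;; one condition variable in a known order, and we record the order in
;; which they wake.
(use-modules (ice-9 threads))

(define m (make-mutex))
(define cv (make-condition-variable))
(define wake-order '())

(define threads
  (map (lambda (i)
         (let ((t (call-with-new-thread
                   (lambda ()
                     (with-mutex m
                       (wait-condition-variable cv m)
                       (set! wake-order (cons i wake-order)))))))
           ;; give thread i time to block before spawning thread i+1
           (usleep 200000)
           t))
       (iota 5)))

;; wake the threads one at a time
(for-each (lambda (_)
            (usleep 100000)
            (with-mutex m
              (signal-condition-variable cv)))
          (iota 5))

(for-each join-thread threads)

;; if wakeups are FIFO, this prints (0 1 2 3 4)
(display (reverse wake-order))
(newline)
```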
> 
>> Of course, tests can be written and one can eyeball whether the
>> behaviour seems similar, but I think a definitive ‘yes, we do this’ or
>> ‘no, we don't’ is still necessary.
> 
> Well, again, I haven't done a full audit of the threads code in Guile,
> so I can't provide the definitive statement that you think is necessary.
> 
> However, looking over the code, I think it's reasonably clear that there
> is a FIFO policy for waiting threads, and I think the experiments above
> provide reasonable confirmation of that (modulo the strange case of two
> zeroes in a row at the beginning).
> 
> I will leave it to others to decide whether this is sufficient.
> 
>      Mark
> 

Hi Mark,

Thanks for the investigation. While I have not yet had a chance to play
with this myself, it does seem like your MVar implementation has a sound
basis.

While it would be great for someone familiar with the inner workings to
step in and confirm your findings, your implementation should work, at
least from the scheduling perspective. I cannot guess at the actual
performance, as I'm not familiar with Guile's performance profile, but I
suppose that's a separate issue.

Good work! Hopefully I'll be able to find time to play with this a bit
myself.

-- 
Mateusz K.


