l4-hurd

From: Niels Möller
Subject: Re: auth handshake and rendezvous objects
Date: 05 Nov 2002 18:39:45 +0100
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Marcus Brinkmann <address@hidden> writes:

> Which means that A has to be ready for receiving.

At least, that thread doesn't have very much work to do. Basically,
something like

  for (;;)
    {
      int response;
      receive(...);
      mutex_lock(transfer_lock);               /* 1 */
      response = check_if_transfer_is_ok(...); /* 2 */
      mutex_unlock(transfer_lock);
      send(..., response);                     /* 3 */
    }

If (1) takes a long time, the task deserves to lose. (2) should be
nothing more than a (hash-)table lookup, and it should be possible to
arrange the arguments sent back and forth such that the lookup is
*only* a bounds check, an array indexing, and a comparison of a few
integers. At most a dozen machine instructions. And (3) can perhaps be
combined with the receive into a single L4 RPC operation.
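Concretely, the lookup could be sketched roughly like this (a
hypothetical fixed-size transfer table; all the names below are mine,
for illustration, not from any real Hurd source):

```c
#include <assert.h>
#include <string.h>

#define MAX_TRANSFERS 64

/* One outstanding handle transfer, recorded by A when S announces it.  */
struct transfer
{
  int in_use;
  int sender;    /* task id of S */
  int receiver;  /* task id of B */
  int handle;    /* the handle x being passed */
};

struct transfer_table
{
  struct transfer slots[MAX_TRANSFERS];
};

/* The whole check: a bounds check, an array indexing, and a comparison
   of a few integers.  Nothing here can block or page fault (assuming
   the table itself is wired in memory).  */
static int
check_transfer_ok (const struct transfer_table *t,
                   int slot, int sender, int receiver, int handle)
{
  const struct transfer *e;

  if (slot < 0 || slot >= MAX_TRANSFERS)  /* bounds check */
    return 0;
  e = &t->slots[slot];                    /* array indexing */
  return e->in_use
    && e->sender == sender                /* integer comparisons */
    && e->receiver == receiver
    && e->handle == handle;
}
```

The point is that if S passes the slot index along with the identifying
integers, the server never has to search anything.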

> Which could be setup, of course, but that would mean devoting a
> special thread for this _per handle passing_. Only one global thread
> would not work if you have high concurrency.

Do we really expect a high volume of rpc calls that transfer handles?
It's not obvious to me that one global thread won't do (either because
the probability that it's busy is low, or because rpc calls involving
handle transfers are serialized). And if one isn't enough, I believe a
thread farm with a small number of threads should be adequate. 
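For concreteness, such a farm is just a small fixed pool of threads all
running the same receive loop. A minimal sketch with pthreads (the
worker body here is a stand-in counter, not real IPC):

```c
#include <pthread.h>
#include <stdatomic.h>

#define FARM_SIZE 4  /* a small, fixed number of service threads */

static atomic_int handled;

/* Stand-in for the real loop: each worker would block in receive (),
   do the table lookup, and send the response.  Here it just counts
   how many requests it "served".  */
static void *
worker (void *arg)
{
  int requests = *(int *) arg;
  int i;

  for (i = 0; i < requests; i++)
    atomic_fetch_add (&handled, 1);  /* receive + check + send */
  return NULL;
}

static int
run_farm (int requests_per_thread)
{
  pthread_t threads[FARM_SIZE];
  int i;

  atomic_store (&handled, 0);
  for (i = 0; i < FARM_SIZE; i++)
    pthread_create (&threads[i], NULL, worker, &requests_per_thread);
  for (i = 0; i < FARM_SIZE; i++)
    pthread_join (threads[i], NULL);
  return atomic_load (&handled);
}
```

Since all workers do the same near-trivial work, the pool size is a pure
throughput knob, not a correctness issue.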

Given that the code above should require at most a few dozen machine
instructions between two receive calls, it might work well enough to
do it the unix(tm) way: if the call S->A times out, let S return EINTR
to B, and have B wait a moment and try again.
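The retry loop on B's side could look something like this (a
hypothetical client-side stub; the names and the backoff policy are my
invention):

```c
#include <errno.h>
#include <time.h>

/* If S's call to A timed out, S returns EINTR to B; B waits a moment
   and tries again, giving up after max_tries attempts.  */
static int
transfer_with_retry (int (*do_transfer) (void), int max_tries)
{
  int i;

  for (i = 0; i < max_tries; i++)
    {
      int err = do_transfer ();
      if (err != EINTR)
        return err;                /* success (0) or a real error */

      /* Wait a moment before retrying, backing off a little.  */
      struct timespec ts = { 0, 1000000L << i };  /* 1 ms, 2 ms, ...  */
      nanosleep (&ts, NULL);
    }
  return EINTR;                    /* still busy after all retries */
}

/* A stand-in transfer that succeeds on the third attempt.  */
static int attempts;
static int
flaky_transfer (void)
{
  return ++attempts < 3 ? EINTR : 0;
}
```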

A more serious problem for A is if the thread or the data it needs is
paged out. The code can probably be put into a shared page that's
never paged out (perhaps in the same library as the paging code
itself). As for the data, A needs to ensure that it is paged in before
starting the protocol. Something like

  send_handle(B, S, x)
  {
    mutex_lock(state.lock);
    if (!state.inprogress)
      /* First outstanding transfer: pin our data in memory. */
      pagein_and_lock(state);
    record_transfer(state, B, S, x);

    state.inprogress++;
    mutex_unlock(state.lock);

    send(B, S, x);

    /* I omit the code for cleaning up if the protocol fails. */
  }

Background thread:

  for (;;)
    {
      int response;

      receive(...);

      /* If the request is valid, we can't page fault now. */
      mutex_lock(state.lock);
      response = state.inprogress && check_if_transfer_is_ok(state, ...);

      if (response)
        {
          state.inprogress--;
          if (!state.inprogress)
            /* No more outstanding transfers, so it's safe to swap out
             * our data. */
            make_data_pageable(state);
        }

      /* Should really be done atomically with the above call, so that
       * the mutex itself can't be paged out before we unlock it. */
      mutex_unlock(state.lock);
      
      send(..., response); 
    }

where the data pages can be made pageable again as soon as there are
no outstanding handle transfers.

One potential problem: A malicious task might flood A with bogus
messages. I'm not sure if and how we can deal with that.

/Niels



