future of libports

From: Marcus Brinkmann
Subject: future of libports
Date: Sun, 12 Oct 2003 03:32:45 +0200
User-agent: Mutt/1.5.4i


I have some larger changes in mind for the libports replacement on L4
(libhurd-cap).  I want to share the fundamental consequences, and if
you can think of a situation where it would be too restrictive, please let
me know.  I checked the current code base, and it seems to be ok so far.
For the following, you can think of a capability as something equivalent to
a libports port object.

* While right now ports are inlined into objects like protids, I will
  inline objects into capabilities.  This way, objects become part of the
  capability system, and several optimizations become possible.  For example,
  caching slab allocators are automatically used for such objects.  One
  consequence is that all objects that are related to each other (are in the
  same class) must have the same maximum size (you can use hook pointers,
  of course, to refer to data of varying size).
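
  A minimal C sketch of the inlining idea.  All names here (hurd_cap,
  cap_class, cap_alloc) are mine for illustration, not the real libhurd-cap
  interface: the object payload lives directly behind the capability header,
  so each class needs only one fixed allocation size, which is what lets a
  caching slab allocator hand out the slots.

```c
#include <assert.h>
#include <stdlib.h>

struct cap_class;

struct hurd_cap                 /* capability header, placed before the object */
{
  unsigned int refs;            /* reference count */
  struct cap_class *cls;        /* back pointer to the class */
};

struct cap_class
{
  size_t obj_size;              /* the same maximum size for all objects */
};

/* Allocate a capability with its object inlined right after the header:
   one allocation per object, and every object of the class has the same
   total size, as required above.  */
static struct hurd_cap *
cap_alloc (struct cap_class *cls)
{
  struct hurd_cap *cap = calloc (1, sizeof *cap + cls->obj_size);
  if (cap)
    {
      cap->refs = 1;
      cap->cls = cls;
    }
  return cap;
}

/* The inlined object starts directly behind the capability header.  */
static void *
cap_object (struct hurd_cap *cap)
{
  return (char *) cap + sizeof *cap;
}
```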

* Of course, as Mach ports won't exist anymore, there is no way to
  extract, insert, remove, or transfer port rights from or to capabilities.
  The capability itself is the only way to express something that resembles
  the current "receive right".  I checked the code carefully, and all cases
  where we currently use something like ports_transfer_right can be rewritten.
  Some functions will survive in different clothes.  For example,
  ports_reallocate_port really means "revoke access to this object from all
  users", and that function can be implemented for capabilities as well.
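
  The revoke-from-all-users semantics could look roughly like this for
  capabilities.  cap_user and cap_revoke_all are made-up names for
  illustration, not a proposed interface: the server walks the object's
  user list and invalidates each entry.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct cap_user                 /* one client's handle on the object */
{
  struct cap_user *next;
  bool valid;                   /* false once access is revoked */
};

/* Revoke access to the object from all users, as ports_reallocate_port
   effectively does today; returns how many users were cut off.  */
static int
cap_revoke_all (struct cap_user *users)
{
  int n = 0;
  for (struct cap_user *u = users; u != NULL; u = u->next)
    if (u->valid)
      {
        u->valid = false;
        n++;
      }
  return n;
}
```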

* Buckets and classes will be merged into a single "class" object.
  There are many reasons and consequences, but the most convincing one is
  probably that we don't really treat them very independently in the Hurd
  code.  Often there is a 1-1 correspondence, in other cases several classes
  are lumped into a single bucket.  I will strengthen the idea that a
  "class" represents the type of an object, which includes information about
  the storage size, constructors/destructors, and the demuxer(!).
  The reason why a bucket is not useful anymore has to do with port sets and
  the lack thereof in L4, but also with the fact that in L4, the server
  thread ID is advertised to the client and needs to remain constant over the
  lifetime of the object.
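
  To illustrate the merged "class" object, here is a hedged sketch; all
  names are invented and this is not the planned libhurd-cap layout.  The
  class fully describes an object type: storage size, constructor and
  destructor, the demuxer, and the one server thread whose ID is advertised
  to clients.

```c
#include <assert.h>
#include <stddef.h>

struct cap_class
{
  size_t obj_size;                      /* same maximum size for all objects */
  void (*ctor) (void *obj);             /* run when an object is created */
  void (*dtor) (void *obj);             /* run when it is destroyed */
  int (*demuxer) (void *obj, int msg);  /* dispatches incoming RPCs */
  unsigned long server_thread;          /* advertised to clients; constant
                                           over the object's lifetime */
};

/* Dispatch a message for an object through its class's demuxer -- the
   role the per-bucket demuxer plays in libports today.  */
static int
class_dispatch (struct cap_class *cls, void *obj, int msg)
{
  return cls->demuxer (obj, msg);
}

/* Trivial demuxer, just for the usage example below.  */
static int
echo_demuxer (void *obj, int msg)
{
  (void) obj;
  return msg;
}
```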

* Inhibiting RPCs: There are three uses of RPC inhibition right now:
  + goaway RPC: I think we should be nice here and first check whether there
    are any users right now before inhibiting RPCs, only inhibit RPCs if
    there are _no_ users (to make sure no new ones pop up), and then check
    again to be sure.  This way, the operation will never be expensive
    (either because there are no users, or because there are ;)
  + user RPCs like io_revoke, which need to iterate over all objects.
    Actually, io_revoke is the only case, and my current plan is to drop it
    entirely.  io_revoke is a nice DoS attack to slow down the filesystem,
    and it is not used anywhere.  What's the point of it?  If it is
    needed later, a smarter implementation could use reverse lookup hashes to
    find all protids for a given node, for example.  That would keep lock
    contention local to the object.  (Such a solution would have a higher
    memory footprint and be pretty complex to implement, though.)
  + privileged RPCs like remount.  Here the question is: does the operation
    need to be interruptible?  It doesn't seem to be interruptible now: the
    inhibit_rpcs operation itself is interruptible, but it is not really a
    blocking call, although it uses a condition for synchronization.  The bulk
    of the time is surely spent iterating over all ports, and there doesn't
    seem to be a cancellation point in the function being iterated over.
    It seems to me that it would be forgivable if such a function were not
    cancellable, in particular as, unlike io_select, it has a bounded run
    time (eventually it will complete; it cannot block forever).

  The conclusion here is that by either eliminating inhibition of RPCs at
  the class level, or by arguing away the need to be able to cancel such an
  operation and interrupt it in the middle, we can simplify the server
  code a lot.  The server thread (one per class, as there are no buckets)
  could just stop receiving messages from clients.  There would be no
  uninhibited_rpc list anymore.
  I should add that there is no hard reason not to keep it the way it is,
  and continue to accept messages, which are then blocked in the worker
  thread.  But keeping blocked worker threads around that do nothing except
  hold a pending message and wait for the time they can process it is
  quite expensive, I think.  An alternative would be to queue incoming
  messages, at the cost of copying them to a buffer.  But I guess once you
  start talking about inhibiting RPCs, such a cost becomes acceptable.
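
  The goaway strategy from the first sub-point (check for users, inhibit,
  re-check) might be sketched like this.  cap_class_state and
  class_try_goaway are invented names, and a real implementation would of
  course hold a lock around these checks; the point is only the
  check/inhibit/re-check ordering that keeps the operation cheap.

```c
#include <assert.h>
#include <stdbool.h>

struct cap_class_state
{
  int nusers;        /* currently active users of the class */
  bool inhibited;    /* true while no new RPCs are accepted */
};

/* Returns true if the class may go away, false if it is busy.  Either
   way the call is cheap: we only inhibit RPCs when there appear to be
   no users, and back off if one slipped in between the two checks.  */
static bool
class_try_goaway (struct cap_class_state *s)
{
  if (s->nusers > 0)
    return false;              /* busy: fail without inhibiting anything */
  s->inhibited = true;         /* stop accepting new RPCs */
  if (s->nusers > 0)           /* re-check: a user popped up in between */
    {
      s->inhibited = false;    /* back off and report busy */
      return false;
    }
  return true;                 /* no users, RPCs inhibited: safe to go away */
}
```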

So, these are some of my thoughts.  I hope I was clear while still
withholding unnecessary and confusing details.  I think that the above
constraints can make for a simpler and more efficient implementation.  The
last point about inhibiting RPCs is the one where I would expect the most
reluctance, but looking at the type of operations that really require
inhibiting RPCs (shutdown, remount), I wonder why not.

Anyway, just thought I would give you a heads-up on what I am working
on, and where I am heading.


`Rhubarb is no Egyptian god.' GNU    address@hidden
Marcus Brinkmann              The Hurd
