Re: Comparing "copy" and "map/unmap"

From: Jonathan S. Shapiro
Subject: Re: Comparing "copy" and "map/unmap"
Date: Mon, 10 Oct 2005 14:24:44 -0400

On Mon, 2005-10-10 at 16:09 +0200, Matthieu Lemerre wrote:

> OK.  This works because you ensure that every single byte of resource
> is allocated by the client.  Thus, upon client destruction, every
> object is automatically destroyed without the server knowing it.
> We originally planned that the metadata used to manage the resources
> would be allocated by the server (because, I think, we did not know how
> to achieve this on L4, where the smallest allocation unit is the page).

The smallest unit of allocation in Coyotos is also (for practical
purposes) the page. For example, our current FS implementation allocates
in units of 4K blocks for efficiency. A more sophisticated
implementation could certainly fix this.

But I think that the real difference lies in the fact that our system
tends to favor designs where storage used by a server is allocated from
a homogeneous source. This greatly simplifies matters.

> BTW, I'm interested in your space bank solution from the other mail.
> How do you ensure that the server does not write beyond the space bank,
> for instance?  Does each server have one page where its allocated
> space banks are allocated, but on behalf of the client by the kernel?

I think that I have not described the space bank clearly.

From the perspective of the client, a space bank is a server. In actual
implementation, it is an object implemented by the space bank server.
The operations on this object are things like:

        buy [123] page(s) => cap[]
        buy [123] node(s) => cap[]
        destroy page <cap> => void

There is nothing to overrun.
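To make the "nothing to overrun" point concrete, here is a minimal sketch of the space bank operations listed above. The names and structure are illustrative, not the actual Coyotos interface: the bank hands out whole pages against a limit, so a client can only ask and be refused; there is no buffer for it to write past.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of a space bank (hypothetical names, not the
   real Coyotos API). The bank tracks how many pages it may sell. */
typedef struct {
    size_t limit;      /* total pages this bank is allowed to hold */
    size_t allocated;  /* pages currently bought by the client */
} space_bank;

/* buy [n] page(s): fails cleanly (-1) once the limit is reached,
   so there is no storage region for the client to overrun. */
int bank_buy_pages(space_bank *b, size_t n)
{
    if (b->allocated + n > b->limit)
        return -1;
    b->allocated += n;
    return 0;
}

/* destroy page <cap>: returns previously bought storage to the bank. */
void bank_destroy_pages(space_bank *b, size_t n)
{
    assert(n <= b->allocated);
    b->allocated -= n;
}
```

The key design point is that all bookkeeping lives inside the bank; the client holds only capabilities to the pages it bought.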

I think I have said previously that storage management is all handled at
user level. I meant that quite literally. The kernel does not do
storage management at all.

> > For POSIX, you definitely need reference counts, but these are not
> > capability-layer reference counts. These are reference counts that are
> > implemented by the POSIX server, which is an entirely different thing.
> > There is absolutely no need for these to be implemented in the kernel.
> In the Hurd, we don't have something like a POSIX server.  I hope that
> it would still work if this POSIX server was split into several
> servers, but I would have to study how you do reference counting on
> your POSIX server first.

When I talked about this with Neal and Marcus, I made the following
observation:

The POSIX API assumes a fairly tight integration around the process
structure. In particular, there are very close interactions involving
the signal mask. While a multiserver implementation can be built,
portions of the process state tend to end up in memory that is shared
across these servers in any efficient implementation.

Further, process teardown is always done in one particular server. This
is the place that should be responsible for the reference counting.
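A minimal sketch of what such server-side reference counting might look like (illustrative names, not any actual POSIX-server code): counts live entirely in the server, and the teardown path drops every reference the dying process held.

```c
#include <assert.h>

/* Hypothetical server-side object with a reference count kept by
   the POSIX server itself -- not by the kernel or capability layer. */
typedef struct {
    int refs;       /* references held by live processes */
    int destroyed;  /* set when the last reference is dropped */
} posix_object;

void obj_ref(posix_object *o)
{
    o->refs++;
}

void obj_unref(posix_object *o)
{
    assert(o->refs > 0);
    if (--o->refs == 0)
        o->destroyed = 1;   /* reclaim the server's own state here */
}

/* Process teardown runs in one server, which therefore owns the
   counting: it drops every reference the process held. */
void process_teardown(posix_object **held, int n)
{
    for (int i = 0; i < n; i++)
        obj_unref(held[i]);
}
```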

> I think that my example (again :)) with the notification server does
> not fall into these two categories.  A client would allocate some
> space on the notification server, to receive messages.

Can you describe for me what the notification server does in your
design?
> So when the client is done with a server, it could revoke the
> capability it gave to it to give it to another server.  By doing so,
> it ensures that there is always only one sender of a message to a
> message box.

If you need one sender and you potentially have many, it sounds like a
queueing mistake somewhere, but I would like to wait until I understand
the messaging server protocol better.
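One way revocation can enforce a single sender, sketched here purely as a strawman (these names and the generation-counter scheme are my illustration, not a protocol from either system): each send capability carries the generation under which it was granted, and revoking bumps the box's generation so all previously granted capabilities go stale at once.

```c
/* Illustrative sketch: exclusivity via revocation. A message box
   keeps a generation counter; a send capability is only valid while
   its generation matches the box's current one. */
typedef struct {
    unsigned gen;   /* bumped on every revocation */
} msgbox;

typedef struct {
    msgbox  *box;
    unsigned gen;   /* generation at grant time */
} send_cap;

send_cap grant(msgbox *b)
{
    send_cap c = { b, b->gen };
    return c;
}

void revoke_all(msgbox *b)
{
    b->gen++;       /* every outstanding send_cap becomes stale */
}

/* Returns 1 if the message would be delivered, 0 if the capability
   has been revoked. */
int can_send(send_cap c)
{
    return c.gen == c.box->gen;
}
```

Granting a fresh capability after each revocation guarantees at most one live sender at any time.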

> This is an example, maybe we don't need a message server in EROS (we
> planned to use this for blocking RPCs).  But still, a similar example
> could occur (revocation to ensure exclusivity).

Let me take this up separately. There *is* an issue here. I am not sure
if our solution would work for you, but at least it will provide
something to consider and possibly something to react to.

