Subject: Re: capability address space and virtualizing objects
From: Jonathan S. Shapiro
Date: Fri, 29 Aug 2008 09:47:41 -0400
On Fri, 2008-08-29 at 15:05 +0200, Neal H. Walfield wrote:
> At Thu, 28 Aug 2008 11:48:56 -0400,
> Jonathan S. Shapiro wrote:
> > 2. Who allocates [the buffer] storage?
> They are first class objects allocated out of activities (think space
> banks).
Is this a persistent system?
> > 3. Are message boundaries preserved?
> I'm not sure what this means.
When I asked, I was thinking about pipes, where writes above some size
are not guaranteed to be atomic. It subsequently became clear that this
was not really relevant to what you are doing.
> > Also, have you concluded that the double copy cost associated with
> > buffering is acceptable?
> A message buffer contains a capability slot identifying a data page
> (i.e., that can also be made accessible in the hardware address
> space). The data page contains a small header consisting of the
> number of caps and the number of bytes in the message payload. The
> remainder is the message payload. First there is an array of
> capability addresses, which the kernel looks up and copies to the
> message recipient.
So if I understand this, the payload that is actually enqueued is an
(address space, pointer) pair, where the pointer points to a message
descriptor that resides in sender space. If there is no provision for
"small messages", this will perform poorly, but I definitely see
attractions in having this type of mechanism. Charlie Landau suggested
the same approach for unbounded messages in EROS and Coyotos several
years ago.
But even so, that (address space, pointer) pair occupies storage in the
receive queue. Would it be correct to infer that the in-kernel message
structure is first-class?
> The user message buffer is only examined when the message is actually
> transferred to the target. Message transfer occurs as follows:
> - the kernel revokes the frame from the source user message buffer
> object (the next access of the source user message buffer will
> allocate a fresh frame),
I can see no reason why this revocation should be required. None of the
content that you describe as existing in this frame is in any way
sensitive, and there is no hazard to the kernel if the sender alters the
payload on the fly, provided minimal care is taken in kernel accesses to
the frame.
The more serious concern -- and only if Viengoos supports this -- is
explicit revocation of the frame in mid-transfer.
> - the kernel finds the first MIN(source.cap_count, target.cap_count)
> capabilities specified in the source message buffer and copies
> them into the slots specified in the target message buffer,
Unless there is a very small bound on cap_count, this phase needs to be
preemptible.
> - the kernel copies the MIN(source.cap_count, target.cap_count)
> capability addresses from the target message to the source message
I must not be reading this correctly. Why would it be appropriate for
the kernel to disclose to the sender the addresses in the *target*
address space to which the capabilities were transferred, especially if
they will immediately be cleared:
> - the kernel clears the target.cap_count - MIN(source.cap_count,
> target.cap_count) capability address entries in the source message
> buffer, and
even if this clearing is quick, there is an incorrect temporary exposure
of target information in your description.
> - the kernel frees the frame associated with the target user message
> buffer object and assigns it the frame that was associated with
> the source user message buffer object.
Somewhere in all this I am reasonably certain that a data payload gets
copied, but that description seems to have gone missing.
I would not have expected the old target frame to be freed. Given the
road you seemed to be proceeding down, I anticipated that the protocol
would clear the target frame and then execute a frame exchange.
> > > A message buffer contains a capability slot designating
> > > a thread to optionally activate when a message transfer occurs.
> > I am not clear what "optionally activate" means here. If it is important
> > to the question that you are trying to ask, then could you clarify?
> An activation on message delivery is often not required. Consider a
> typical RPC: a client sends a message to a server and gets a reply.
> If the client gets a reply, then the message that it sent must have
> been delivered. Thus, the client does not require a delivery
> notification.
Then I misunderstood completely. I do not understand how either the
server or the client are activated on message delivery. From the initial
description, I had thought that the purpose of the thread capability was
to notify the recipient that a message existed to be processed.
> > Ah. So what you mean to say is not that the activation is optional, but
> > that the presence of a thread capability in the buffer is optional?
> The thread capability is also required for looking up
> capabilities/capability slots.
Not so. As we have demonstrated in Coyotos, data pages and capability
pages can be mapped within a single address space. Something must name
the address space, and the thread capability is a reasonable choice, but
if the address space is first class then it could be named directly.
> > If so, I would suggest a change of terms. What you are describing as
> > "buffers" have traditionally been called ports or mailboxes. Generally,
> > a buffer holds payload, while the thing it is queued on is a port,
> > queue, or mailbox.
> A kernel message buffer can be queued on another message buffer. (A
> message buffer contains a head and node pointers.) The page of
> payload is associated with the message buffer.
> Do you still think it should be called a port? Is there some other
> better term?
I definitely think the name needs to change. When people hear the term
"buffer", what leaps to mind is "some resource that contains the payload
of a message". They definitely do not think "a thing on which a message
can be enqueued", and I cannot envision a scenario in which it makes
sense to enqueue one piece of payload on a second piece of payload. I
can envision useful scenarios in which queues might be first class and
capabilities to them might be transferred, but I cannot envision a
scenario in which a queue should get enqueued on another queue.
The concepts of "the message being transferred" and "the destination of
transfer" seem (to me) to want to be clearly separated. If there is a
reason not to do this, I would be interested to understand it, but
offhand I can see only complications and confusions arising from what
you seem to be describing.
> I want to virtualize everything... My litmus test so far was
> cappages. I had initially thought that virtualizing buffers/mailboxes
> would be easy but now that I think about it, that is not the case: the
> operations that manipulate a buffer/mailbox also need to be
> virtualized.
Yes. This type of reductio issue is precisely why load and store
instructions in EROS and Coyotos are semantically defined in terms of
the underlying capability invocations -- **even when they are performed
by the kernel**. In the abstract, these could be synthesized (though
misaligned references turn out to be *very* problematic). Some
qualitatively similar form of resolution would appear to be necessary
here.