
Re: ipc security


From: Bas Wijnen
Subject: Re: ipc security
Date: Thu, 14 Oct 2004 16:13:10 +0200
User-agent: Mozilla Thunderbird 0.8 (X11/20040926)

Volkmar Uhlig wrote:
> -----Original Message-----
> From: Bas Wijnen
> Sent: Friday, October 08, 2004 7:41 PM
>
>> My problem is on page 63 of the reference manual, where xfer pagefaults are described. It is specifically stated that either side can be starved by a malicious pager from the other side. The solution to this is not to use string items. You seem to think that specifying a 0 timeout will also solve the problem (by aborting at the first pagefault.) I don't see anything about 0 timeouts in the manual, but the presented solution (don't use strings) suggests to me that it doesn't help.

> Xfer timeouts are checked when a pagefault occurs, assuming that the
> copy operation is relatively short running and thus more fine-grain
> timing only incurs overhead.  The pagefault IPC is sent with a timeout
> which reflects the remaining xfer time.  If the malicious pager doesn't
> respond in time the IPC gets aborted.  (The current implementation of
> Pistachio may slightly vary, though.)  Overall, string IPC can be bounded
> and is specifically there to have a trusted memory transfer.

That sounds useful (although I agree with Marcus that a local/remote distinction seems more logical than send/receive.) However, I no longer understand the paragraph in the reference manual at all. I'll quote it:

<quote>
Xfer pagefaults happen while the message is being transferred and both sender and receiver are involved. Therefore, xfer pagefaults are critical from a security perspective: If such a pagefault occurs in the receiver's space, the sender may be starved by a malicious pager. An xfer pagefault in the sender's space and a malicious sender pager may starve the receiver. As such, xfer pagefaults are controlled by the minimum of sender's and receiver's xfer timeouts.

However, xfer pagefaults can only happen when transferring strings. Send messages without strings or receive buffers without receive string buffers are guaranteed not to raise xfer pagefaults.
</quote>

So I now think that the "starving" part can only happen when using infinite xfer timeouts. Is that correct?
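
To check my understanding, here is roughly what I think the sender side would look like with the Pistachio convenience interface (I'm writing the function and header names from memory, so they may be slightly off):

#include <l4/types.h>
#include <l4/ipc.h>
#include <l4/message.h>

/* Send a buffer as a simple string item with bounded xfer timeouts. */
int
send_string (L4_ThreadId_t server, void *data, int len)
{
  L4_Msg_t msg;
  L4_MsgTag_t tag;
  L4_StringItem_t item = L4_StringItem (len, data);

  L4_MsgClear (&msg);
  L4_MsgAppendSimpleStringItem (&msg, item);
  L4_MsgLoad (&msg);

  /* With L4_ZeroTime the transfer is aborted at the first xfer
     pagefault; with L4_Never a malicious pager on the other side
     could stall us forever.  */
  L4_Set_XferTimeouts (L4_Timeouts (L4_ZeroTime, L4_ZeroTime));

  tag = L4_Call (server);
  return L4_IpcSucceeded (tag) ? 0 : -1;
}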

Assuming I have now understood how it works, some more thoughts on the Hurd servers:

Even if xfer timeouts could be set separately for local and remote, that still doesn't really solve the problem. Since the server will never use a remote timeout other than 0, the client must always have the memory that receives the string resident. This means it somehow has to make sure that memory cannot be swapped out. Touching it just before the IPC does not guarantee that it isn't swapped out again before the string is transferred (especially if pagefaults may happen in the server's space, because the server may need to swap some pages in, which of course means others are swapped out.)
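
For concreteness, this is how I picture the server's receive side (again with the Pistachio convenience interface from memory; the buffer and its size are only an example):

#include <l4/types.h>
#include <l4/ipc.h>
#include <l4/message.h>

#define RCV_BUF_SIZE 4096
static char rcv_buf[RCV_BUF_SIZE];

/* Accept one simple string from an untrusted client without risking
   being starved by the client's pager. */
void
serve_one_request (void)
{
  L4_MsgBuffer_t msgbuf;
  L4_ThreadId_t client;
  L4_MsgTag_t tag;
  L4_StringItem_t rcv_string = L4_StringItem (RCV_BUF_SIZE, rcv_buf);

  L4_MsgBufferClear (&msgbuf);
  L4_MsgBufferAppendSimpleRcvString (&msgbuf, rcv_string);
  L4_AcceptStrings (L4_StringItemsAcceptor, &msgbuf);

  /* Remote xfer timeout 0: an xfer pagefault in the client's space
     aborts the transfer instead of blocking the server. */
  L4_Set_XferTimeouts (L4_Timeouts (L4_ZeroTime, L4_ZeroTime));

  tag = L4_Wait (&client);
  if (L4_IpcFailed (tag))
    {
      /* The transfer was cut off (or some other IPC error occurred);
         the client would have to resend. */
      L4_Word_t error = L4_ErrorCode ();
      (void) error;
      return;
    }

  /* rcv_buf now contains the client's string. */
}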

So either every server which allows string transfers must always allow "could you repeat that?"-requests, or there must be a way to tell physmem certain pages should not be swapped out. While having such an option may be useful, extensively using it doesn't sound like a good idea to me (if only because we may want to make it a restricted request.)
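
Just to illustrate what that alternative would mean (the physmem_pin/physmem_unpin names and signatures are purely made up here, physmem defines no such interface):

#include <stddef.h>

/* Hypothetical stubs: ask physmem to keep a region resident. */
int physmem_pin (void *addr, size_t len);
int physmem_unpin (void *addr, size_t len);

/* Keep the buffer resident around whatever routine does the actual
   string IPC. */
int
transfer_with_pinned_buffer (void *buf, size_t len,
                             int (*do_transfer) (void *buf, size_t len))
{
  int result;

  if (physmem_pin (buf, len) != 0)   /* may well be a restricted request */
    return -1;

  result = do_transfer (buf, len);
  physmem_unpin (buf, len);
  return result;
}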

I think the best solution is still to use a container from a container manager. I'll explain again how that would work, because I have the feeling I wasn't clear the previous time.

A container manager is started for every user, and in general the user's tasks will trust it (those which don't can start their own container manager if they want.) More importantly, the container manager trusts all tasks of the user (perhaps it wants to use a capability for this, so the user can throw it away before running untrusted tasks.) This mutual trust makes string transfers with infinite xfer timeouts possible.

When a task wants to transfer a string, it asks its container manager to make a container with the server if it doesn't have one yet, and tells the server to put the item in the container. This is of course slow the first time, but for all further string transfers from/to that server by that user, the container is simply reused. So during normal operation a transfer is only a simple IPC (with just a few MRs, to specify the task ID of the container) plus a memcpy. After that, the container manager can do a string IPC to the task (or before, if the string has to be sent to the server.)
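
In (pseudo-)code the fast path I have in mind looks roughly like this; all the stub names and signatures are made up for illustration, nothing like them is defined yet:

#include <l4/types.h>

/* Ask our container manager for a container shared with SERVER,
   creating one only if none is cached yet; the container is named by
   its task ID. */
L4_ThreadId_t get_container (L4_ThreadId_t container_manager,
                             L4_ThreadId_t server);

/* Ask the server to memcpy the requested data into the container.
   This is the "simple IPC with just a few MRs" from above: the MRs
   carry the container's task ID and what we want. */
int server_put_in_container (L4_ThreadId_t server, L4_ThreadId_t container,
                             L4_Word_t request, L4_Word_t len);

/* Let the container manager push the data to us with a string IPC.
   We and the container manager trust each other, so infinite xfer
   timeouts are safe for this step. */
int container_copy_to_task (L4_ThreadId_t container_manager,
                            L4_ThreadId_t container,
                            void *buffer, L4_Word_t len);

int
read_from_server (L4_ThreadId_t container_manager, L4_ThreadId_t server,
                  L4_Word_t request, void *buffer, L4_Word_t len)
{
  /* Slow only the first time; afterwards the cached container is reused. */
  L4_ThreadId_t container = get_container (container_manager, server);

  if (server_put_in_container (server, container, request, len) != 0)
    return -1;

  return container_copy_to_task (container_manager, container, buffer, len);
}

Sending a string to the server would be the same steps in the other order.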

So the idea of a container manager is to remove the costs of setting up a container, while still having the benefits of it. At this moment I think this would be the best way to transfer strings.

If you still disagree, please let me know why not, and what would be better.

Thanks,
Bas
