l4-hurd
From: Bas Wijnen
Subject: Re: Design principles and ethics (was Re: Execute without read (was [...]))
Date: Sun, 30 Apr 2006 03:52:26 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Sat, Apr 29, 2006 at 08:09:09PM -0400, Jonathan S. Shapiro wrote:
> > Going back to confinement, let me state it very clearly, once and for
> > all, because you keep getting it wrong:
> > 
> >   * * *   Every process in the Hurd will be confined.   * * *
> > 
> > It will be confined because it was created by its parent, so it meets
> > the definition of confinement in the most trivial sense.
> 
> This is complete nonsense. The confinement property states:
> 
>   A confined application can only transmit data through authorized
>   channels.
> 
> However, any reading of the original paper makes clear that the
> definition of confinement occurs in a context:
> 
>   - There is a process that is attempting to transmit.
>   - The process is free from external coercion in regard to
>     transmission. That is: transmission requires both permission
>     **and intent**.
> 
> What Marcus describes is a situation where (a) the parent establishes
> the authorized channels and (b) the parent can spy on the child's state.
> The second provision violates the requirement for intent.

Huh?  Why can't the child intend to transmit if it was started by the parent?
We are talking here about things like browser plugins.  The parent process
received the code somehow, has no way to properly verify it (the way an ideal,
non-existent virus scanner would), and simply executes it in a box.  The child
can do all kinds of things that the parent doesn't expect.  I don't see how
any of this is related to being able to look at the contents of the child's
memory.

> So: what Marcus calls "trivial confinement" is not confinement at all. I
> do not agree with what he proposes, but the policy that he proposes is
> not morally wrong. I *do* object very strongly to calling it
> confinement, because it is not confinement. What Marcus actually
> proposes is hierarchical exposure.

It is that too, but that is not what makes it confinement.  It is confinement
because the child process cannot communicate with anyone except with the
explicit permission of the parent (in the form of a capability transfer).
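
To make the "box" concrete, here is a minimal sketch in plain POSIX C.  This
is not Hurd code, and the file-descriptor-as-capability analogy is only an
assumption for illustration: the parent creates the child and grants it
exactly one channel before the untrusted code runs, so whatever the plugin
does internally, it can only communicate through what the parent handed over.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int chan[2];                      /* the one authorized channel */
    if (pipe(chan) < 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                   /* child: the untrusted "plugin" */
        close(chan[0]);               /* keep only the write end */
        /* Drop every descriptor the parent did not explicitly grant. */
        for (int fd = 0; fd < 256; fd++)
            if (fd != chan[1])
                close(fd);
        /* Whatever the plugin does, it can only talk through chan[1]. */
        const char msg[] = "hello from the box\n";
        write(chan[1], msg, sizeof msg - 1);
        _exit(0);
    }

    close(chan[1]);                   /* parent keeps only the read end */
    char buf[64];
    ssize_t n = read(chan[0], buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    waitpid(pid, NULL, 0);
    return 0;
}

The point is that the authorized channels are fixed by the parent at creation
time; the child's intent to transmit is unaffected by who started it.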

> Let me now go back to our discussion of system administrators and
> backup, which will illustrate that this is insufficient.
> 
> Marcus proposes that any "parent" should have intrinsic access to the
> state of its "children". This property is necessarily recursive. It
> follows that the system administrator has universal access to all user
> state, and that "safe" backups are impossible.

Nonsense.  As you said yourself a few months ago, the administrator might not
have the right to touch everything.  Many parts of the machine are simply not
owned by the administrator, so he cannot look at them.  User sessions are
typically part of those.  Even with non-trivial confinement the kernel can
still spy on everyone, because it has access to all memory.  Does that make
the system less secure?  No, because the kernel is part of the TCB and we know
that it _doesn't_ give access to the memory, even though it could.
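
As a toy illustration of that point (the interface below is made up for this
mail, not the actual kernel or space-bank interface): an allocator necessarily
holds references to all of its clients' memory, yet the operations it exports
simply do not include one that reveals it.

#include <stdlib.h>
#include <string.h>

struct bank {
    void  *pages[64];
    size_t used;
};

/* The only operations exported to clients: allocate and release. */
void *bank_alloc(struct bank *b, size_t size)
{
    if (b->used == 64)
        return NULL;
    void *p = calloc(1, size);
    if (p)
        b->pages[b->used++] = p;      /* the bank *could* read this... */
    return p;
}

void bank_release(struct bank *b, void *p)
{
    for (size_t i = 0; i < b->used; i++)
        if (b->pages[i] == p) {
            free(p);
            b->pages[i] = b->pages[--b->used];
            return;
        }
}
/* ...but there is deliberately no bank_inspect(): holding the power to look
   is not the same as exposing it. */

int main(void)
{
    struct bank b = { .used = 0 };
    char *secret = bank_alloc(&b, 128);
    if (secret)
        strcpy(secret, "private to this client");
    bank_release(&b, secret);
    return 0;
}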

> Further, it follows that cryptography is impractical, because there exists no
> location on the machine where a cryptographic key can be stored without
> exposure to the administrator.
> 
> That is: in Marcus's proposal, there is no possibility of privacy.

I believe I have disproven that statement.

> [Picture of silly proposal plummeting to the ground, leaving large
> cartoon crater.]

But I like pictures anyway. ;-)

> > My position on the confined constructor design pattern, ie non-trivial
> > confinement, is NOT that "it supports DRM, therefore it should be
> > banned".  My position on the confined constructor pattern is: "I have
> > looked at ALL use cases that people[*] suggest for it, and find all of
> > them either morally objectionable, or, in the context of the Hurd,
> > replaceable by other mechanisms which don't require it."
> 
> Excellent. Please propose an alternative mechanism -- ANY alternative
> mechanism -- in which it is possible for a user to store cryptography
> keys without fear of exposure. If we can solve this, then I am prepared
> to concede that we can store private data in general.

In general, keep the chain of parents short and trusted.  The topmost parent
will be the kernel, or the top-level space bank.  Very soon after that comes
the user session (possibly immediately).  To minimize the risk of exposure,
the user session should be the direct parent of the object which holds the
keys.  If all parents are trusted (part of the TCB, or "manually" trusted by
the user), the keys cannot be exposed.  This holds even more strongly if the
key-holder itself is made incapable of leaking the keys.  I am assuming here
that the TCB and the user session are built without the ability to give out
their children's memory for inspection.  That seems useful to me, at least.
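
A rough sketch of what such a key-holder could look like, under the
assumptions above (the interface and the toy keyed checksum are mine, standing
in for a real signing operation in a real crypto library): the key lives
inside the object, and the only operation it exports uses the key without ever
returning it.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct keyholder {
    uint8_t key[32];                  /* never leaves this struct */
};

/* Toy keyed checksum (FNV-1a over key and message); a real key-holder
   would call a proper crypto library here. */
uint64_t keyholder_sign(const struct keyholder *kh,
                        const uint8_t *msg, size_t len)
{
    uint64_t acc = 1469598103934665603ULL;
    for (size_t i = 0; i < sizeof kh->key; i++)
        acc = (acc ^ kh->key[i]) * 1099511628211ULL;
    for (size_t i = 0; i < len; i++)
        acc = (acc ^ msg[i]) * 1099511628211ULL;
    return acc;
}
/* Note what is missing: there is no keyholder_export().  Clients get
   signatures, never key material. */

int main(void)
{
    struct keyholder kh = { .key = { 42 } };
    const uint8_t msg[] = "backup manifest";
    printf("tag: %016llx\n",
           (unsigned long long)keyholder_sign(&kh, msg, sizeof msg - 1));
    return 0;
}

Combined with parents that do not hand out their children's memory, nothing
outside the trusted chain ever sees the key bytes.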

> However, I do not believe that this can be solved in principle without
> true confinement (as opposed to trivial non-confinement). There is a
> fundamental bootstrapping problem.

I'd like to hear why the above is not possible.  The fact that a process
technically can inspect the memory doesn't mean it has to.  And it certainly
doesn't mean it has to proxy this ability to anyone who requests it.
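
To illustrate the "proxy" part: a parent may well hold an inspection right for
its child, but its request handler can simply refuse to hand it out.  The
request codes below are invented for this example, not any real interface.

#include <stdio.h>

enum request { REQ_ALLOC, REQ_RELEASE, REQ_INSPECT_CHILD };

/* The parent's dispatcher: the inspect right exists, but no request from
   outside ever gets it forwarded. */
int handle_request(enum request req)
{
    switch (req) {
    case REQ_ALLOC:         return 0;     /* granted */
    case REQ_RELEASE:       return 0;     /* granted */
    case REQ_INSPECT_CHILD: return -1;    /* held, but never handed out */
    }
    return -1;
}

int main(void)
{
    printf("inspect request -> %d\n", handle_request(REQ_INSPECT_CHILD));
    return 0;
}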

> We are discussing a very important, foundational point. I believe that
> this debate should be public, that it should be uncompromising, and that
> it should evolve over time. Your ideas are incomplete. So are mine. Let
> us start a Wiki page for this discussion that will allow us to evolve
> it. Such decisions NEED the light of day.

Personally, I prefer the mailing list for discussions.  It would be a very
good idea if the resulting conclusions were archived in a better way than
"somewhere in the list archives"; for that a wiki is useful.  But I wouldn't
want to have to poll web pages just to see whether someone has said something.

> > Non-trivial confinement is a grave security threat.  It compromises
> > the user's autonomy, or what I called the "principles of user freedom":
> > The ability of the user to use, inspect, copy, modify and distribute
> > the content of the resources that are attributed to him.  It wrests
> > control over the resources out of the user's hands, and thus amplifies
> > existing imbalances of power.
> 
> Nonsense. In true confinement, the user remains in a position to say "I
> elect not to run an undisclosed application". Please explain in what
> sense this constitutes a loss of control or a loss of security.
> 
> No. Your real concern here is that the user will not *choose* to
> *exercise* this control. Your objection, fundamentally, is that users
> will not accept the dogma that you propose, and you therefore plan to
> preempt their right to choice. Ultimately, you justify this on the basis
> that these short-term decisions imply bad long-term outcomes, but you
> neglect the point that the users have a *right* to choose those bad
> long-term outcomes, even when they do not understand them.

If I am an expert, and I see that most people are not knowledgeable enough to
make the right decision for the long term (as you said, not because they
choose that, but because they simply don't understand), and it is within my
power to make that outcome better (that is, I can make a better world by not
letting people choose something they would not understand anyway), then IMO it
is my moral obligation not to give them that choice.

> This is not defense of morality or ethics. It is a classic example of
> unethical behavior.

> If I have a right to choice, it is a right to *stupid* choice.

Choice is not a right in all situations.  It is not wrong to not offer a
choice.  It _may_ be nice to offer a choice, but even that is not always the
case.  Example: In the Netherlands, our health-care system has just been
changed, and now we _must_ choose all kinds of things.  Nobody I know knows
_or wants to know_ about the things we must choose.  But making the wrong
choice can still have very bad effects.  IMO it was morally wrong of our
minister to impose this system on us.  (He sees it differently, or at least
claims he did it because he thinks it makes things better.  I'm not sure
whether he really believes that himself.)

> The ethics you propose are the ethics of Mao and Stalin, not the ethics of
> reasonable adults.

First you say that people may not understand the things you want to force them
to choose, but still they should be considered "reasonable adults"?  A
reasonable adult will refuse to choose in such a situation.  However, a
real-world adult will simply choose whatever the commercials tell him, because
he doesn't know what it is about anyway.  I don't think it is ethical to
support this pattern.

> You propose to solve *your* long-term social objectives by undermining the
> social process of consensus.

What consensus?  The idea was that people didn't understand, and that's _not_
going to change.  People don't care about things at this level.  That's what
experts are for.  Which means they want us to solve it, not to give them a
choice.

> If there is a better definition of evil, I do not know it.

I do.  Evil is when a person acts in a way that is against his or her own
moral values.  That is, he or she does things which he or she thinks will make
the world worse.  Intent is important here.  As Napoleon is said to have said:
Never attribute to malice that which can be adequately explained by stupidity.

> The MORAL behavior would be to escalate the issue into public awareness, and
> seek to change the public decision process OPENLY.

Not at all.  People don't want to choose everything.  Lots of things should be
sorted out by experts.  If there are several ways to do it and it doesn't
matter for the end result, then let the experts just pick one.  If it does
matter for the end result, then they may offer a choice, together with an
explanation of what each option means for the end result.  If that explanation
is not understandable, then the choice shouldn't have been offered in the
first place.

Don't get me wrong: trying (and almost certainly failing) to get the public to
understand this and to care about it is a noble thing to do.  But relying on
that succeeding for plans which are otherwise evil (because they will make the
world worse) is naive at best.

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
