l4-hurd


From: Marcus Brinkmann
Subject: Re: Design principles and ethics (was Re: Execute without read (was [...]))
Date: Sat, 29 Apr 2006 11:11:26 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sat, 29 Apr 2006 01:22:27 -0400,
"Jonathan S. Shapiro" <address@hidden> wrote:
> Perhaps I have misunderstood your position on confinement. If so, I
> invite clarification.

You did.  

> The problem with your scenario is that it presumes you know what the gun
> will be used for. Certainly, if I come to you and say "Can I borrow your
> gun so that I can shoot Bas (nothing personal)." then you should say
> "absolutely not."
> 
> However, if I instead come to you and say "Can I borrow your gun?" it is
> another discussion entirely. A gun is a tool, and there are legitimate
> uses for it. Perhaps I wish to put a damaged animal out of its misery.
> Perhaps I have reason to know that Bas is planning to break into my home
> and I wish to defend my home. Perhaps I merely wish to shoot small
> pieces of lead at inoffensive paper targets. Perhaps I plan to destroy
> the gun. Without further discussion, you do not know whether the use of
> the gun is proper.

This is true.  However, in the second scenario, my next question would
be: "What do you want to use it for?", and you could tell me.  Then
the next thing I'd say is: "I do not own a gun."  Then we would have a
long discussion about interesting subjects, like what the safest and
least painful way is to put a damaged animal out of its misery (hint:
this involves a veterinarian, at least in the preparation), or the
effect of the presence of firearms in an emergency (which can
substantially increase your risk of being killed), or the
entertainment value of possessing destructive power (and what other
values one can enhance with play).

Still, although I think your examples are not very good, I can go
along with you and make up my own.  Say I lived in a remote place in
Canada where bears stroll around; then I would consider owning tools
that can deal with a dangerous bear trying to tear apart my family.
So, with many caveats and only under extraordinary circumstances, a
weapon may be a useful tool, although my suspicion is that even in
these cases we could develop better, safer and more effective tools if
we wanted to and directed research to that end.

> In the same way, if I come to you and say "I would like a confinement
> solution", you do not know whether I plan to use it for DRM or
> for privacy.
> 
> Your position on confinement appears to be "It supports DRM, therefore
> it should be banned". This is similar to the position "guns can be used
> to kill people, therefore guns should be banned." Both positions are
> dogma: they do not admit reason. Both positions are, in fact, immoral.
> They reject the legitimate uses of both tools for which no currently
> known alternative tool is realistically feasible.

Well, you just flat out got it wrong.  My position is nowhere near as
simplistic.

First, let me address your moral objection: it's wrong.  If a tool is
dangerous and needs to be controlled, then it is dangerous and needs
to be controlled.  The absence of better tools is not a sufficient
reason to allow it.  It is a good reason to develop better tools,
though, assuming that there are legitimate _goals_ in the first place
for which a use is perceived to be desirable.  That this is actually
true is easily seen when you look at how highly controlled drug
production, sale and use are, or how controlled many substances used
in industrial processes are.  The legitimate uses, if they exist, may
be a reason to allow controlled use of a technology instead of banning
it outright, but that's about it.

Going back to confinement, let me state it very clearly, once and for
all, because you keep getting it wrong:

  * * *   Every process in the Hurd will be confined.   * * *

It will be confined because it is created by its parent, so it meets
the definition of confinement in the most trivial sense.  To talk
about this subject successfully, we need a better vocabulary.  My
current suggestion is to differentiate between "trivial confinement",
which is what the Hurd will do, and "non-trivial confinement", which
is the confined constructor design pattern.  This is not a
particularly good choice of terms, but at least it provides a clear
separation.
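
To make this vocabulary a bit more concrete, here is a rough sketch.
The names and the Python form are my own invention for this mail; they
are not Hurd or EROS interfaces, and only show who controls what in
the two patterns.

# Illustrative sketch only: invented names, not actual interfaces.

class Process:
    def __init__(self, image, caps):
        self.image = image   # the program's code and data
        self.caps = caps     # the only authority the process holds

# Trivial confinement: the parent supplies the image and the capability
# list itself.  The child is confined simply because it holds nothing
# the parent did not give it, and the parent can inspect or modify the
# child at will.
def spawn_trivially_confined(image, caps_from_parent):
    return Process(image, caps_from_parent)

# Non-trivial confinement (the confined constructor pattern): the
# program's author seals the image inside a constructor.  The user asks
# for an instance and passes in some capabilities; the constructor
# guarantees that the instance holds nothing else (so it cannot leak
# the user's data back to the author), but the user cannot read or
# modify the sealed image it runs.
class Constructor:
    def __init__(self, sealed_image):
        self._sealed_image = sealed_image   # opaque to the requestor

    def yield_instance(self, caps_from_requestor):
        return Process(self._sealed_image, caps_from_requestor)

The contrast I care about is simply who controls the image: the parent
in the first case, the program's author in the second.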

My position on the confined constructor design pattern, i.e.
non-trivial confinement, is NOT that "it supports DRM, therefore it
should be banned".  My position on the confined constructor pattern
is: "I have looked at ALL use cases that people[*] suggest for it, and
find all of them either morally objectionable or, in the context of
the Hurd, replaceable by other mechanisms which don't require it."
Note the absence of the word "ban" here.  Note the absence of the term
"DRM".

[*] These people include you and the professor of the Applied Data
Security group at my university.  I have also listened to many
suggestions from other people, from papers, and from the world wide
web.  Heck, I even considered Linus Torvalds's bogus example of the
"private diary" that he recently proposed in an interview.

I cannot, right here and now, fully lay out the complete argument for
that position.  For now, you will have to take it at face value.  I
have some theories about _why_ there are no use cases I find
legitimate, but they are still somewhat immature.  They have to do
with questions of ownership and control, which are intrinsically
political, non-technical subject matters.  I will give a hint at the
end of this mail.

> Banning a device (or a technical means) is justified ONLY when a society
> concludes that the overwhelming majority of actual uses are harmful to
> others. Both requirements must be met: the overwhelming majority of uses
> must be harmful, and the harm must be caused to third parties.
>
> In my opinion, you have not satisfied either test. In fact, I do not
> believe that either test *can* be satisfied today. There is an
> insufficient base of knowledge about the uses of confinement from which
> to draw any conclusion.

Sometimes it is very hard to prove that something does not exist.  I
have some abstract arguments for why the confined constructor model is
likely to violate ethical boundaries.  However, it is not a
requirement to demonstrate that a legitimate use is impossible.  It is
sufficient to decide that there is no legitimate use in sight, or
likely to appear.

> If I really wanted to ban something, I would ban software. Software has
> been responsible for *far* greater harm than DRM. Think about it.

Which just proves that your metric (or, let me say, your insinuated
metric) of how to decide what to ban, and why, is completely wrong.
Luckily, it is not my metric.
 
> > You seem to think that a principle and a dogma are two different
> > things.  I don't see why.  They seem to differ mostly in what people
> > connotate with them.
> 
> A dogma is a position that does not admit of change or reason. It is
> therefore irrational. A principle is a position based on the best
> available reason, but is subject to change in the face of better
> information, new facts, or clearer understanding. The DRM position is a
> dogma.

Well, whatever "the DRM position" is, I don't support it then,
accepting your definitions of the words for the scope of this discussion.

> > > ...The people I love around me sometimes have
> > > to protect me from myself, and I sometimes have to protect them from
> > > themselves. And we are generally very grateful that we did that to each
> > > other. That could be the case between Alice and Bob here.
> > 
> > Yes, but that is a social contract between Alice and Bob, and I don't
> > think that's a good guiding principle for an operating system design.
> 
> As an operating system architect, I am substantially more expert and
> more knowledgeable than my users. I can anticipate, and architect to
> prevent, errors and compromises that most users cannot even recognize!
> Any knowing failure to do so is an ethical compromise. Any failure to do
> so that leaves my users open to non-consensual harm is immoral. Note
> that a user cannot give consent concerning issues that they do not
> comprehend.
> 
> I believe that rejecting confinement as a basic building block is a
> profoundly unethical decision.

Interesting.  Because the exact same line of argument leads me to
vehemently reject "non-trivial confinement".

Non-trivial confinement is a grave security threat.  It compromises
the user's autonomy, or what I called the "principles of user
freedom": the ability of the user to use, inspect, copy, modify and
distribute the content of the resources that are attributed to him.
It wrests control over these resources out of the user's hands, and
thus amplifies existing imbalances of power.

I introduced the user's autonomy (or the user's freedom, if you will)
as a taxometer, which allows me to reason theoretically and
_objectively_ about certain technologies.  It happens that, so far,
all use cases I found morally objectionable were "detected" by this
taxometer.  There were also a couple of cases (the cut&paste protocol
in the EROS window system, and some system services) which were
"detected" by this taxometer but which I did not find morally
objectionable.  It happens that in those cases I could meet all other
design goals and almost all other design principles (except for POLA,
which is in obvious conflict) with a modified design.  Only this
experience allows me to state my position above with confidence.
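
To illustrate what I mean by the taxometer being something that can be
evaluated, here is a toy sketch.  Again, the names and the resource
model are made up for this mail; it is only meant to show that the
criterion is a mechanical check, not a matter of taste.

# Illustrative sketch only: a made-up resource model, not Hurd code.

from dataclasses import dataclass

# The abilities that make up the user's autonomy over a resource.
USER_FREEDOMS = {"use", "inspect", "copy", "modify", "distribute"}

@dataclass
class Resource:
    owner: str    # the user the resource is attributed to
    rights: dict  # user -> set of abilities actually granted

def detected_by_taxometer(resource):
    """True if the design denies the owner part of their autonomy."""
    granted = resource.rights.get(resource.owner, set())
    return not USER_FREEDOMS <= granted

# Trivial confinement: the child's storage is attributed to the user,
# and the user keeps full rights over it, so nothing is detected.
child_storage = Resource("alice", {"alice": set(USER_FREEDOMS)})
assert not detected_by_taxometer(child_storage)

# Non-trivial confinement: the sealed image occupies storage attributed
# to the user, but the user cannot inspect or modify it, while the
# program's author can.  The taxometer flags this.
sealed_image = Resource("alice", {"alice": {"use"},
                                  "author": set(USER_FREEDOMS)})
assert detected_by_taxometer(sealed_image)

Of course the real resources and rights are system objects and
capabilities, not strings; the sketch only shows the shape of the test.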

The net effect is that I find this taxometer a useful design principle
in itself: It allows me to reject designs that compromise the user's
autonomy.  My plan, which has been busted by this preliminary
discussion, is to introduce a new design principle, user freedom,
which allows me to do exactly that.  I will do this, in all formality,
and with an extensive argument, at some later point.  Consider this a
sneak preview.

To date there has not been any need to state user autonomy, or user
freedom, as a design principle, because all operating systems
fulfilled it in the trivial sense.  Upcoming technology threatens to
violate it, so I think it is very appropriate to counter this with a
clearly stated objective which provides a differentiation criterion
that can be evaluated on its own merits.  Although I admit some
personal bias in the process of getting there, the taxometer itself is
objective in nature and can be evaluated neutrally.

I do not consider the above line of argument complete.  More needs to
be said on the definitions of the terms used, and on the implications.
Also, I have not yet shown how the non-evil uses can be replaced by
other mechanisms.  However, I have already said more than I wanted to
at this point (exactly because I know it is not a complete argument),
and I have also run out of time, so I gotta run now.  Just one last
note:

I appreciate that you want to understand my position and ask for
clarification.  But to really understand my position, you have to stop
simplifying it.  Because it is intrinsically bound to questions of
ownership, non-trivial confinement is not a purely technical subject
matter but has a political (in the purest sense of the word)
dimension, and that means we are talking about a very complex subject
about which little is known with scientific certainty.

Thanks,
Marcus




