Re: The Perils of Pluggability

From: Jonathan S. Shapiro
Subject: Re: The Perils of Pluggability
Date: Mon, 10 Oct 2005 08:37:45 -0400

On Mon, 2005-10-10 at 11:06 +0200, Ludovic Courtès wrote:
> Users should also be able to run third party servers (e.g. one can
> download a filesystem server implemented by an unknown hacker, compile
> it, run it, and use it, even if the system's administrator doesn't like
> it).

Yes and no. This is a nice goal. The problem is that actions by one user
have impact on another user. Please explain why your proposition is not
equivalent to the following:

  I and my neighbor are farmers. We grow wheat in adjacent fields.
  I should be able to set fire to my wheat without regard to the
  hazard to my neighbor.

Still, I would like to build systems that can handle what you propose.
Doing this ethically requires architectural support.

>   When doing so, users have to be aware of whether/how risky it is
> to run this code.

You appear to be saying that "the responsibility lies with the user."
This is a wonderful principle. Unfortunately it ignores the reality that
the vast majority of users cannot do that. These users are not bad
users. They simply do not understand -- and never will understand -- the
complexities of computing systems well enough to satisfy this principle.

I believe that you drive a car. Can you explain its construction down to
the last nut, bolt, and circuit? Can you clearly state the full
implications of changing your type of gasoline, or inserting a fuel
additive?

Perhaps you can. How about for additions to your house, or for hundreds
of other daily actions that have complex consequences? Not everybody can
be a software expert.

I do not propose that we should deprive the user of choice. Rather, I
propose that we need to design systems where the following properties
hold:

 1. The majority of the time, the consequences of actions do not
    violate expectations.
 2. When they do, the consequences are contained and recoverable.

>   Here, whether the suspected program is a server or a
> "regular application" makes no difference.


> Fortunately, the libre software world comes with some sort of a
> "reputation mechanism" which allows users to get an idea of how much
> trust they can put in a program.  And this is far superior to
> centralized program certification systems (think of "trusted
> computing"...) where there's actually a single point of trust: the
> company which certifies programs.

ESR and I have argued about this since he first coined the term "many
eyeballs effect." It would be nice if this effect turns out to be real.
I hope that it *does* turn out to be real. However, here are the facts:

  There is absolutely no credible evidence that the many eyeballs
  effect is real. To my knowledge, there has been no attempt to do
  any quantitative measurement of benefit, and every qualitative
  argument I have heard can be accounted for by other effects.

  There *are* clear, credible, and quantifiable benefits of third-party
  certification systems. These have been repeatedly measured for many
  different certifying systems. A few certification systems have
  certainly turned out to be bad. Most of the systems measured have
  turned out to have significant benefit. In particular, the Common
  Criteria system has quite dramatic measurable benefit.

  The argument that certification implies a central source of trust
  is wrong. At worst, certification only implies a central source of
  trust for a single certification.

I remain hopeful that open source will turn out to be more secure. I
think there are a number of social reasons why this *should* be true.
However, given the documented facts in hand, I would say that any user
who relies exclusively or primarily on the fact of open source as the
source of their security is naive, and any *engineer* who does so is a
hazard to their customers.

