
From: Emmanuel Colbus
Subject: About explicit security bypass (Was : Re: Changing from L4 to something else...)
Date: Sun, 30 Oct 2005 22:54:11 +0100 (CET)

Jonathan S. Shapiro wrote:

> > - Recovery after his own errors (for example, if the users should never
> >   have had access to the system speaker, but nobody noticed it before,
> >   the administrator has to modify the configuration, but also to stop
> >   the annoying sound. It is not realistic to believe that the
> >   administrators won't do any error);
> I agree. However, most administrator errors can be divided into two
> categories:
>   - mistakes in configuration (which we agree needs to be correctable)

Yes, but the problem is that we cannot be prepared for every error that can
be made on a system. Therefore, sometimes there will be no tool available:
the administrator will need his knowledge, and all the data the system
can give him, to solve the problem.

As a consequence, there is a need for some means of giving the administrator
very extensive knowledge about the state of the whole system.

>   - errors that result from trying to bypass security and botching it.
>     The best fix for this is not to allow security bypass.

Theoretically, yes, but that's like unplugging a computer to protect
it: it solves a problem by removing a functionality. Since you don't want
the functionality, its removal causes you no trouble, but others (like
me) may disagree.

So, let's discuss it:

> > 
> > - Security bypass (!). I personnally think one should sometimes be able to 
> >   do anything on the system, even to damage it if he explicitly wants it,
> >   in order to handle _quickly_ any unexpected event. After all, the balance
> >   between security and availability has to be set by the owner of the 
> > computer;
> >   and he may not care really about security, but very much about 
> > availability.
> Wonderful! We have hit the first issue that I can identify where I have
> looked at the issue and said "yes, I understand the argument, it is
> credible, and I would not do it." Here is my answer for the Coyotos
> native OS. I am not convinced that this is the right answer for Hurd.
> I have three reasons for disliking this case:
> 1. Based on experience, the overwhelming majority of administrators do
>    not understand the complexities of current systems well enough to do
>    this properly and survivably. The issue you raise is still important.
>    The problem is that the solution you propose almost always leads to
>    a situation that is worse rather than better. A feature that leads
>    to mistakes 95%+ of the time is something you remove, not something
>    you justify.

A feature that you remove _or_ that you tell your users not to use.
There is a problem, but that doesn't mean _you_ (the system) have to solve
it: I think it has to be solved by education, not by coding.

> 2. If security is ever deactivated on a live system, it is almost
>    impossible to re-establish. We *know*, for example, that the average
>    time from power-up to penetration of a new Windows machine on a
>    cable network is 12 minutes and falling.

We could allow only certain known users to deactivate security, and
only for themselves (that's the way the root account and the wheel
group usually work). The fact that the administrator has bypassed all
security measures doesn't mean that security has been deactivated for
anything other than his own actions; and, if you consider that his actions
were "not aggressive", then security has never been compromised at all.

> 3. Our current understanding of the proper balance between security and
>    availability is based on a "free rider" economy: we are not liable
>    to others for the consequences of our insecurity. I do not wish to
>    support this free ride in my system designs.

I understand. But I personally think that the problem lies in this absence
of liability, and that it has to be solved there, not in the system design.

> At a meta-level, however, I have a different view that brings me to the
> same conclusion.
> All of us on this list (myself included) are the second or third
> generation of the "computer lib" era. Our mentors (or, in some cases,
> their mentors) saw computing as a way to liberate society and promote
> individual empowerment. It seems to me that this idea survives most
> powerfully in the free software movement, and it is a beautiful idea.
> Our notion that owners of computers are all administrators comes from
> this value system.
> The problem with this idea is that it is naive. The vast majority of
> users lack the specialized skills to be effective computer
> administrators or developers. When we fail to provide these users with
> "turn key" solutions, we *disempower* them. We create situations where,
> instead of being able to participate in the online world safely, they
> are vulnerable and fearful. This is not liberation. It is a form of
> social terrorism. Good intentions, but the result is terror on the part
> of the broad user base. And the users are captive: they are afraid, but
> they believe that getting off of computers would place them at an
> impossible disadvantage.

Yes (although it seems to me that many of them are not even aware that
they lack these skills, and/or of how important these skills actually are).

> And in the corporate setting, where professional administrators *do*
> exist, we tend to forget that administrators are no better or worse than
> the rest of the population. A few are superb. The majority are
> imperfect, and a small number are actively hostile. It is very startling
> how many people in the grey hat and black hat communities started as
> system administrators. These are the very rare few, but they exist.

This classification is interesting, but I think it's incomplete.
We also have to take into account the skills of these people.

What if the administrator is highly skilled and hostile? Then all your
attempts to protect the users from him will simply fail: since he needs
access to the security-critical components of the system, he can simply
recompile them after adding all the holes he needs.

Worse, by claiming that your system can protect the user from a hostile
administrator, you give the owner of the computer a false sense
of security, which will certainly make the hostile administrator's
actions far easier.

Now, what if he is highly skilled and absolutely honest? Then, by taking
some possibilities away from him, you may make him dissatisfied with your
system, and he may try to use another one. Also, don't forget two
points: first, the opinions of some of these people will be followed by an
impressive number of less skilled ones; second, if a new system comes up,
it is of course most likely to be adopted first by such people. So the
importance of this population is far higher than its numeric count.

And what if the administrator is hostile, but has only average skills?
Happily, nobody can (currently) control the information on the Internet,
but that also means the hostile administrator will be able to find
everything he needs for his attacks somewhere. So he, too, may
succeed in his actions.

And even if the administrator has poor skills, he will still be able to
do some very dangerous things: for example, reading or modifying
users' data (since he needs to be able to back it up and restore it).

> Whether to cope with evil or simply to survive the fact of natural human
> error, I think that we need to structure the choices in our systems
> better and more narrowly. I do not believe that arbitrary
> configurability is liberating. I believe that the socially and
> technically correct choice is to minimize options to a small and
> manageable number, and then let evolutionary pressure and feedback teach
> us how to adapt.
> This applies several orders of magnitude more strongly to security
> configuration options.
> Well, that's my opinion. 

Here is mine:

1) The administrator has to be trusted *always* and *completely*; and the
owner of the computer has to be aware that it is impossible to provide
*any* protection against the administrator.

2) The end user has to be *able* to bypass security, but not by
default. The operations needed to do so may be somewhat
complex, and should always be clearly labelled "very dangerous",
"for experts only", and "no normal administrator needs to do something
like this", but it has to be possible.

This removes any false sense of security, keeps the "normal user" secure
(unless he is foolish enough to bypass the protections anyway, but
fighting human stupidity is a goal that would delay the Hurd by a few
additional thousands of centuries), and gives the skilled ones the ability
to do what they want.
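The second rule above could be sketched as follows (a purely hypothetical illustration; the warning text, class, and function names are invented for this example):

```python
# Hypothetical sketch of rule 2: bypassing security is possible, but
# never the default, and the operation is deliberately explicit and
# clearly labelled as dangerous.

WARNING = ("VERY DANGEROUS -- FOR EXPERTS ONLY: no normal administrator "
           "needs to do something like this.")


class System:
    def __init__(self):
        self.bypass_enabled = False  # secure by default

    def enable_bypass(self, confirmation):
        # The caller must echo the warning back, so enabling the
        # bypass is an explicit, deliberate, labelled act.
        if confirmation != WARNING:
            raise PermissionError("bypass refused: confirmation mismatch")
        self.bypass_enabled = True

    def dangerous_operation(self):
        if not self.bypass_enabled:
            raise PermissionError("blocked by security policy")
        return "operation performed"


system = System()
# system.dangerous_operation() would raise here: secure by default.
system.enable_bypass(WARNING)      # explicit, clearly-labelled opt-in
print(system.dangerous_operation())  # operation performed
```

The design choice is that the default path is always the secure one, and the bypass costs the expert a deliberate extra step rather than being denied outright, which matches the balance argued for above.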

> It may not be the right view for the Hurd.

My own view may not be the right one for the Hurd either, and it is not up
to me to make the decision.

Unfortunately, it seems to me that the one who makes the final decisions
here is RMS, and I believe he will disagree with me (I am thinking notably
of what he said about the wheel group in su(1)'s manual page). But I would
like to hear what the other people here think of my arguments.

