
Re: CVS access control


From: Greg A. Woods
Subject: Re: CVS access control
Date: Fri, 28 Sep 2001 20:53:50 -0400 (EDT)

[ On Friday, September 28, 2001 at 23:32:55 (+0400), Tobias Brox wrote: ]
> Subject: Re: CVS access control
>
> First they send this really fancy toy - a pocket device that generates
> one-time codes - in the mail.  It arrives in an unlocked mailbox, and it's
> handed over by some mailman who could steal the letter without any trace.
> To be sure nothing bad happens, they send the PIN code for the device in an
> ordinary paper letter two days before the device is sent.

that's about a dozen times better than any Canadian bank or credit
union, I'm afraid to say....

(so far as I know no bank issues personal browser security certificates
and configures their secure servers to only accept connections from
browsers presenting valid, signed certificates....  that would at least
get rid of perhaps half the concerns I have about client host security)
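
For what it's worth, requiring client certificates is not hard to express
in most TLS stacks.  Here's a minimal sketch using Python's standard ssl
module; the certificate file names and port are only placeholders:

    import socket
    import ssl

    # Sketch of a server that refuses any client not presenting a
    # certificate signed by our own CA.  File names are placeholders.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.load_verify_locations(cafile="customer-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

    with socket.create_server(("0.0.0.0", 8443)) as sock:
        with ctx.wrap_socket(sock, server_side=True) as ssock:
            conn, addr = ssock.accept()
            print("client certificate:", conn.getpeercert())
            conn.close()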

> > However as yet nobody's really identified any generic security policy
> > requirements for CVS that cannot be implemented with filesystem ACLs.
> 
> We have touched some few things the file system can't give with the current
> CVS implementation:
> 
> - ACLs on branches
> - ACLs specified down to individual files, not only directories
> - ACLs on branch and tag creation and redefining.
> 
> In addition, it might be possible to put separate permissions on adding,
> removing, committing, checking out, reading history, etc, etc.  Some of
> those things can be done through the file system, but not all of them, and
> you have to know CVS pretty well to deal with it.
> 
> I can hardly argue that any of those things are important.  Not for me, at
> least.  I can't tell for others.

I'm not sure ACLs on branches are meaningful at all to anyone, at least
not in the bigger picture.  I suspect anyone who thinks otherwise is
either not aware of the way security works in and with CVS, or is under
some dreadful misimpression about what kind of protection ACLs on
branches would afford in the real world.  Defining a policy is one
thing, but when you go to actually implement technical controls to
manage that policy you've got to weigh the cost of such an
implementation, both up-front and in long-term usage impacts, against
the relative benefits as identified by a threat and risk assessment.

ACLs on specified files can be easily achieved through the appropriate
use of a modern ACL-capable filesystem.  However I'm not so sure ACLs on
individual files are necessary either -- this is, after all, only a
matter of structure and hierarchy.  If any file is important enough to
have restricted access then certainly it is important enough to be
placed in its own directory.  Given normal traditional unix filesystem
semantics this is absolutely necessary anyway since any directory that's
writable by a user effectively causes the files it contains to be
writable by that user too.  Specifically in the case of CVS it would be
trivial for me, with write access to the directory, to bypass CVS ACLs and
trick some authorised person into working not with the "official"
version of a file, but rather with one of my own devising.  Only by
putting files in a directory writable only by the authorised committers
can you be sure that unauthorised changes cannot be made to them.
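
To make that layout rule concrete ("any restricted file lives in a
directory writable only by its authorised committers"), it can be checked
mechanically.  A minimal sketch in Python, where the repository path and
group name are placeholders for your own:

    import grp
    import os
    import stat

    # Walk a CVS repository and flag any directory that is world-writable,
    # or group-writable by a group other than the one we expect.
    REPO = "/cvs/repo"            # placeholder
    ALLOWED_GROUP = "committers"  # placeholder

    allowed_gid = grp.getgrnam(ALLOWED_GROUP).gr_gid

    for dirpath, dirnames, filenames in os.walk(REPO):
        st = os.stat(dirpath)
        if st.st_mode & stat.S_IWOTH:
            print("WORLD-WRITABLE:", dirpath)
        elif (st.st_mode & stat.S_IWGRP) and st.st_gid != allowed_gid:
            print("writable by unexpected group %d: %s" % (st.st_gid, dirpath))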

That leaves ACLs on branch and tag creation and deletion, which are very
CVS-specific operations which indeed might warrant additional controls,
and I've already described a simple way to document policy and provide a
superficial audit trail, if not exactly enforce it in a fool-proof way.
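
One way to do that is via CVS's own CVSROOT/taginfo hook.  A rough logging
filter might look like the sketch below (a sketch only, not necessarily the
mechanism described earlier); note that the argument convention taginfo
uses (tag name, operation, repository directory, then file/revision pairs)
varies between CVS versions, so check it against your own cvs(5), and the
log file path is a placeholder:

    #!/usr/bin/env python3
    # Sketch of a CVSROOT/taginfo filter that appends every tag/branch
    # operation to a log file.  With most CVS versions a non-zero exit
    # status here would make CVS refuse the operation outright.
    import os
    import pwd
    import sys
    import time

    LOGFILE = "/var/log/cvs-tags.log"   # placeholder path

    def main(argv):
        who = pwd.getpwuid(os.getuid()).pw_name
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        with open(LOGFILE, "a") as log:
            log.write("%s %s %s\n" % (stamp, who, " ".join(argv)))
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))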

> As long as people have write permissions to the repository, they can easily
> forge any audit trail.  That is a real worry, I think - and it can only be
> solved by some tripwire system.

There are many viable techniques for secure logging of an audit trail.

I'm not sure any of them are necessary in a versioning system though.

The versioning system is after all implicitly, and explicitly, an audit
trail already.  Sometimes it's necessary to audit the auditors, but in
this case that probably already happens through external procedure and
process, and out-of-band auditing is always far preferable.
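
(For completeness: one common technique for tamper-evident logging is to
chain each entry to a digest of the entry before it, so that rewriting any
part of the history breaks every digest that follows.  A minimal sketch,
not tied to CVS in any way:)

    import hashlib

    GENESIS = "0" * 64

    def append_entry(path, message):
        """Append a message, chained to the digest of the previous entry."""
        prev = GENESIS
        try:
            with open(path) as log:
                for line in log:
                    prev = line.split(" ", 1)[0]
        except FileNotFoundError:
            pass
        digest = hashlib.sha256((prev + message).encode()).hexdigest()
        with open(path, "a") as log:
            log.write("%s %s\n" % (digest, message))

    def verify(path):
        """Return the first corrupted line number, or None if the chain holds."""
        prev = GENESIS
        with open(path) as log:
            for n, line in enumerate(log, 1):
                digest, message = line.rstrip("\n").split(" ", 1)
                if digest != hashlib.sha256((prev + message).encode()).hexdigest():
                    return n
                prev = digest
        return None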

> If you're implying that terrorist acts in America happened because of a
> "false sense of security", I could hardly disagree more.  In the real world,
> there is nothing like "real security".  People who are determined enough
> to perform terrorist acts, and who have enough resources, will always find
> new ways to carry them out.

In the real world if you look at the facts here you'll find that the
people who carried out those terrorist actions were, in all but maybe
one case, able to do so to completion specifically because those
immediately around them had a false sense of their own security.

There's no doubt that it took determination.  However no amount of
determination will allow anyone to go so far if everyone around them is
equally paranoid and watchful.

> I can see situations where "a false sense of security" combined with
> "security through obscurity" would be very, very bad.  Take the Internet
> banking example above, for instance, the customer might be completely
> clueless about the technical details but very aware that all his money has
> disappeared.  The bank can insist that the customer himself wired all his
> money to some Swiss bank account, as they're cocksure their system is
> perfectly secure.

Yes, this is the most important point.  When a person in authority
(e.g. your bank manager) tells you that their security is real security
(even though it's only security by obscurity, or even though it's
fundamentally flawed in other non-obvious ways), then you the customer
can be bilked for all you own and the very system that's supposed to
protect you will in the end protect the perpetrator more.

When people get on airplanes with the assumption that all their fellow
passengers are disarmed and harmless (and after all they went through
the very same metal detectors and were inspected by the very same
security officers at the airport) they clearly are not always able to
deal with what happens when their assumptions turn out to be false,
unless of course they have been trained (or otherwise learned on their
own) to think for themselves and to think outside the box.

If you had ever ridden with me on a commercial airline you'd undoubtedly
have heard me make remarks about just how false the sense of security
everyone was under in many of the circumstances we passed through, and
how easy it would be to subvert any of the actual security there was.
The only places I've flown where I saw real security were places like
South Korea, Dubai, and a few places in Europe.  And that's just my
observations from the public side of the counter and the window seat --
who knows yet what gross insecurities will be revealed behind the
scenes.

I don't know if I'd have been brave enough to confront a terrorist with
just an X-Acto knife, etc., or not, but I hope I would have been, at
least if I could have caught him by surprise from behind....

> Still, it seems like a lot of banks actually can afford to lean on "security
> by obscurity".

That's because they're in a position of authority and their customers do
not question their declarations (often, of course, because they do not
have the expertise to do so, especially in technology-related matters).

-- 
                                                        Greg A. Woods

+1 416 218-0098      VE3TCP      <address@hidden>     <address@hidden>
Planix, Inc. <address@hidden>;   Secrets of the Weird <address@hidden>


