
Re: [Gnu-arch-users] GNU Arch review - am I accurate?

From: Andrew Suffield
Subject: Re: [Gnu-arch-users] GNU Arch review - am I accurate?
Date: Sun, 7 Mar 2004 22:24:52 +0000
User-agent: Mutt/

On Sun, Mar 07, 2004 at 09:15:40PM +0000, David A. Wheeler wrote:
> Andrew Suffield said:
> > > The mirroring capability is clever, but if you download a mirror and
> > > make a change, you can't commit the change and the tool isn't smart
> > > enough to help.
> > 
> > I can't see that there's anything it could or should do. tag, then
> > commit - it's not complicated. Using undo here is a daft idea.
> But if you've acquired stuff from a mirror, as long as the mirror
> is up-to-date there's no reason that commit should cause a problem.
> Tla should be able to find the 'real' archive and commit instead.

Oh, you're talking about when you've used --mirror-from for a local
cache? Yes, that sucks. I never do that. I've never found a good
reason for not committing on the box where the master of each archive
is hosted (or at least on the same local network).

> > > Arch will sometimes allow dangerous or problematic operations that
> > > just shouldn't be allowed. For example, branches should be either
> > > commit-based branches (all revisions after base-0 are created by
> > > commit) or tag-based branches (all revisions are created by tag);
> > > merging commands will not work otherwise, yet the tool doesn't
> > > enforce this limitation.
> > 
> > I've seen this one floating around as hearsay for a while now. I don't
> > believe any such limitation exists. Seems to work for me. ...
> But the documentation says:
> "Usage Caution: As a rule of thumb, your branches should be either
> commit-based branches (all revisions after base-0  are created by commit )
> or tag-based branches (all revisions are created by tag ).
> Commands such as replay , update , and star-merge  are based on the
> presumption that you stick to that rule. While it can be tempting,
> in obscure circumstances, to mix commit  and tag  on a single branch --
> it isn't generally recommended."

Well that wasn't helpful. Tom, what were you thinking about when you
wrote this?

> > > The recommended GNU arch setup for a central repository has all
> > > users sharing a single account
> > 
> > Who's been recommending that? The recommended setup is to not use
> > "central repositories". Use properly distributed archives. If you
> > think you want a "repository", we've heard it all before, you don't.
> It's all in the "how to" section for centralized development:

Meh, wikis are about as accurate as an infinite number of monkeys.

> > You focus on problems caused by this scenario a fair bit - but it's
> > not a scenario you should be going anywhere near. You do not want to
> > do things that way.
> Sometimes I do, sometimes I don't.  But MOST development projects
> _DO_ use a centralized repository, and there ARE good reasons for it.
> If nothing else, many people DO want them.

Heard that before. Hasn't been true yet.

> > > The signatures sign the revision number as well as the change itself
> > > (they're both encoded in the signed tarball), so an attacker can't
> > > just change the patch order and can't silently remove a patch and
> > > renumber the later patches without detection. However, it appears to
> > > me that such signatures (at least as currently implemented) cannot
> > > detect the malicious substitution of whole signed patches (such as
> > > the silent replacement of a previous security fix with a non-fix),
> > > or removal of the "latest" fix before anyone else uses it.
> > 
> > This problem is not specific to arch. It's a fundamental limitation of
> > cryptographic signatures. There is no way that you can ever tell
> > whether you are looking at the latest copy of the tree, or whether
> > you're looking at a snapshot that a hostile interloper took yesterday
> > and has substituted for the new one. I don't believe it is even
> > theoretically possible to solve this problem in any system that is
> > based on signatures.
> Actually, this one is relatively easy to handle.  Here's one
> approach that I think would work.  Basically, you
> have a separate signature for the "chain".  Each entry has, as well as
> a hash, a "cumulative hash" - a hash of (this-hash + 
> previous-cumulative-hash).
> Now, sign both hashes.

Here, have yesterday's hash, from before the security fix went
in. Verify it. It's valid, so the archive is "safe".


No amount of signatures can prove that you're looking at the most
recent version.
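David's cumulative-hash proposal, and the rollback attack described above, can both be sketched in a few lines. This is a minimal illustration using SHA-1 via Python's hashlib; the function names are hypothetical, not anything tla provides:

```python
import hashlib

def link(prev_cum: bytes, payload: bytes) -> bytes:
    """Cumulative hash: hash of (this-hash + previous-cumulative-hash)."""
    this_hash = hashlib.sha1(payload).digest()
    return hashlib.sha1(this_hash + prev_cum).digest()

def chain(revisions) -> bytes:
    """Fold the cumulative hash over a list of revision payloads."""
    cum = b""
    for payload in revisions:
        cum = link(cum, payload)
    return cum

revisions = [b"patch-1", b"patch-2", b"security-fix"]
head = chain(revisions)

# Reordering (or silently replacing) a patch changes the head hash,
# so a signature over the head does detect tampering within the chain:
assert chain([b"patch-2", b"patch-1", b"security-fix"]) != head

# But the rollback attack survives: serve yesterday's archive (before
# the security fix) together with yesterday's signed head, and every
# check passes.
stale_head = chain(revisions[:-1])
assert chain([b"patch-1", b"patch-2"]) == stale_head
```

The chain detects reordering and substitution within the history it covers, but nothing in the signed data can prove you are looking at the freshest head rather than a stale one.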

> > You've picked up the random noise about MD5 being weaker than SHA-1.
> There's already been an old published paper showing very significant
> weaknesses in MD5 -- not enough to break it completely, but enough
> to be worried.

That's a myth. The paper it is derived from is indeed very old (1996),
and demonstrated an attack against one component of MD5 (the
compression function), which the author could not generalise to MD5 in
full. In the intervening years, nobody else has been able to do so
either. Also, the collision only works when hashing certain kinds of
data (the paper used 64 bytes), with a given seed value, and appears
computationally infeasible for large sequences. That is not a
"significant weakness in MD5".

MD5 has withstood cryptanalysis *with this partial weakness known* for
nearly as long as SHA-1 has existed (1995). That says something fairly
significant about how strong MD5 still is.

People have been making doomsday predictions about MD5 ever since
then, and none of them have come to pass.
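For reference, both digests are a one-liner in Python's hashlib; the dispute here is about MD5's collision resistance, not its digest size (128 bits, versus SHA-1's 160):

```python
import hashlib

msg = b"patch-log entry"

# MD5 yields a 128-bit digest, SHA-1 a 160-bit digest.
assert len(hashlib.md5(msg).hexdigest()) == 32   # 32 hex chars = 128 bits
assert len(hashlib.sha1(msg).hexdigest()) == 40  # 40 hex chars = 160 bits
```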

> Since then, there have been persistent rumors
> inside the crypto community that the algorithm has been broken, and
> those rumors are hard to ignore because MD5 was hanging by a thread anyway.

Yes, that's the classic "terrorist" attack against cryptographic
algorithms. Spread rumours that they're broken until people are too
scared to use them, and you've effectively defeated them. Then you can
get them to move to another algorithm which you have already broken.

Trusting rumours is really dumb. Trusting the NSA (who created SHA,
and have concealed methods for attacking cryptographic algorithms in
the past) is not very bright, either.

The existence of these rumours for so long, without any real evidence
coming to light, is a strong indication that they are baseless.

> > Nothing to do with arch, but Haskell isn't slow. It's one of the
> > fastest compiled languages around; the ghc optimiser is probably the
> > single most effective optimiser out of *any* language (Haskell is
> > particularly easy to optimise well).
> Color me skeptical.  Haskell uses lots of optimizations, because truly
> functional programming languages require lots of optimizations to be
> even marginally useful.  I'd like to see the evidence on real-scale
> programs (ones that include I/O, etc.).  Actually, I'd be delighted if
> that's the case, can you point me to some?  It's been a while since I've
> looked at FP, and I'm certainly willing to believe things have
> changed since then. It references a few papers from 1995
> (unfortunately the links are broken at present).

I'm not sure where you got "truly functional programming languages
require lots of optimizations to be even marginally useful" from. Pure
functional programming languages impose very little overhead. There
aren't many around, though.

You're unlikely to find many programs that do IO on any kind of
large scale in Haskell because, while the IO code compiles
efficiently, it is not convenient to write.

  .''`.  ** Debian GNU/Linux ** | Andrew Suffield
 : :' : |
 `. `'                          |
   `-             -><-          |

Attachment: signature.asc
Description: Digital signature
