
Re: Surely 'bzr update' shouldn't be this slow?

From: grischka
Subject: Re: Surely 'bzr update' shouldn't be this slow?
Date: Fri, 08 Jan 2010 00:29:09 +0100
User-agent: Thunderbird (Windows/20090812)

>> It IS copying, conceptually

> Alan, don't be contrary. Óscar is just telling you the facts, and AIUI *he* would not have chosen bzr if it were up to him. He's just trying to help you and others make the best use of the official VCS.

But Óscar wasn't telling any facts, aside from saying that the issue is
"complex" and "non-trivial".

> No, it has to do a lot more than that.

Has to?  Why?

> What it is doing is conceptually most like garbage collection.  The
> actual content of a repository is stored as a set of compressed
> archives of file objects, revision objects, and patches, plus some
> indices.  (These archives are called "packs.")  The "processing" goes
> through those objects, throws out the ones not needed for the branch
> you request, and reconstructs minimal packs and indices for them.

So it's slow because it does lots of slow stuff.  Fine.
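The repacking described above is easy to picture. A minimal sketch, using a hypothetical data model (a "pack" as a dict of object id to payload, revisions listing their parents and file objects — not bzrlib's actual API): walk the revision graph from the requested branch heads, keep what is reachable, drop the rest.

```python
def reachable(revisions, heads):
    """Walk the revision graph from the requested branch heads and
    collect every object id needed to reconstruct that branch."""
    needed, todo = set(), list(heads)
    while todo:
        rev_id = todo.pop()
        if rev_id in needed:
            continue
        needed.add(rev_id)
        rev = revisions[rev_id]
        needed.update(rev["files"])   # file/patch objects this revision uses
        todo.extend(rev["parents"])   # keep walking the ancestry
    return needed

def repack(pack, revisions, heads):
    """Build a minimal pack with only the objects reachable from the
    given heads -- everything else is 'garbage' for this branch."""
    keep = reachable(revisions, heads)
    return {obj_id: data for obj_id, data in pack.items() if obj_id in keep}
```

The walk itself is linear in the number of objects; the expensive parts are elsewhere, as argued below.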

Obviously compression is slow, but who says it needs to decompress and
recompress the objects just to build a new index, or to build a new pack
that contains some but not all of the objects?
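The point is that if each object is stored as an independently compressed blob, a smaller pack can be assembled by copying the raw compressed bytes verbatim and writing a fresh index — no decompress/recompress cycle at all. A sketch under that assumption (hypothetical pack layout; real bzr packs differ):

```python
import zlib  # only used by callers to compress/decompress payloads

def write_pack(objects):
    """Concatenate already-compressed blobs; return (pack_bytes, index),
    where index maps object id -> (offset, size) within the pack."""
    data, index, offset = b"", {}, 0
    for obj_id, blob in objects.items():
        index[obj_id] = (offset, len(blob))
        data += blob
        offset += len(blob)
    return data, index

def extract_subset(pack, index, wanted):
    """Build a smaller pack by byte-copying the wanted objects out of
    the old pack -- the compressed payloads are never touched."""
    subset = {obj_id: pack[off:off + size]
              for obj_id, (off, size) in index.items() if obj_id in wanted}
    return write_pack(subset)
```

Only the index has to be rebuilt; the per-object CPU cost of repacking drops to a memcpy.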

Also with http: you claim it's slow because it needs to download entire
packs.  But HTTP supports range requests, i.e. seeking to byte offsets
within a file.  Alternatively, you could store the top of the history
near the beginning of the pack, so that the download can stop as soon as
the missing updates have arrived.  Or you could keep packs on the server
only for commits older than a year and store anything more recent as
single files.  It's not as if there were no alternatives.
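For the range-request alternative, the mechanism really is this simple: given a pack index, the client asks for just the byte span holding the objects it needs (HTTP/1.1 `Range: bytes=start-end`, end inclusive, answered with 206 Partial Content). A sketch with a hypothetical index and an injected fetch function, so it stays testable without a network:

```python
def range_header(offset, size):
    """Range header for `size` bytes starting at `offset`.
    HTTP byte ranges are inclusive at both ends."""
    return {"Range": "bytes=%d-%d" % (offset, offset + size - 1)}

def fetch_object(url, index, obj_id, opener):
    """Fetch one object's bytes via a ranged GET.  `opener(url, headers)`
    is injected (e.g. a urllib-based function in real use)."""
    offset, size = index[obj_id]
    return opener(url, range_header(offset, size))
```

With this, "download the whole pack" becomes "download exactly the objects the index says you are missing."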

--- grischka
