
From: Tom Lord
Subject: Re: [Gnu-arch-users] Re: Working out a branching scheme [was: tag --seal --fix]
Date: Sat, 3 Apr 2004 07:59:17 -0800 (PST)

    > From: Stefan Monnier <address@hidden>

    >> A total history containing 11K revisions is no sweat for arch.  A
    >> total history containing 100K revisions is no sweat.  Putting all
    >> those revisions in a single arch version?  You're moving into the area
    >> of using arch poorly.

    > Is there a fundamental reason why this is poor use?

There are fundamental reasons why it's poor use if all those changes
take place over a short-enough time period (say, a year or 16 months)
and if the tree is "one big thing" rather than something that could
reasonably be broken up into sub-categories.  Basically you have some
tree that's changing at that rate and, at that rate, nobody working on
or with the tree can really keep up with the changes.   It's the "dog
pile on the mainline" technique.   The fundamental problems here are
from the user perspective -- you're not using the tool to do anything
useful.
There's a user-fundamental reason why it's a poor idea if the project
can reasonably be split up into sub-categories or if all these
revisions take place over many years.   Do you really want "tla
revisions -s" (for example) to list 11k commits?   Is that a useful
common case?   Splitting it up makes better use of the namespace.
Finally, there's a tool-design fundamental reason why you don't want
things that big.  Sure, 100k same-version-commits can be done, but
only at the cost of a big leap in implementation complexity.   If
there's no compelling use-case reason to want this -- why pay the
costs associated with writing, maintaining, and administrating a more
complex tool, especially when such a simple alternative is achievable?

    > It seems like at least commit should not care about how many revs there
    > are since it should only care about the last few ones.

There's been much confusion on the list about that issue.   `commit'
basically _doesn't_ care how many revs there are.   It trivially does
because of a readdir that has since been patched away, but besides
that it doesn't.   Not directly, anyway.   Of course, keeping 100K
_patch_log_entries_ is expensive no matter how you do it.
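A toy sketch of why that cost shows up (illustrative Python only, not
arch's actual on-disk layout or code): finding the newest revision by
listing every entry, as a readdir-based approach must, is O(n) in the
number of revisions, while tracking the tail of history directly is O(1):

```python
# Toy model of an arch version's revision names: patch-1 .. patch-N.
# The names and layout here are illustrative assumptions.
revisions = [f"patch-{i}" for i in range(1, 100_001)]

def latest_by_scan(names):
    # Inspects every name -- like a readdir over a 100K-entry directory.
    return max(names, key=lambda n: int(n.split("-", 1)[1]))

def latest_by_counter(count):
    # Commit only needs the tail of history: O(1) if the count is tracked.
    return f"patch-{count}"

print(latest_by_scan(revisions))   # patch-100000
print(latest_by_counter(100_000))  # patch-100000
```

Both answers agree; the difference is that the scan touches all 100K
entries to get there.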

    >> 11k revisions in a year comes out to a bit more than one per hour, 24
    >> hours a day, 365.25 days.

    > What's up with years?  I don't think of my development in terms of years
    > at all.  It seems completely arbitrary, just designed to work around
    > tla's limitations.

It's not years that are interesting, it's rates: commits/time and
archive-cycles/time.   If the rate of same-version-commits is too
high, you have to wonder whether anything useful is being accomplished
by using revision control.   If the archive-cycle rate is low, then
who cares about it?

Arch scales easily to infinite linear revisions if the archive-cycle
rate is high enough relative to the same-version-commit rate.   The
scalability problem is therefore to figure out how far you can push
the achievable same-version-commit rate, and how far you can lower the
archive-cycle rate, without hitting performance problems.
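The rate arithmetic is easy to check; this sketch (the quarterly-cycling
example is my own illustration, not a figure from the thread) shows how
archive cycling caps any single version's revision count regardless of
total history length:

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8766

def commits_per_hour(total_commits, years):
    """Average same-version-commit rate."""
    return total_commits / (years * HOURS_PER_YEAR)

# 11K revisions in a year really is "a bit more than one per hour".
print(f"{commits_per_hour(11_000, 1):.2f}")  # 1.25

# With periodic archive cycling, any single version holds only the
# commits from one cycle, however long the total history gets.
def revisions_per_version(commit_rate_per_hour, cycle_years):
    return commit_rate_per_hour * cycle_years * HOURS_PER_YEAR

# ~1.25 commits/hour with quarterly cycling caps a version near 2.7K revs.
print(round(revisions_per_version(1.25, 0.25)))  # 2739
```

The same two rates drive the trade-off for any fixed cycling period:
shorter cycles mean smaller versions but more frequent tagging across
archives.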

The problem is _not_ unique to arch.  CVS has _exactly_ the same
problem -- but the numbers come out differently.

For that matter, SVN has the same problem too -- only they made
matters worse.   In the name of "scaling", they wound up using BDB in
the back-end, at least for now.    So they have a third rate that
relates to the commit rate (the archive-wide commit rate, actually):
the log-growth rate.

    >> No development process that is doing that in a single arch version is
    >> doing anything useful.  (A process doing that across a few (coalescing)
    >> branches, on the other hand, is entirely realistic.)

    > No, it might very well be all linear progress with no big milestone of
    > any kind.  The eXtreme Programming style for example recommends an
    > evolutionary approach to software and some people (myself included) tend
    > to work in such a way that changes are brought in progressively such that
    > each step can be checked independently by third parties.

I think an XP debate is straying off-topic but, if you're using XP:

Small commit rate but huge total number of revisions?   Ok, we're
talking about a very long period of time.   Tossing in some archive
cycles along those years is hardly going to hurt you.

High commit rate and a huge total number of revisions?  You're
evidently not following XP practices wrt. testing your mainline, and
it's highly unlikely (at least if you're using CVS) that you're
maintaining an accurate feedback cycle between the mainline and the
baseline the programmers are working against.

