Re: [gpsd-dev] 24 hours of hell

From: Eric S. Raymond
Subject: Re: [gpsd-dev] 24 hours of hell
Date: Thu, 31 Oct 2013 11:13:40 -0400
User-agent: Mutt/1.5.21 (2010-09-15)

Greg Troxel <address@hidden>:
> It may be time to think about using branches for complicated or
> disruptive changes.  With direct commits to master, there is no way for
> others to review them and comment before they are on master.

I do sometimes use branches locally, mainly to group together changes that
should be exposed to others as a unit rather than exposing a work
in progress.

But they wouldn't have helped in this instance, and I'm not a big fan
of them in general. Having just one public line of development
concentrates the mind.  It also acts as a kind of discipline, a
worthwhile pressure for each commit to make sense not just relative to
others in its hypothetical branch (if development were structured that
way) but in terms of every change that is going on.

My strategy is also influenced by having a test suite that nails down a
lot of invariants.  My development cycle is 1) experiment 2) verify with tests
3) collect your win by publishing; repeat.  What I'm aiming to do is
substitute verification by test for branching. They're both strategies 
for mitigating the same category of risk, and thus in an important sense

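The experiment/verify/publish cycle on a single public branch can be sketched as a small shell guard. This is a hypothetical illustration, not anything from the gpsd tree; the function name and messages are made up, and the commented-out git commands stand in for whatever "publishing" means in a given project:

```shell
#!/bin/sh
# Hypothetical sketch of the cycle described above: a change only
# reaches the one public line of development after the regression
# suite passes.  Names here are illustrative, not from gpsd.

publish_if_green() {
    # $@ is the command that runs the test suite.
    if "$@"; then
        echo "tests passed: publishing"
        # git commit -a && git push origin master   # the "collect your win" step
    else
        echo "tests failed: change stays local" >&2
        return 1
    fi
}
```

Invoked as, say, `publish_if_green make check` (substituting the project's real test entry point), the publish step simply never runs when verification fails, which is the discipline the paragraph describes.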
To a significant extent I'm even trying to substitute automated
testing for *human review*, in order to reduce the amount of skilled
attention others have to expend on the project.  Because that's the
scarcest resource.

I've written about these ideas in "Risk, Verification, and the
INTERCAL Reconstruction Massacree" at

Of course, this strategy can fail where your tests don't reach, and
it can fail very badly if for some reason your tests are broken where
you *think* they have coverage - that's what happened this time.  But every
strategy I could apply has failure modes.  Perfection is not one of
the options.  You have to choose a way to run things that globally 
minimizes your risk relative to competing strategies.

On that metric I still think being test-centered is the right choice.
After all, how often do we have bad patches like the last couple of days?
Years, literally, go by without one.
		Eric S. Raymond
