
RE: Request for comments: CONS specification


From: Søren Mou Jakobsen
Subject: RE: Request for comments: CONS specification
Date: Sun, 30 May 2004 21:22:36 +0200

Hi all

 

I have used cons in various jobs over the last four years. Developing a new, extensible version of cons is a good idea. In fact, this has been proposed several times on this mailing list, but no project has taken off yet. I'm willing to pitch in to get things moving.

 

I have recently implemented an alternative to cons called qs (named so because it's quick to type). Qs was developed to counter some of the deficiencies discussed recently, and we're migrating to it at my current workplace. We might be able to use some of the ideas, and possibly code, behind qs to accelerate things. Currently qs is strictly a core build engine. It has a very simple interface for building up a dependency graph and then bringing a number of targets up to date. It doesn't know anything about file types, how to build certain things (like TeX files), etc. This is something that could be built on top of qs.

 

From an architectural point of view, I believe it makes sense to think of the new CONS as at least two layers: a core build engine, which should be as simple and optimized as possible, and modules on top of the build engine which contain the "expert" systems that know how to build different targets. The expert modules then build up the dependency graph through a simple interface into the core build engine. What I'm proposing is to migrate qs into the new cons as the core build engine. This architectural division will also ensure that users can always bypass the upper layer to do whatever they want.
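As a rough illustration of the layering (a sketch only; the tex_document builder is my own invention and not part of qs, and it uses the Qs::edge interface described further down):

package Expert::TeX;
use strict;
use warnings;

# An "expert" module knows *how* to build a TeX document; the core
# build engine only ever sees plain nodes, edges and a command.
sub tex_document
{
  my ($pdf, $tex) = @_;
  Qs::edge FileNode($pdf), FileNode($tex), "pdflatex $tex";
}

1;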

 

The current version of qs is about 1100 lines of perl. It's well documented, object oriented and supports parallel builds (with a caveat, see below). The basic interface to qs is very simple. It has two functions: Qs::node, which defines a new node in the build graph, and Qs::edge, which adds one or more dependencies between nodes and a command to make the target nodes from the source nodes.
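As a sketch of how the two-function interface is used (argument order assumed from the compile example later in this mail: target first, then sources, then the command; the exact signatures may differ):

Qs::node FileNode('hello.o');
Qs::node FileNode('hello');

Qs::edge FileNode('hello.o'), FileNode('hello.c'), 'gcc -c hello.c -o hello.o';
Qs::edge FileNode('hello'),   FileNode('hello.o'), 'gcc hello.o -o hello';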

 

Qs supports plugging in different types of nodes. Each inherits from an abstract Node class. This means that the dependency graph can contain regular file nodes, files in cvs, perl references, rows in a database, etc. Edges are also plug-in objects which inherit from an abstract Edge class. Obviously the most used edge class is the Cmd edge, which executes a program in the shell. Also possible are edges which build by calling a reference to a perl sub, edges which execute sql queries or run heavy-duty processing on build servers, etc.
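For illustration, a hypothetical node class might look roughly like this (the base class name and the exists/signature methods are assumptions on my part, not the actual qs interface):

package Qs::Node::PerlRef;
use strict;
use warnings;
our @ISA = ('Qs::Node'); # assumed abstract base class

# Wraps a perl scalar reference as a node in the build graph.
sub new
{
  my ($class, $ref) = @_;
  return bless { ref => $ref }, $class;
}

# Illustrative methods an abstract Node might require.
sub exists    { defined ${ $_[0]->{ref} } }
sub signature { ${ $_[0]->{ref} } } # value used for up-to-date checks

1;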

 

Qs supports parallel building, but it doesn't work at the moment because it turned out that mutex protection of perl objects is currently unsupported. The documentation says object mutexes are a work in progress. There are probably also alternative ways of implementing this. The main point is that qs was designed with parallelization in mind.
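To give an idea of the design direction once the threading issues are sorted out, here is a minimal worker-pool sketch using standard perl threads and Thread::Queue (this is not qs code, just an illustration):

use strict;
use warnings;
use threads;
use Thread::Queue;

# Queue of build commands whose dependencies are already up to date.
my $ready = Thread::Queue->new();

# Worker threads pull commands off the queue and run them.
my @workers = map {
  threads->create(sub {
    while (defined(my $cmd = $ready->dequeue())) {
      system($cmd) == 0 or die "build command failed: $cmd\n";
    }
  });
} 1 .. 4;

# The scheduler would enqueue edges as their sources become up to date.
$ready->enqueue('gcc -c foo.c -o foo.o');
$ready->enqueue('gcc -c bar.c -o bar.o');

$ready->enqueue(undef) for @workers; # tell the workers to exit
$_->join() for @workers;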

 

In a previous mail the problem of not knowing the exact dependencies in a build tree was discussed in relation to building TeX documents. I've struggled with this particular problem myself. I work at a company which creates applications for interactive television. The build process is usually very complicated and can contain up to ten build steps from sources to final target. We generate a lot of C code and headers during the build process. Those files can't be dependency scanned when a project is built from scratch. Also, when a project is built a second time, the dependency scan is sometimes wrong because the previous versions of the C and header files are scanned. It was suggested in a mail to add the ability to modify the dependency graph during the build process. I believe this is a good solution. Qs supports this through a late-dependency-scan callback function. This example of a C file compilation illustrates how it works:

 

Qs::edge FileNode('foo'), FileNode('foo.c'), 'gcc …'; # foo is built from foo.c
Qs::scan FileNode('foo'), \&c_dependency_scan; # ask qs to call c_dependency_scan right before it is about to build foo

sub c_dependency_scan
{
  my $source = shift;
  my $target = shift;

  my @dependencies = … do actual dependency scan of source …

  # modify the dependency graph during the build process! The dependencies will be made up-to-date before foo is built
  Qs::edge $target, @dependencies;
}
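For completeness, the elided scan above could be something as simple as the following naive #include scan (illustrative only; a real scanner would recurse into headers and honour include paths):

sub naive_c_scan
{
  my ($file) = @_;
  my @deps;
  open my $fh, '<', $file or return ();
  while (my $line = <$fh>) {
    # Treat every locally included header as a dependency node.
    push @deps, FileNode($1) if $line =~ /^\s*#\s*include\s*"([^"]+)"/;
  }
  close $fh;
  return @deps;
}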

 

As a proof of concept for qs I also made some node and edge classes which provide backwards compatibility with cons. This means that good old Construct files were parsed and a dependency graph was built up using the qs interface. It might be interesting to complete this so that the new cons actually contains an export builder which gives backwards compatibility with old projects. This could ease the transition from old cons to new cons.

 

Another area I would like to improve is debugging of the dependency graph. This can be hard with cons if the dependency graph is very large and complex. One thing qs supports now is outputting the dependency graph as HTML. All sources and targets are hyperlinked so that the user can easily click through the dependencies. It would also be possible to use a perl graph-drawing module to get a visual presentation of (subsets of) the dependency graph.
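A stripped-down version of that HTML dump could look roughly like this (the Qs::nodes(), name() and dependencies() accessors are placeholders for whatever the real graph-walking interface ends up being):

sub dump_graph_as_html
{
  my ($path) = @_;
  open my $out, '>', $path or die "cannot write $path: $!";
  print {$out} "<html><body>\n";
  for my $node (Qs::nodes()) {
    my $name = $node->name();
    print {$out} qq{<h3 id="$name">$name</h3>\n<ul>\n};
    # Each dependency becomes a hyperlink to its own section.
    for my $dep ($node->dependencies()) {
      my $d = $dep->name();
      print {$out} qq{<li><a href="#$d">$d</a></li>\n};
    }
    print {$out} "</ul>\n";
  }
  print {$out} "</body></html>\n";
  close $out;
}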

 

It was also suggested earlier to add native support for build clustering. That would be a very cool feature.

 

Kind regards,

Søren Mou Jakobsen