
Re: [Axiom-developer] Desired functionality from noweb


From: Ralf Hemmecke
Subject: Re: [Axiom-developer] Desired functionality from noweb
Date: Mon, 07 May 2007 10:44:38 +0200
User-agent: Thunderbird 2.0.0.0 (X11/20070326)

-n      does not produce a latex wrapper (ALLPROSE provides a
wrapper.)

OK.  That should be doable by locating the \begin{document} tag,
although we lose any special \usepackage commands if we do...

In ALLPROSE I follow the idea that there are projects. A project is roughly something that you put together into a library, and it is considered as a whole (although it might consist of several files). A library should cover some area of mathematics. That is perhaps what Tim had in mind when he wanted to put everything into just one pamphlet; I simply prefer an approach that allows several files.
Each project allows a .sty file where you can put any additional packages.
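
For illustration, such a per-project .sty file could look roughly like the sketch below; the file name and the extra packages are only examples, not taken from any existing project.

    % myproject.sty -- hypothetical per-project additions (names made up)
    \NeedsTeXFormat{LaTeX2e}
    \ProvidesPackage{myproject}[2007/05/07 extra packages for this project]
    \RequirePackage{sistyle}   % e.g. for a "units" project, see below
    \RequirePackage{amsmath}
    % project-specific macros could follow here
    \endinput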

Sure, there might be problems with incompatible LaTeX packages when one wants to produce a document that contains every such project. The combine LaTeX package does not really do what you might think: after reading the first document it redefines \usepackage to do nothing, so one cannot load packages later.

I rather think that it would still be possible to produce crosslinks across .dvi file boundaries (although I haven't yet produced any proof of concept). That would be enough for me. If there is a need to produce a 5000-page book, that should certainly be doable somehow, but I will not invest much time into that idea.

-index  produces index information of identifiers

OK.

Although Norman Ramsey deprecates it, I use the construction

<<chunkname>>=
...
@ %def Identifier1 Identifier2 ...

a lot to get hyperlinks inside code chunks. (For .as files, this %def
information is autogenerated by ALLPROSE scripts.)
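
As a concrete, made-up illustration (chunk and identifier names invented): a Lisp chunk with its %def line could look like this, and with noweave -index every later use of my-gcd in a code chunk becomes a hyperlink back to this definition.

    <<gcd helper>>=
    (defun my-gcd (a b)
      (if (zerop b)
          a
          (my-gcd b (mod a b))))
    @ %def my-gcd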

In the case of Lisp, I'm not quite sure how to do this in general. MOST of that information can probably be generated, but Lisp macros
would cause some difficulties in that department.

It is not easy in Aldor either. If one wants to generate hyperlinks without compiling the code, one has to follow some simple conventions so that a simple parser can find the relevant parts that should be turned into hyperlinks.

My way to use noweave is:

   1. concatenate all .nw files
   2. run noweave -n -index on the concatenated file
   3. split the output into .tex files corresponding to the
      original .nw files
   4. Use a wrapper (which looks approximately like)
      \documentclass{article}
      \usepackage{allprose}
      \begin{document}
      \inputAllTexFiles
      \end{document}
   5. Run latex/pdflatex/htlatex on that wrapper.

OK.  So the .nw files as you originally write them contain no header
information at all.  I think this gets back to the original discussion
on how to handle different pamphlets needing different style files
(sistyle for units, for example).  I'll need to ponder this one some
more and see if I can find a package or two designed to handle such
situations...

But the "units" stuff would form a project in the above mentioned sense. So you are free to add any package you like.

You find the full story on the website

http://www.hemmecke.de/aldor/allprose/myalpssu62.html#noweb.NWGFlOZ-RntG3-1

Out of curiosity, Ralf, have you ever benchmarked ALLPROSE?  How long
do you think it would take to process something really large?

Why should I care? The only thing one has to translate is *one* project, which should have a reasonable size. If it works as I think, then there would be some kind of .aux files (of other projects) with all the information in them to produce hyperlinks into earlier projects in the hierarchy. If it takes one day to produce a 5000-page book, why would that be a problem? How often do you think such a compilation is started?
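
That is not how ALLPROSE does it (there is no proof of concept yet), but as a sketch of the kind of mechanism meant here: plain LaTeX can already read another document's .aux file via the xr-hyper package, so that references into an earlier project become clickable. Project names and labels below are invented, and the earlier project would itself have to be built with hyperref.

    % wrapper of a later project, referring into an earlier "algebra"
    % project; all names are made up for this sketch
    \documentclass{article}
    \usepackage{xr-hyper}            % check the docs for the load order
    \usepackage{hyperref}
    \externaldocument[alg:]{algebra} % reads algebra.aux, prefixes labels
    \begin{document}
    See Lemma~\ref{alg:lem:gcd} in the algebra project.
    \end{document}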

You probably find ALLPROSE slow (I haven't really invested much time in optimizing it anyway), but for me functionality was/is more important than speed. I want to develop code+documentation and don't need to produce a .dvi file every minute. Every 15 min should do. You know yourself when you have modified big chunks of text and it would be wise to recompile. It is a bad idea to re-latex in the middle of writing, since there is a good chance of getting errors because of an unfinished environment.

Note that I don't think that everything should go into one big
pamphlet file. I rather like to edit several files which finally
produce ONE document.

I tend to think of it as one pamphlet = 1 "concept", and then pamphlets
would be bundled like conference proceedings to make larger volumes. It's the combining that makes it interesting.

But why do you think a "proceedings" form is the first thing we should think of? If everything is put into HTML form on a website, what disadvantage would that have?

I also think the little arrows at the top of each code chunk are nice,
in particular if a code chunk continues at some other place. You then
see a

   +\equiv

at the top of the code chunk and can click through the code chunks
that belong together.

In essence, links that move the reader through the document in the
order in which the machine would see the code? That's not a bad idea. Hmm...

But that is only for chunks that have the same name. They are combined in the order in which they appear in a .pamphlet. The rest appears as a hierarchy. You can click on a chunk name, which leads you to the first part of its definition.

And if you look at the index you will find red and blue entries. I
have added a TeX command to modify noweb.sty so that definitions
are shown in red in the index. (The style is of course adjustable.)
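
(As a generic sketch of that kind of customization only, with invented macro names rather than the real noweb.sty hooks: if all definition entries of the identifier index pass through one macro, recoloring them is a one-line redefinition.)

    % generic sketch, not the actual noweb.sty modification
    \usepackage{color}
    \newcommand{\IndexDefEntry}[1]{\textcolor{red}{#1}}   % definitions
    \newcommand{\IndexUseEntry}[1]{\textcolor{blue}{#1}}  % uses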

I'm still a bit unsure of the viability of the language-aware part of
the noweave process when it comes to Lisp, and done right it will
greatly increase the challenge of programming all of this.  I think the
right approach here will be to start small and scale up.

I don't use language awareness in ALLPROSE. Aldor is not built into noweb anyway, and I did not know how to write an appropriate noweb filter to add language support for Aldor, so my filters run before noweave sees the file. Anyway, if we start putting several different languages into one pamphlet, it will be difficult to guess the language if there is no explicit tag.


What I don't like at all about noweb is that one gets a \par after the
end of a code chunk if the @ is not followed by a space and %. In
noweb the text can be continued immediately after the ending @\space,
but to me that looks terrible. I want to see in the LaTeX source
where a code chunk ends. A single @ doesn't catch the eye so quickly.

I always assumed that the working literate programming style wouldn't
have code actually inline with text - are you saying you DO want to use
that style and don't want the \par command?

Well, let's not discuss it here. That is not a big issue. It simply means that a new paragraph always starts after a code chunk unless you end the code chunk via

@ %
New text starts here and is not indented

or

@ New text starts here and is not indented

Just as a point of possible interest, there does exist this system:
http://albert.sourceforge.net/ which may have some features worth
studying when it comes to dealing with Lisp code.  Unfortunately it's
GPL, so we probably wouldn't want to use it directly.  (I don't think
we could anyway, but I believe it has some "who-calls" scanning
abilities that would at least be a useful starting point.)

I don't care much about LISP. I want to be language independent, and perhaps add some features to support programming in Aldor. LISP is an assembly language to me.

Perhaps we could approach it in this fashion - have the scripts needed
to generate your advanced output be the default, and if testing for the
needed machinery fails, fall back onto the simpler Lisp+vanilla LaTeX
solution.  Over time, we could migrate features into the Lisp solution
until we can reproduce everything we need.

Note that TeX can already do a lot with respect to hyperlinks. One just has to put appropriate tags into the source. Look at what I have done in your cl-web pamphlet.

Very important for me is that the pamphlet -> LaTeX routine respects line numbers, as noweave does. Every documentation chunk should be wrapped in something like \begin{docchunk} ... \end{docchunk} and every code chunk in \begin{codechunk}{name} ... \end{codechunk} (or something similar). If LaTeX sees that and has an appropriate .sty file (like noweb.sty), then pretty much everything can be done by TeX itself. Appropriate tags are important.
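
As a rough sketch of what such tagging could look like (placeholder definitions only, not actual ALLPROSE or noweb.sty code; a real style file would also handle verbatim text, line numbers and hyperlinks):

    % placeholder environment definitions
    \newenvironment{docchunk}{\par}{\par}
    \newenvironment{codechunk}[1]%
      {\par\noindent$\langle$\textit{#1}$\rangle\equiv$%
       \begin{quote}\ttfamily\obeylines}%
      {\end{quote}}

    % example of generated output from a pamphlet
    \begin{docchunk}
    Compute the greatest common divisor.
    \end{docchunk}
    \begin{codechunk}{gcd helper}
    (defun my-gcd (a b)
      (if (zerop b) a (my-gcd b (mod a b))))
    \end{codechunk}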

Ralf




