From: C Y
Subject: Re: [Axiom-developer] Re: Pamphlets and LaTex
Date: Tue, 17 Jul 2007 20:49:12 -0700 (PDT)

--- Stephen Wilson <address@hidden> wrote:

> What if \chunk{foo} resulted in referencing the Foo domain defined in
> another file?

I personally didn't consider it desirable to reference chunks outside
of a particular file - it makes the document that much harder to
understand.  If the idea is to tangle something, I don't really like
the idea of tangling a source file out of multiple pamphlets.  In my
opinion that level of complexity is always going to cause more
difficulties than any advantages it might give.  I would be interested
in use cases that demonstrate the benefits of such an approach - just
because I can't think of any certainly doesn't mean there aren't any.

> It's much harder to transform that state machine into one which can
> deal with the reorganization of chunk structure.  For example, noweb
> markup like:
> 
>     @
>       Introduction.
>     <<chunk1>>=
>         (code here)
>     <<chunk2>>=
>         (more code)
>     @
>        Some documentation
>     <<chunk3>>=
>          (etc...)
>      EOF

Erm.  Do we want to support a style like that?  My first thought upon
seeing that would be to think "badly formatted, hard to understand."
 
> The state machine hard wires code blocks:
> 
>       <<chunk>>=
>           (code)
>       @

Yes.  Personally I consider this a Good Thing - the visual signature of
a chunk is always consistent and unique.  This makes for easier
pamphlet editing, in my opinion - the above example would have me
wondering whether I was in chunk1 or chunk2.
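
(To make the point concrete: scanning the hard-wired style really is
trivial.  A rough Common Lisp sketch - every name in it is invented,
and a real version would want error handling:)

    (defun scan-pamphlet (stream)
      "Split STREAM into (:doc nil text) and (:code name text)
    segments, assuming the hard-wired chunk style.  Sketch only."
      (let ((segments '()) (state :doc) (name nil) (lines '()))
        (flet ((flush ()
                 (when lines
                   (push (list state name
                               (format nil "~{~a~%~}" (nreverse lines)))
                         segments))
                 (setf lines '())))
          (loop for line = (read-line stream nil)
                while line do
                  (let ((end (search ">>=" line)))
                    (cond ((and (eq state :doc) end
                                (>= (length line) 5)
                                (string= "<<" line :end2 2))
                           (flush)
                           (setf state :code name (subseq line 2 end)))
                          ((and (eq state :code) (string= line "@"))
                           (flush)
                           (setf state :doc name nil))
                          (t (push line lines)))))
          (flush)
          (nreverse segments))))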

> State machines are fine if you're modeling transistor networks.  Not
> when parsing dirt simple noweb-like markup, IMHO.

Probably the Best Way to handle this would be to use some sort of BNF
parser generator and define the pamphlet grammar - then let the parser
generator write us a parser.  That would of course lose us some speed,
but if we're opening up pamphlet syntax I would say the tradeoff would
be well worth it.  Making a literate BNF tool might also be useful down
the line for other applications in Axiom.
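
To make that concrete, the grammar itself would be tiny - informally
(rule names invented, and this is loose BNF rather than any particular
tool's input notation):

    pamphlet     ::= ( doc-line | code-chunk )*
    code-chunk   ::= chunk-header code-line* "@" newline
    chunk-header ::= "<<" chunk-name ">>=" newline
    chunk-name   ::= ( any character except ">" )+

Even if we open the pamphlet syntax up later, I'd expect the core of
the grammar to stay about that small.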

Of course, the speed request didn't originally come from me - Tim,
what would your take on this be?

> Yes.  I am aware of that.  Note that you need cooperation from the
> Lisp compiler.  Again, another non-latex component to the process.
> Another reason not to have pamphlet markup masquerade as latex.

Sure :-).  But if you don't like the LaTeX syntax, just weave it into
something else ;-).
 
> > 3.  Is the intent still to move to the cl-web code once we get to
> > ANSI, or is gclweb the direction of the future?  If the latter I
> > need to take a closer look at that codebase.
> 
> I have some issues with the code in cl-web, as noted above,
> unfortunately. (again, please don't take this personally!  These are
> just dry technical observations!)

Of course - not a problem :-).  My only concern is that documenting
cl-web was a non-trivial exercise (for me, at least :-/), and if we
are going to have to redo that work I want to make sure the direction
chosen is the final one.

> > 4.  Do I understand correctly that a design goal here is to add
> > commands to the pamphlet files that will not be tangled but instead
> > executed in an Axiom environment?  There are two ways to interpret
> > that:
> > 
> >    a.  Provide a special LaTeX style that is aware of
> > SPAD/interpreter language syntax and typesets it, but don't
> > execute the code unless the file is specifically processed with
> > that intent (tangle, weave, and run-spad?)
> 
> The correct solution is to have a pamphlet command which can either:
>    - execute the code and generate the appropriate latex as output, or
>    - replace the command with a simple latex expression without
>      doing any on-the-fly evaluation.
> 
> Either behavior, and perhaps others, can be specified to weave the
> document.

Steve, are you thinking of having chunks identify their language to
the scanner?  Otherwise I'm not sure how the scanner would distinguish
between input, lisp, spad, etc. in a chunk and know to evaluate only
the input expressions.
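
E.g. something like this?  (The [lang=...] tag is pure invention on
my part - nothing supports it today:)

    <<compute-basis>>=[lang=spad]
        (spad code here)
    @

    <<helper>>=[lang=lisp]
        (lisp code here)
    @

The scanner could default to the file extension when no tag is given,
and the weaver would then only hand [lang=input] chunks to the
interpreter for evaluation.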

> >    b.  Process the commands during the LaTeX process and
> > incorporate the results of Axiom's evaluation automatically into
> > the document.  This is a bit more like the EMaxima functionality.
> > This is useful in some situations but I am not sure we need to be
> > worrying about it at this stage.
> 
> This is not enough.  Axiom should be able to read pamphlets, and have
> them exist as live objects in the system.  Should we actually require
> that axiom write a file and call LaTeX to call itself in order to
> weave a document?

I'm not sure what you mean by "live object".  I was viewing it as
follows:

1. .lisp.pamphlet and .spad.pamphlet files are loaded just as
functionality is loaded now from these files (see the rough sketch
after this list).

2. Axiom IO sessions inside LaTeX documents are typeset in a way
similar to the way EMaxima handles things.  There could perhaps be an
"evaluate all axiom expressions in this latex file" command or some
such, but I'm not quite clear how this would make the document "live".
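
For (1), I wasn't imagining anything fancier than "extract the code
and load it".  Roughly, building on the invented scan-pamphlet sketch
above (so all the names are still hypothetical):

    (defun load-pamphlet (pathname)
      "Extract the code chunks from PATHNAME and LOAD them, so that
    a .lisp.pamphlet behaves like a plain .lisp file.  Sketch only."
      (let ((tmp (make-pathname :type "lisp" :defaults pathname)))
        (with-open-file (in pathname)
          (with-open-file (out tmp :direction :output
                                   :if-exists :supersede)
            (dolist (seg (scan-pamphlet in))
              (when (eq (first seg) :code)
                (write-string (third seg) out)))))
        (load tmp)))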

> The Crystal, where we can view an object through a multitude of
> facets.  We can see the API, the theorems and proofs, the category
> hierarchy, the dependents and dependees, etc, etc...

Perhaps how we view these things is relevant here.  For myself, most
of these things can be deduced, and documents describing them can be
generated "on the fly" with specific commands inside Axiom -
GenerateProof(a=b,COQ) or CategoryTree(PositiveInteger) or )show api
integrate or something like that.  I hadn't viewed the tangle/weave
part of the system as directly having anything to do with it.

> All these things
> have been discussed before, there are many other options.  I mention
> these in particular because they are all candidates for generating
> LaTeX output, but would ultimately be inferred from a set of
> pamphlets.  You can't just run LaTeX on a pamphlet and obtain that
> result, because pamphlets are not LaTeX.

Correct.  I would expect to do some interactive work to obtain those
documents, or at the very least run some special tools (not tangle or
weave as such - maybe (prove-all "file.input.pamphlet") or some other
command that builds on top of the basic scanner's abilities).

> What's wrong with associating a documentation chunk as the formal
> proof of a particular code chunk's correctness?

My current (half-formed) thought on that issue was to keep
pseudo-databases of the proofs themselves and fetch them as needed
(for generating an actual proof, or for insertion into some document).
I didn't plan on having every chunk and its correctness proof sit next
to each other in the documents, simply because of the likely size of
many proofs (check out Metamath).

> Or as the API doc which gets displayed when you pass the wrong type
> to factorGroebnerBasis?

I thought API docs were very specifically terse, syntax-focused
messages that would have to be written with the idea of appearing as
"error messages", rather than the more verbose research-paper style
explanations.  (I'm probably missing something again, sorry...)
 
> > Am I missing something here?  Help!
> 
> I hope that helps clarify my position a bit.  Let's keep discussing
> this, as I think it is hugely important for Axiom!

It sounds like the design decision here is how much processing
responsibility we want to put into the weave tools.  My
thought would be minimalist - strictly enforce the chunk style, and
prepare additional tools/logic for operations beyond basic code
extraction and LaTeX translation.  Which is not to say powerful tools
aren't good, but I think they should either be inside Axiom or built as
discrete functionality on top of the basic chunk style.

What about this - we could add an optional options container to the
chunk style - e.g.

<<chunkname>>=[options]

@

where things in the options container are simply stored in the chunk
structure during the scan.  Then, if the scan function is being called
by a command other than basic tangle or weave, the caller could look
for options pertaining to its particular function.
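
To sketch what the scan side of that might look like (all the Lisp
names here are invented - nothing implements the bracket syntax yet):

    (defun parse-chunk-options (header)
      "Return an alist of (KEY . VALUE) options from a chunk header
    line of the form <<name>>=[key1=val1,key2=val2].  Sketch only."
      (let* ((lbrak (search "=[" header))
             (rbrak (and lbrak (position #\] header :start lbrak)))
             (options '()))
        (when (and lbrak rbrak)
          (do ((start (+ lbrak 2)))
              ((>= start rbrak))
            (let* ((comma (or (position #\, header
                                        :start start :end rbrak)
                              rbrak))
                   (eqpos (position #\= header
                                    :start start :end comma)))
              (push (cons (subseq header start (or eqpos comma))
                          (and eqpos (subseq header (1+ eqpos) comma)))
                    options)
              (setf start (1+ comma)))))
        (nreverse options)))

So (parse-chunk-options "<<chunkname>>=[lang=spad,prove=yes]") would
return (("lang" . "spad") ("prove" . "yes")), and a command like the
hypothetical prove-all could look up just the keys it cares about and
ignore the rest.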

Cheers,
CY


 