bug-texinfo

Re: Broken link at http://www.gnu.org/software/texinfo/


From: Karl Berry
Subject: Re: Broken link at http://www.gnu.org/software/texinfo/
Date: Mon, 7 Jul 2008 19:58:27 -0500

    I'm not sure it would be a reasonable effort to make a 100% backward
    compatible reimplementation at first go.

Yes, I was having some of those same thoughts.

On the other hand, my experience with makeinfo is showing me that
without some consideration of backward compatibility as you go along, it
is too hard to just "tack on" later.  You end up with what makeinfo is
now -- a collection of warts.  Anyway.  This is the kind of thing that
becomes clearer in the actual development than in talking about it,
IMHO.

    a Texinfo parser that builds data trees that represent Texinfo
    documents, to play with for managing translations. Even if I (or we)

Sorry, I don't see how it's relevant to translations.  You're thinking
of some gettext-like thing that operates on strings in the document?

I was merely imagining a human-written (say) German source document; I
shouldn't have called it a "translation" specifically.  Support just for
that is woefully lacking.  Let alone (say) Chinese or Arabic, even
though all those languages and many more are reasonably well-supported
in TeX and in modern systems in general.

    a parser that can be used in these situations, which is far better than
    using Python regular expressions :-p

If you're talking about XML-ish type trees, there is always makeinfo
--xml, which is basically an XML representation of the Texinfo source.
But you probably already knew that.
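For what it's worth, once you do have such a tree, walking it is the
easy part.  Here is a toy Python sketch (standard library only) that
flattens an XML tree into (depth, tag, text) tuples -- a stand-in for
the data tree a Texinfo parser would build.  The element names in the
sample are illustrative, not makeinfo's actual output schema:

```python
import xml.etree.ElementTree as ET

def outline(xml_text):
    # Walk an XML tree (e.g. something like makeinfo --xml output)
    # and return (depth, tag, text) tuples.  The schema here is
    # made up for illustration.
    root = ET.fromstring(xml_text)
    result = []
    def walk(el, depth):
        result.append((depth, el.tag, (el.text or "").strip()))
        for child in el:
            walk(child, depth + 1)
    walk(root, 0)
    return result
```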

    seriously, the main motivation to switch to texi2html is that we want to
    split HTML documentation at arbitrary levels, and at different levels
    depending on each chapter.  

Ah.  That is one area where I do want to improve makeinfo too.

    1) write a parser

Yes.  And this alone is a huge job.  Heck, just *tokenizing* is not
simple.  It cannot be done with anything like a traditional
tokenizer/parser, since the meaning of tokens varies with context.  For
instance, @ is not always an escape (@verbatim ...).
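To make that concrete, here is a toy sketch in Python of why the
tokenizer has to be modal: inside @verbatim, '@' is literal text until
'@end verbatim', while elsewhere it introduces a command.  This is an
illustration only, nothing like makeinfo's real lexer, and it ignores
braces, line commands, and much else:

```python
import re

def tokenize(source):
    # Toy, context-sensitive tokenizer sketch (NOT makeinfo's lexer).
    # Inside @verbatim, '@' is plain text until '@end verbatim';
    # elsewhere '@' starts a command.
    tokens = []
    i = 0
    while i < len(source):
        if source.startswith("@verbatim", i):
            end = source.find("@end verbatim", i)
            if end == -1:  # unterminated: take the rest as verbatim
                tokens.append(("verbatim", source[i + len("@verbatim"):]))
                break
            tokens.append(("verbatim", source[i + len("@verbatim"):end]))
            i = end + len("@end verbatim")
        elif source[i] == "@":
            m = re.match(r"@([a-zA-Z]+|.)", source[i:])
            if m is None:  # lone trailing '@'
                tokens.append(("text", "@"))
                i += 1
            else:
                tokens.append(("command", m.group(1)))
                i += m.end()
        else:
            m = re.match(r"[^@]+", source[i:])
            tokens.append(("text", m.group(0)))
            i += m.end()
    return tokens
```

Note how the same character, '@', yields different token kinds
depending on which mode the scanner is in -- that is exactly what a
traditional fixed-token lexer cannot express.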

And there is no way to parse bigger constructs without actually
recognizing/executing some code, because of the damned @macro.
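A tiny illustration of that point (a hypothetical helper, handling only
zero-argument macros -- real @macro takes arguments and can nest): you
cannot even finish scanning the document without expanding definitions
as you go, because the expanded text is what gets tokenized.

```python
def expand_macros(source):
    # Hedged sketch: collect @macro definitions while reading, and
    # substitute zero-argument invocations ("@name{}") into later
    # lines.  Real Texinfo macros are far more involved.
    macros = {}
    out = []
    lines = iter(source.splitlines())
    for line in lines:
        if line.startswith("@macro "):
            name = line.split()[1].rstrip("{}")
            body = []
            for b in lines:  # consume up to @end macro
                if b.strip() == "@end macro":
                    break
                body.append(b)
            macros[name] = "\n".join(body)
        else:
            for name, body in macros.items():
                line = line.replace("@" + name + "{}", body)
            out.append(line)
    return "\n".join(out)
```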

Anyway, yes, what you're outlining is right in line with what I have
been dreaming about.

Thanks,
karl
