
Coding system robustness?

From: David Kastrup
Subject: Coding system robustness?
Date: Fri, 18 Mar 2005 18:45:42 +0100
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.0.50 (gnu/linux)


I'd like to know whether coding systems in general are supposed to be
robust, meaning that decoding an arbitrary byte string with a coding
system and re-encoding it is guaranteed to reproduce the same byte
string.
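To make the property in question concrete, here is a small sketch (my own illustration, not from the post, and in Python rather than Emacs Lisp): a coding system is "robust" in this sense if decode followed by re-encode is the identity on arbitrary byte strings. A one-byte-per-code-point encoding like Latin-1 has this property; a lossy decode (e.g. UTF-8 with replacement of invalid sequences) does not.

```python
data = bytes(range(256))  # every possible byte value

# latin-1 maps each byte to exactly one code point, so the
# decode -> re-encode round trip reproduces the input exactly.
assert data.decode("latin-1").encode("latin-1") == data

# A lossy decode is not robust: invalid UTF-8 sequences are replaced
# with U+FFFD, so re-encoding yields a different byte string.
lossy = data.decode("utf-8", errors="replace").encode("utf-8")
assert lossy != data
```

The analogous question for Emacs is which of its coding systems behave like the first case on any input byte stream.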

Background for the question: in preview-latex (via AUCTeX) I associate
errors with the original source text, and non-robust transformations
of the input may happen along the way, such as splitting a multibyte
character in the middle, or transliterating some of those characters
but not others.  I currently work around this by having the process
use a raw-text encoding, replacing potentially questionable material,
and re-encoding when it turns out that the contexts do not match the
source file.  This has two disadvantages:

a) I have to go through the whole machinery even when TeX is set up
nicely enough to deliver mostly intact characters, since a raw-encoded
stream will match the source far less often than a properly decoded
one.

b) The displayed output looks like junk unnecessarily.  For
multi-file documents written in different encodings, this problem
cannot be avoided with tolerable effort, but when the encodings within
one document match, it would be nicer for AUCTeX to show a readable
output buffer.

So which encodings are expected to be "transparent" for which versions
of Emacs (we are only interested in 21.3 and newer)?

David Kastrup, Kriemhildstr. 15, 44793 Bochum
