Re: decode-coding-string gone awry?

From: Stefan Monnier
Subject: Re: decode-coding-string gone awry?
Date: Mon, 14 Feb 2005 14:30:32 -0500
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/21.3.50 (gnu/linux)

> Give me a clue: what happens if a process inserts stuff with 'raw-text
> encoding into a multibyte buffer?  'raw-text is a reconstructible
> encoding, isn't it, so the stuff will get converted into some prefix
> byte indicating "isolated single-byte entity instead of utf-8 char"
> and the byte itself or something, right?  And decode-coding-string
> does not want to work on something like that?

If you want accented chars to appear as accented chars in the (process)
buffer (i.e. you don't want to change the AUCTeX part), then raw-text is
not an option anyway.  If you don't mind accented chars appearing as
\NNN, then you can make the buffer unibyte and use `raw-text' as the
process's output coding-system.  That's the more robust approach.
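
A minimal sketch of that unibyte setup (the buffer name, process name,
and command below are hypothetical, just for illustration):

   ;; Run the process in a unibyte buffer with raw-text output,
   ;; so the bytes arrive in the buffer unmodified.
   (with-current-buffer (get-buffer-create "*tex-output*")
     (set-buffer-multibyte nil)
     (let ((proc (start-process "tex" (current-buffer)
                                "latex" "file.tex")))
       (set-process-coding-system proc 'raw-text 'raw-text)))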

If that option is out (i.e. you have to use a multibyte buffer), you'll
basically have to recover the original byte-sequence by replacing

   (regexp-quote (substring string 0 (match-beginning 1)))

with

   (regexp-quote (encode-coding-string
                  (substring string 0 (match-beginning 1))
                  buffer-file-coding-system))

[assuming buffer-file-coding-system is the process's output coding-system] or

   (regexp-quote (string-make-unibyte
                  (substring string 0 (match-beginning 1))))

which is basically equivalent except that you lose control over which
coding-system is used.
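
The reason the re-encoding works: decoding bytes and then encoding the
result with the same coding system is a round trip, so
encode-coding-string recovers the byte sequence the process actually
emitted.  A small illustration (latin-1 is chosen arbitrarily here):

   (let* ((bytes "caf\351")   ; the latin-1 bytes for "café"
          (decoded (decode-coding-string bytes 'latin-1)))
     (equal (encode-coding-string decoded 'latin-1) bytes))
   ;; => t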

