Re: decode-coding-string gone awry?

From: Kenichi Handa
Subject: Re: decode-coding-string gone awry?
Date: Fri, 18 Feb 2005 17:30:20 +0900 (JST)
User-agent: SEMI/1.14.3 (Ushinoya) FLIM/1.14.2 (Yagi-Nishiguchi) APEL/10.2 Emacs/21.3.50 (sparc-sun-solaris2.6) MULE/5.0 (SAKAKI)

In article <address@hidden>, Stefan Monnier <address@hidden> writes:

>>  Even if size_byte == size, it may contain eight-bit-graphic
>>  characters, and decoding such a string is a valid operation.
>>  And even if size_byte > size, it may contain only ASCII,
>>  eight-bit-graphic, and eight-bit-control characters.  It's
>>  also a valid operation to decode it.
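The case being described can be sketched in Elisp (a hedged sketch; latin-1 and the byte values are arbitrary choices, not from the message):

```elisp
;; A unibyte string of raw bytes (size_byte == size) decodes normally:
(decode-coding-string "caf\351" 'latin-1)

;; A multibyte string (size_byte > size) whose only non-ASCII
;; characters are eight-bit ones still carries raw bytes, so
;; Handa's argument is that decoding it is also meaningful:
(decode-coding-string (string-to-multibyte "caf\351") 'latin-1)
```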

> I think it should not be considered valid to decode a multibyte string,
> whether the string happens to contain only ASCII (or ASCII+eight-bit-*)
> or not.

But, we allow decode-coding-region in a multibyte buffer.
Then, it's strange not to allow something like this:
  (decode-coding-string (buffer-substring FROM TO) CODING)
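The parallel being drawn might be sketched like this (a self-contained sketch; latin-1 and the inserted bytes are illustrative, and FROM/TO/CODING are the placeholders from the message):

```elisp
(with-temp-buffer
  (insert "caf\351")  ; a raw latin-1 byte in a (multibyte) buffer
  ;; Allowed today: decode a region of a multibyte buffer in place.
  (decode-coding-region (point-min) (point-max) 'latin-1))

;; Handa's point: `buffer-substring' on a multibyte buffer returns a
;; multibyte string, so forbidding the string form would make
;;   (decode-coding-string (buffer-substring FROM TO) CODING)
;; invalid even though the region form above is valid.
```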

>>  It's not trivial work to change the current code (in coding.c) to signal
>>  an error safely while doing a code conversion.

> If by "safely" you mean "which will not break currently working code",
> I agree.  If by "safely" you mean "which will not break properly written
> code", I disagree.

By "safely" I mean signaling an error only at a safe place,
i.e., a place where we can do a global exit.  For
instance, we can't signal an error in decode_coding_iso2022
because it may be modifying buffer contents directly.

By the way, what do you mean by "properly written code"?

Ken'ichi HANDA
