
Re: decode-coding-string gone awry?

From: Kenichi Handa
Subject: Re: decode-coding-string gone awry?
Date: Mon, 21 Feb 2005 10:19:53 +0900 (JST)
User-agent: SEMI/1.14.3 (Ushinoya) FLIM/1.14.2 (Yagi-Nishiguchi) APEL/10.2 Emacs/21.3.50 (sparc-sun-solaris2.6) MULE/5.0 (SAKAKI)

In article <address@hidden>, Richard Stallman <address@hidden> writes:

>       I think it should not be considered valid to decode a multibyte string,
>       whether the string happens to contain only ASCII (or ASCII+eight-bit-*)
>       characters or not.

>     But what would it mean, in the other cases?

> I see I misread the message the first time--I didn't see the "not".
> Now that I see it, I think maybe I agree.

> If you have a multibyte string that makes sense to decode, and you
> want to decode it, you could call string-as-unibyte first.  That would
> be a way of overriding the error-check.  It would not be hard to do,
> and it would prevent people from falling into problems that are
> mysterious because they don't know that the program decodes multibyte
> strings.
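
A minimal sketch of that workaround, assuming an Emacs with Mule
support (the helper name is hypothetical, not an existing Emacs API):

```elisp
;; If STR is multibyte, convert it to unibyte first so that
;; decode-coding-string operates on the raw bytes and would not
;; trip the proposed error-check.
(defun my-decode-string (str coding)
  "Decode STR with CODING, converting a multibyte STR to unibyte first.
Hypothetical helper for illustration only."
  (decode-coding-string
   (if (multibyte-string-p str)
       (string-as-unibyte str)
     str)
   coding))
```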

The source of the current problem is not that the code was
going to decode a multibyte string, but that the code
generated an unexpected multibyte string (because of the
mysterious automatic unibyte->multibyte conversion).

As it has always been a valid operation to decode a
multibyte string containing only ASCII and eight-bit-*
characters, I believe signalling an error in that case would
cause lots of problems.  On the other hand, signalling an
error only when the string contains a character that is
neither ASCII nor eight-bit-* would be good.
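
A rough sketch of that check in Emacs Lisp (the helper and the exact
set of eight-bit charset names are assumptions; the eight-bit-control
and eight-bit-graphic charsets are from Emacs 21-era Mule):

```elisp
;; Signal an error only when a multibyte string contains a character
;; that is neither ASCII nor one of the eight-bit-* charsets.
;; Hypothetical helper, not part of Emacs.
(defun my-check-decodable (str)
  (when (multibyte-string-p str)
    (dolist (ch (string-to-list str))
      (unless (or (< ch 128)            ; ASCII
                  (memq (char-charset ch)
                        '(eight-bit eight-bit-control eight-bit-graphic)))
        (error "Cannot decode multibyte string containing %c" ch)))))
```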

As you wrote, the slowdown caused by checking this in
advance should be acceptable in the case of
decode-coding-string.

Ken'ichi HANDA
