
Re: detect-coding-string doesn't return all possibilities

From: Jesper Harder
Subject: Re: detect-coding-string doesn't return all possibilities
Date: Fri, 14 Mar 2003 05:34:32 +0100
User-agent: Gnus/5.090016 (Oort Gnus v0.16) Emacs/21.3.50 (gnu/linux)

Kenichi Handa <address@hidden> writes:

> detect-coding-string doesn't return all possible coding systems; it
> returns the coding systems Emacs may automatically detect in the
> current language environment.

Ah, I see.  

Do you know of any other way to decide if using a given coding system
for decoding a string would give a valid result?

A function similar to this would be really useful:

(defun possible-coding-system-for-string-p (str coding-system)
  "Return t if CODING-SYSTEM is a possible coding system for decoding STR.")
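One rough approximation, sketched here for today's Emacs (the `my-'
name is hypothetical, and #x3FFF80..#x3FFFFF is the character range
Emacs uses internally to represent raw, undecodable bytes):

(defun my-possible-coding-system-for-string-p (str coding-system)
  "Return non-nil if CODING-SYSTEM decodes STR without leaving raw bytes.
This only rejects byte sequences that are outright invalid; it cannot
distinguish two encodings that are both structurally valid for STR."
  (not (string-match-p "[\x3fff80-\x3fffff]"  ; raw-byte characters
                       (decode-coding-string str coding-system))))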

The issue comes from a discussion on the Gnus development list (I've
included one of the messages from that thread below).

Gnus does not work very well when using CVS Emacs in a UTF-8 locale,
because many non-MIME-capable clients don't include proper charset
information.  This causes Gnus to decode many Latin-1 strings as UTF-8.

It would help a lot if we could detect that a string cannot possibly be
encoded in UTF-8.  I know that it's not always possible to distinguish,
but just detecting strings that are invalid as UTF-8 would be very
helpful.  This doesn't just apply to UTF-8 but to any coding system, of
course.
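To make the UTF-8 case concrete, here is one way the test could look in
today's Emacs (a sketch; the #x3FFF80..#x3FFFFF range is how Emacs
represents bytes it could not decode):

(let ((bytes (encode-coding-string "café" 'latin-1)))
  ;; Latin-1 "é" is the lone byte #xE9, which cannot start a valid
  ;; UTF-8 sequence, so decoding it as UTF-8 leaves a raw byte behind.
  (string-match-p "[\x3fff80-\x3fffff]"
                  (decode-coding-string bytes 'utf-8)))

A non-nil result means the bytes cannot possibly be UTF-8.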

> But the docstring of detect-coding-system is surely not
> good.  I've just changed the first paragraph as this.  How
> is it?

It's good.

--- Begin Message ---
Subject: Re: charset=macintosh
Date: Sun, 09 Mar 2003 04:56:44 +0100
User-agent: Gnus/5.090016 (Oort Gnus v0.16) Emacs/21.3.50 (gnu/linux)
Simon Josefsson <address@hidden> writes:

> But if what you are saying about UTF-8 clients being MIME-capable is
> true, and since UTF-8 is typically never preferred by current Emacsen,
> isn't Emacs's current guessing the best we can hope for?  Doesn't it
> detect among ISO-8859-X, ISO-2022 and Big5 properly?

No.  I was hoping we could do something like this (for headers):

(let ((coding-systems (detect-coding-string string)))
  (if (memq default coding-systems)
      (decode-coding-string string default)
    (decode-coding-string string (car coding-systems))))

i.e. if the default coding system is valid for the string, then use
that; otherwise use whatever Emacs thinks is the most likely coding
system.  I think this would be ideal.

But unfortunately `detect-coding-string' _doesn't_ return a complete
list of possible coding systems.  Consider this scenario: 

  I'm using Emacs in a Latin-1 locale.  dk.* newsgroups work fine
  because latin-1 is the default.  But I also subscribe to, say, a few
  Korean newsgroups.  The entry in `gnus-group-charset-alist':

         ("\\(^\\|:\\)han\\>" euc-kr)

  should take care of selecting the proper default charset.  But *oops*,
  `detect-coding-string' doesn't think that euc-kr is a possible charset
  for a Korean string encoded in euc-kr:

       (detect-coding-string (encode-coding-string "안녕" 'euc-kr))
       => (iso-latin-1 iso-latin-1 raw-text japanese-shift-jis 
           chinese-big5 no-conversion)

So the above approach would fail.
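A safer variant of the approach above, sketched with hypothetical `my-'
names (it tests the default coding system directly, instead of trusting
`detect-coding-string' to enumerate every possibility):

(defun my-decode-header (string default)
  (let ((decoded (decode-coding-string string default)))
    (if (string-match-p "[\x3fff80-\x3fffff]" decoded)  ; raw bytes left?
        ;; DEFAULT couldn't decode STRING cleanly; fall back to the guess.
        (decode-coding-string string (car (detect-coding-string string)))
      decoded)))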

>   2) Users with Emacs in a UTF-8 locale prefer UTF-8 too often, even
>      when the data is invalid UTF-8 and another encoding should be
>      selected.
> The second situation is a bug, and I hope we can fix this.

Yep, 2) is the most serious problem, especially because more and more
people are (often unknowingly) using a UTF-8 locale now that Red Hat 8
has switched to UTF-8 by default.  Those people would experience Gnus
as broken when reading hierarchies like dk.* or de.*.

--- End Message ---
