emacs-devel

Re: Fwd: Re: Inadequate documentation of silly characters on screen.


From: Alan Mackenzie
Subject: Re: Fwd: Re: Inadequate documentation of silly characters on screen.
Date: Thu, 19 Nov 2009 21:25:50 +0000
User-agent: Mutt/1.5.9i

Hi, Davis, always good to hear from you!

On Thu, Nov 19, 2009 at 11:25:05AM -0800, Davis Herring wrote:
> [I end up having to say the same thing several times here; I thought it
> preferable to omitting any of Alan's questions or any aspect of the
> problem.  It's not meant to be a rant.]

> > No, you (all of you) are missing the point.  That point is that if an
> > Emacs Lisp hacker writes "?ñ", it should work, regardless of
> > what "codepoint" it has, what "bytes" represent it, whether those
> > "bytes" are coded with a different codepoint, or what have you.  All of
> > that stuff is uninteresting.  If it gets interesting, like now, it is
> > because it is buggy.

> When you wrote ?ñ, it did work -- that character has the Unicode (and
> Emacs 23) code point 241, so that two-character token is entirely
> equivalent to the token "241" in Emacs source.  (This is independent of
> the encoding of the source file: the same two characters might be
> represented by many different octet sequences in the source file, but you
> always get 241 as the value (which is a code point and is distinct from
> octet sequences anyway).)
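
A quick sketch of Davis's point, assuming Emacs 23 or later (evaluate in a `*scratch*' buffer):

```elisp
;; The read syntax ?ñ is just an integer literal for the code point:
(eq ?ñ 241)   ; => t
;; and is independent of how the source file happens to be encoded:
(eq ?ñ #xF1)  ; => t  (241 = #xF1, the Unicode code point of ñ)
```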

OK - so what's happening is that ?ñ is unambiguously 241.  But Emacs
cannot say whether that is unibyte 241 or multibyte 241, which it encodes
as 4194289.  Despite not knowing, Emacs is determined never to confuse a
4194289 type of 241 with a 241 type of 241.  So, despite the fact that
the character 4194289 probably originated as a unibyte ?ñ, it prints it
uglily on the screen as "\361".

> But you didn't insert that object!  You forced it into a (perhaps
> surprisingly: unibyte) string, which interpreted its argument (the integer
> 241) as a raw byte value, because that's what unibyte strings contain. 
> When you then inserted the string, Emacs transformed it into a (somewhat
> artificial) character whose meaning is "this was really the byte 241,
> which, since it corresponds to no UTF-8 character, must merely be
> reproduced literally on disk" and whose Emacs code point is 4194289. 
> (That integer looks like it could be derived from 241 by sign-extension
> for the convenience of Emacs hackers; the connection is unimportant to the
> user.)
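
A hedged sketch of the path Davis describes, and of his sign-extension aside (Emacs 23; behaviour in a multibyte buffer):

```elisp
;; A unibyte string stores raw bytes, so `aset' treats 241 as a byte:
(let ((s (copy-sequence "abc")))  ; "abc" is pure ASCII, hence unibyte
  (aset s 0 ?ñ)                   ; stores the raw byte #xF1
  (insert s))                     ; the stray byte displays as \361abc
;; The raw byte's internal code point is 241 sign-extended in the
;; 22-bit character space: 241 is -15 as a signed byte, and
(- (expt 2 22) 15)                ; => 4194289
```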

Why couldn't Emacs have simply displayed the character as "ñ"?  Why does
it have to air its internal dirty linen in front of an unsuspecting
hacker?

> > OK.  Surely displaying it as "\361" is a bug?  Should it not display
> > as "\17777761"?  If it did, it would have saved half of my ranting.

> No: characters are displayed according to their meaning, not their
> internal code point.  As it happens, this character's whole meaning is
> "the byte #o361", so that's what's displayed.
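
The two values under discussion can be compared directly; a small illustration (Emacs 23, in a multibyte buffer):

```elisp
;; 241 means the character ñ; 4194289 means "the raw byte #o361":
(insert 241)      ; inserts ñ
(insert 4194289)  ; inserts the raw byte, which displays as \361
;; #o361 is simply octal notation for 241:
(eq #o361 241)    ; => t
```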

That meaning is an artificial one imposed by Emacs itself.  Is there any
pressing reason to distinguish 4194289 from 241 when displaying them as
characters on a screen?

> > So, how did the character "ñ" get turned into the illegal byte #xf1?
> > Is that the bug?

> By its use in `aset' in a unibyte context (determined entirely by the
> target string).

> >> You assume that ?ñ is a character.

> > I do indeed.  It is self evident.

> Its characterness is determined by context, because (as you know) Emacs
> has no distinct character type.  So, in the isolation of English prose, we
> have no way of telling whether ?ñ "is" a character or an integer, any more
> than we can guess about 241.  (We can guess about the writer's desires,
> but not about the real effects.)

> > Now, would you too please just agree that when I execute the three
> > forms above, and "ñ" should appear?

> That's Stefan's point: should common string literals generate multibyte
> strings (so as to change the meaning, not of the string, but of `aset',
> to what you want)?

Lisp is a high level language.  It should do the Right Thing in its
representation of low level concepts, and shouldn't bug its users with
these things.
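
For what it's worth, forcing the string to be multibyte up front does give the behaviour being asked for; a sketch, assuming Emacs 23:

```elisp
;; With an explicitly multibyte target, `aset' takes 241 as a character:
(let ((s (string-to-multibyte "abc")))
  (aset s 0 ?ñ)   ; stores the character ñ, not a raw byte
  (insert s))     ; displays as ñbc
```

This is also where the O(n) cost Davis mentions later comes from: storing a multi-octet character into a multibyte string may force the rest of the string to be shifted.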

The situation is like having a text document with some characters in
ISO-8859-1 and some in UTF-8.  Chaos.  I stick with one of these
character sets for my personal stuff.

> Maybe: one could also address the issue by disallowing `aset' on
> unibyte strings (or strings entirely) and introducing `aset-unibyte'
> (and perhaps `aset-multibyte') so that the argument interpretation (and
> the O(n) nature of the latter) would be made clear to the programmer.

No.  The problem should be solved by deciding on one single character
set visible to lisp hackers, and sticking to it rigidly.  At least,
that's my humble opinion as one of the Emacs hackers least well informed
on the matter.  ;-(

> Maybe the doc-string for `aset' should just bear a really loud warning.

Yes.  But it's not really `aset' which is the liability.  It's "?".

> It bears more consideration than merely "yes" to your question, as
> reasonable as it seems.

> > What is the correct Emacs internal representation for "ñ" and "ä"?  They
> > surely cannot share internal representations with other
> > (non-)characters?

> They have the unique internal representation as (mostly) Unicode code
> points (integers) 241 and 228, which happen to be identical to the
> representations of bytes of those values (which interpretation prevails in
> a unibyte context).
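
That is, as a sketch (Emacs 23):

```elisp
?ñ  ; => 241  (Unicode code point, also the internal code point)
?ä  ; => 228
;; In a unibyte context the very same integers are read as byte values.
```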

Sorry, what the heck is "the byte with value 241"?  Does this concept
have any meaning, any utility beyond the Machiavellian one of confusing
me?  How would one use "the byte with value 241", and why does it need to
be kept distinct from "ñ"?
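
One place the distinction earns its keep (an illustration, not something from the thread): a byte #xF1 in a file is only the character "ñ" under Latin-1; under UTF-8 it is an undecodable stray byte, and it must survive a visit-and-save round trip unchanged.  Emacs 23 therefore parks such bytes on the distinct code points #x3FFF80..#x3FFFFF:

```elisp
;; Raw byte b (#x80..#xFF) maps to code point #x3FFF80 + (b - #x80):
(+ #x3FFF80 (- #xF1 #x80))            ; => 4194289
;; and can be mapped back to the byte value:
(multibyte-char-to-unibyte 4194289)   ; should yield 241
```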

> Davis

-- 
Alan Mackenzie (Nuremberg, Germany).



