
From: Chad Brown
Subject: Re: [Emacs-diffs] master db828f6: Don't rely on defaults in decoding UTF-8 encoded Lisp files
Date: Sun, 27 Sep 2015 12:52:15 -0700

> On 27 Sep 2015, at 11:41, Eli Zaretskii <address@hidden> wrote:
>> From: Chad Brown <address@hidden>
>> Date: Sun, 27 Sep 2015 09:03:54 -0700
>> Cc: address@hidden
>>  -finput-charset=charset
>>  Set the input character set, used for translation from the character
>>  set of the input file to the source character set used by GCC. If
>>  the locale does not specify, or GCC cannot get this information
>>  from the locale, the default is UTF-8. This can be overridden by
>>  either the locale or this command line option. Currently the command
>>  line option takes precedence if there's a conflict. charset can be
>>  any encoding supported by the system's iconv library routine.
> Note the "if the locale does not specify" clause.  That should almost
> never happen.

Sure. I almost mentioned that, but at the time it seemed clear to me
that we were talking about the defaults for each. I used to deal with
this issue ‘back in the day’, so it piqued my curiosity enough to
look. Roughly speaking, the modern ‘programming languages’ default to
UTF-8, while a decent chunk of the ‘scripting languages’ seem to be in
a messier state, but with established methods (coding cookies, odd
quoting, ASCII by fiat, trying not to look at comments, etc.).
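
As one concrete illustration of the coding-cookie approach: Python's
PEP 263 cookie uses the same first-line `-*- coding: ... -*-` syntax
as Emacs, and the stdlib tokenizer can be asked how it would decode a
given file (a minimal sketch; the file contents are made up):

```python
import io
import tokenize

# A source file declaring a non-default encoding in an Emacs-style
# cookie on its first line; Python's tokenizer honors it (PEP 263).
src = b"# -*- coding: iso-8859-15 -*-\nprice = 'caf\xe9'\n"

# detect_encoding() reads at most two lines looking for a BOM or a
# coding cookie and returns the encoding it would decode with.
encoding, first_lines = tokenize.detect_encoding(io.BytesIO(src).readline)
print(encoding)  # iso-8859-15
```

Absent a cookie (or BOM), the same call falls back on UTF-8, which is
the "ascii by fiat"-style default modern Python settled on.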

Since then, exchanges on this thread have suggested that maybe I was
wrong about the topic at hand, but the data still seemed useful, so I
pushed it along, with the full quote for context. Sorry if it caused
any confusion.

