
From: Stefan Monnier
Subject: bug#4047: 23.1.1: hexl-mode doesn't like UTF8 files with a byte-order mark
Date: Mon, 10 Aug 2009 15:45:29 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1.50 (gnu/linux)

>> Btw, I doubt that any encoding that uses BOM can ever be appropriate
>> for encoding command-line arguments.  Maybe we should treat them
>> specially in call-process and its ilk.
> The bug is that hexlify-buffer assumes that manually encoding the
> command line stops call-process from encoding it again, which does not
> work: coding-system-for-write takes absolute precedence.  IMHO
> call-process should not use coding-system-for-write for encoding the
> command line; if anything, there should be a separate override.
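
(For illustration, here is a minimal Elisp sketch of the double-encoding pattern under discussion; the names `file-name` and the argument list are invented, and the real hexlify-buffer code differs in detail:)

```elisp
;; Hypothetical sketch of the double-encoding problem.
;; The caller encodes an argument by hand, expecting the resulting
;; unibyte string to be passed through verbatim ...
(let* ((arg (encode-coding-string file-name locale-coding-system))
       ;; ... but this binding also governs how call-process encodes
       ;; its ARGS, so the already-encoded ARG is encoded a second
       ;; time on the way out -- e.g. gaining a spurious BOM here.
       (coding-system-for-write 'utf-8-with-signature))
  (call-process "hexl" nil t nil arg))
```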

I believe we've bumped into this problem already in the past.
To me, it's clear that call-process should be careful about encoding
arguments, since the coding-system to use may depend on the argument
and/or the command.  In general, then, the caller will want to specify
explicitly some coding system for the arguments, possibly a different
one for each argument.  An override variable might be a good idea, but
it won't cater to the case where each arg requires a different
encoding, so the most important thing is to make sure that unibyte args
don't get re-encoded.

Unless Handa objects, I'd recommend we change encode_coding_string to be
a no-op on unibyte strings (though we may want to let it obey EOL
conversions).  If there are good reasons not to do that, then
Fcall_process should be changed to not call encode_coding_string on
unibyte strings.
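
(At the Lisp level, the proposed rule would amount to something like the following sketch; the helper name is hypothetical, and the actual change would be in the C code:)

```elisp
;; Sketch of the proposed rule: a unibyte string is taken to be
;; already-encoded bytes and is passed through unchanged; only
;; multibyte strings get encoded.
(defun encode-arg-maybe (arg coding)  ; hypothetical helper name
  (if (multibyte-string-p arg)
      (encode-coding-string arg coding)
    arg))
```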

