


From: David Kastrup
Subject: Re: [Emacs-diffs] master db828f6: Don't rely on defaults in decoding UTF-8 encoded Lisp files
Date: Sun, 27 Sep 2015 09:42:06 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.0.50 (gnu/linux)

Eli Zaretskii <address@hidden> writes:

> I've also looked at the *.po files in the latest releases of GNU Make,
> Gawk, Texinfo, and Binutils, and I find that between 20% and 25% of
> such files still use non-UTF-8 encodings.

Which, btw, I consider crazy.  It's one thing to pick an encoding for
local language processing and display.  But for an internationalization
system, it does not really make sense to venture into local encodings
outside of I/O.  There is a really strong case for using only UTF-8 in
PO files instead of juggling many-to-many encoding setups.
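For what it's worth, a PO file declares its own encoding in the header
entry, so moving a catalog to UTF-8 is a one-line change plus a
recoding of the file contents.  A sketch of a typical header (the
project name is made up):

```po
msgid ""
msgstr ""
"Project-Id-Version: example 1.0\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
```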

> I see similar figures for the txi-*.tex files that came with Texinfo
> 6.0.  Presumably, that follows the default conventions of the
> respective locales.

Texinfo uses PDFTeX for its encoding processing, and PDFTeX is firmly an
8-bit system.  TeX wouldn't be TeX if it weren't macroprogrammed to deal
with that, but Texinfo being a rather low-level format, UTF-8 processing
time dwarfs anything else.

So if you have, say, a German input file for Texinfo and can process it
either in Latin-1 or UTF-8, chances are that the Latin-1 version runs
more than twice as fast.
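A tiny illustration of the underlying issue, in Python rather than TeX
(so this says nothing about Texinfo's actual macros): non-ASCII German
characters are one byte each in Latin-1 but two bytes each in UTF-8,
and an 8-bit engine has to reassemble those multibyte sequences by
macro expansion.

```python
# The same German word encoded both ways.
word = "Größe"

latin1 = word.encode("latin-1")
utf8 = word.encode("utf-8")

# In Latin-1, every character is a single byte the engine can act on
# directly; in UTF-8, "ö" and "ß" each become a two-byte sequence.
print(len(latin1))  # 5
print(len(utf8))    # 7
```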

Now that's of course just the processing for printed output.  Since the
rest of Texinfo is now written in Perl, the PDFTeX backend is likely the
fastest one right now either way, so it may not be as much of a concern.

But many Texinfo sources originate from a time when UTF-8 was either
not supported at all or was a major contributor to conversion time.

David Kastrup

