

Re: CSV parsing and other issues (Re: LC_NUMERIC)

From: Eli Zaretskii
Subject: Re: CSV parsing and other issues (Re: LC_NUMERIC)
Date: Thu, 10 Jun 2021 19:57:33 +0300

> From: Maxim Nikulin <manikulin@gmail.com>
> Date: Thu, 10 Jun 2021 23:28:59 +0700
> Cc: boruch_baum@gmx.com
> > We already use nl_langinfo in locale-info, so what exactly
> > are you suggesting here? adding more items?  You don't
> > really expect Lisp programs to format numbers such as
> > 123,456 by hand after learning from locale-info that the
> > thousands separator is a comma, do you?
> I have hijacked Boruch's thread and changed the subject to "CSV 
> parsing".

That explains part of my confusion.  Please try not to hijack
discussions; instead, start a separate thread, to avoid such confusion.

For processing CSV, if there's a need to know whether the locale uses
the comma as a decimal separator, we could indeed extend locale-info.
But such an extension is almost trivial and doesn't even touch on the
significant problems in the rest of the discussion.

> I was trying to support Boruch's point that buffer-local variables may
> be an important part of the locale context, more precise than global
> settings,

They are more precise, but they don't support mixed languages in the
same buffer, something that happens in Emacs very frequently.  Which
means they are not precise enough.  So my POV is that we should look
for a way to be able to specify the language of some span of text, in
which case buffers that use a single language will be a special case.

> > And then we have conceptual problems.  For example, in a
> > multilingual editor such as Emacs, the notion of a "buffer
> > language" does not always make sense; you'd need to support
> > portions of text that have different language properties.
> > Imagine switching locales as Emacs processes adjacent
> > stretches of text and other complications.  For example,
> > changing letter-case for a stretch of Turkish text is
> > supposed to be different from the English or German text.
> > I'm all ears for ideas how to design such "language
> > support".  It definitely isn't easy, so if you have ideas,
> > please voice them!
> I neither have a consistent vision nor see a conceptual problem.

Here's a trivial example:

  (insert (downcase (buffer-substring POS1 POS2)))

Contrast with

  (insert (downcase "FOO"))

The function 'downcase' gets a Lisp string, but it has no way of
knowing whether the string is actually a portion of current buffer's
text.  So how can it apply the correct letter-case conversions, even
if some buffer-local setting specifies that this should be done using
some specific language's rules?

IOW, one of the non-trivial problems is how to process Lisp strings
correctly for these purposes.  Buffers can have local variables, but
what about strings?

> > If you are suggesting that we introduce ICU as a dependency,
> > we could discuss the pros and cons.
> I consider it as the most complete available implementation.  Do you 
> know a comparable alternative?

Yes: what we have already in Emacs.  That covers a lot of the same
Unicode turf that ICU handles, because we import and use the same
Unicode files and tables.  The question is: what is best for the
future development of Emacs in this area: depend on ICU (which would
mean we need to rewrite lots of code that is working well), or extend
what we have to support more Unicode features?  One not-so-trivial
aspect of this is efficiency of fetching character properties (Emacs
has char-tables for that, which are efficient both CPU- and
memory-wise).  Another aspect is support for raw bytes in buffers and
strings.  And there are probably some others.

It is not a simple decision.

