
Re: RFC: enum instead of #define for tokens

From: Paul Eggert
Subject: Re: RFC: enum instead of #define for tokens
Date: Wed, 3 Apr 2002 09:01:05 -0800 (PST)

> From: Akim Demaille <address@hidden>
> Date: 03 Apr 2002 15:56:47 +0200

> It does look good, but would some people really use it?  Now that I
> know we are bound by POSIX to #define, I'm tempted to leave it as is
> for C, and provide something better only for C++.  What is your opinion?

The main advantage of having an enum is that the enum constants
are visible to GDB, so that one can type, for example:

  (gdb) p var = FOO

and assign FOO to a variable.  This does not work with #defines.

It would be even nicer if the enum type were used when possible, with
internal variables for example, of course continuing to use 'int' for
the existing interfaces.  That way, when you use GDB to print an
internal variable, it will print as FOO and not as 257.

This nicety is not essential, so I don't think it should be put into
the maintenance branch.  But it would be useful to me when debugging a
parser.  Also, the nicety should be used only when the compiler is not
overly pedantic about int versus enum, so it should be controlled by a
preprocessing symbol to disable it.  I would enable it by default if
__STDC__ is defined.

A nice little project for someone who has the time....

Two more topics are related to this one.

1.  Currently Bison consistently uses 'short' for some variables, even
though the values might not fit into 'short'.  For example, yyr1 is
declared to be of type short, but it should be the enum that we're
talking about, in case the user declares more than 32768 - 256 - 1
tokens.  There are several other instances of this problem not related
to the enum; for example, line numbers do not always fit into 'int' on
common 32-bit hosts.  It will be a pain to fix all these problems, I'm
afraid.

2.  Currently Bison assumes 8-bit bytes (i.e. that UCHAR_MAX is 255).
It also assumes that the 8-bit character encoding is the same for the
invocation of 'bison' as it is for the invocation of 'cc', but this is
not necessarily true when people run bison on an ASCII host and then
use cc on an EBCDIC host.  I don't think these topics are worth our
time addressing (unless we find a gung-ho volunteer for EBCDIC or
PDP-10 ports :-) but they should probably be documented somewhere.

Whew!  I'd better stop discursing....

> I think I was wrong when I said that the trick you implemented for
> 1.3x was not to be installed in 1.5x.  It should probably be applied
> there too.

Sorry, which trick was that?  I've installed so many lately.  Or
perhaps you mean the cumulative set of tricks for C++?
