
Re: RFC: enum instead of #define for tokens

From: Hans Aberg
Subject: Re: RFC: enum instead of #define for tokens
Date: Wed, 3 Apr 2002 09:38:15 +0200

At 00:41 +0200 2002/04/03, Hans Aberg wrote:
>As for C++, POSIX does not say anything about that (confirmed by a comment
>somewhere in the C/C++ newsgroups), so you can do what you like. -- If the
>token number type can be changed, I think it should be const, not enum, so
>that one can accommodate it. (For example, when using Unicode, one wants an
>integral type that can hold at least 2^n non-negative values, n = 20 - 24.
>There is no guarantee that an int can hold that, even though on most
>machines it will.)

Actually, the C++ standard, section 7.2, paragraph 5, says:
The underlying type of an enumeration is an integral type that can
represent all the enumerator values defined in the enumeration. It is
implementation-defined which integral type is used as the underlying type
for an enumeration except that the underlying type shall not be larger than
int unless the value of an enumerator cannot fit in an int or unsigned int.

So, if one has a special integral type for Unicode characters, say
unicode_char, with a maximum value unicode_max, then one can ensure that the
token underlying type is at least as wide as unicode_char by code like
  enum yytoken_type {
    token_error = unicode_max + 1
    /* further token enumerators */
  };
provided the enumerator values fit into that type.

  Hans Aberg
