help-bison

Re: Wrapping a GUI around flex-bison engine


From: Hans Aberg
Subject: Re: Wrapping a GUI around flex-bison engine
Date: Sat, 13 Nov 2004 19:54:19 +0100
User-agent: Microsoft-Outlook-Express-Macintosh-Edition/5.0.6

On 2004/11/12 02:00, Andrej Prsa at address@hidden wrote:

>> I am not sure what exactly you have in your mind here: The Bison parser
>> processes a token stream. It does not matter how this token stream is
>> generated. You can create it via GUI or a standard text stream, which is
>> tokenized using a lexer.
> 
> Exactly. The question I'm asking here is on style. Imagine I have a GUI
> that takes input from the user for, say, 5 parameters. The user changes
> the values of these parameters using, say, a spin-button, one by one. What
> would be the best practice to feed these changes to the lexer-parser
> system? Is it best to do it one-by-one or to buffer the changes somehow?
> What about the feedback? For example, how should one handle changes
> that the scripter has induced interactively at the GUI level -
> should they be updated automatically or not? For example, changing one
> parameter in the GUI causes a change of another parameter, which is done by
> the computation in the back-end (the parser). I see many caveats here and
> before I indulge in re-inventing the wheel, I thought I'd ask...

The Bison parser processes its input in one batch, and is not at all suitable
for admitting partial changes to the input. In fact, handling such things is
a research problem. You may want to have a look at the papers by Susan L.
Graham <address@hidden>.

In other words, with Bison, you will have to refeed the whole input and
invoke a new parser run. If you want to struggle with Bison to refine that,
you will also have to wrestle with the fact that the parser may or may not
need one token of lookahead, and there is no immediate way to know which.

>>> Furthermore, I'd like to have a
>>> plugin-aware engine, so that new pieces of code may be added to expand
>>> the basic functionality - I suppose the GUI is thus just one of
>>> perhaps many plugins to the engine. So if you have any ideas or
>>> recommendations, I'd be very grateful to hear them!
>> 
>> If you somehow want to create a dynamically extensible grammar, forget
>> about Bison: at best, one can create dynamic grammar subparts, if
>> the subparts are written by hand or some other tool. Bison uses a parser
>> generating algorithm, LALR(1), which treats the grammar as one whole,
>> and a change in the input grammar requires a complete recomputation of
>> the output parser.
> 
> This is clear. In fact, the lexer-parser-ast engine is indeed fixed (i.e.
> not dynamic). I was referring to the general plugin system that would
> communicate with the engine by properly introducing itself. Again, an
> example is worth a thousand words: imagine I have a simple parser calculator
> that performs all basic operations. Furthermore, assume I have an external
> function written in C (completely independent of the parser and functions
> defined therein). For the sake of the argument, let's say it's a function
> for calculating the factorial. What I'd like is for this function to act
> as a plug-in: to introduce itself to the engine and to add the
> corresponding command to it, e.g. factorial (arg) - something like
> #include statements in C. The engine should then be able to parse this
> #factorial (arg) and if encountered, it should call the appropriate C
> function.

Then you just invoke this function from a DLL via the normal C extensions
you must have in place. Bison just writes out the actions essentially
literally. You can then write grammar parts that load this DLL as you wish.

> I don't think such add-ons must change the grammar - there can always be a
> hook present in the lexer to catch all prescribed keywords and a generated
> AST node that calls a suitable C handler. My question is what is the best
> way to make this C handler sensitive to plug-ins. The idea I have is
> to use a symbol table - for each introduced function from the external C
> file I'd create a symbol pointing to the corresponding AST node. Whenever
> a symbol is matched, the function would get executed. I just don't know
> whether this is the best way.

The way I handle definitions in stacked environments is to use a stacked
lookup table. On each level, there is a C++ std::map, where the key is a
std::string. The value is a pair: the semantic value, and a number, the
parser token number of the defined variable. Somewhat adapted to your
situation, suppose the input code is
    function f1;
    ...
    f1 := dll("foo", "bar");
Then the parser would create a variable object of type "function", and put
the value of, say,
  %token function_variable
onto the lookup table. Later you would have some grammar code looking like
%token assignment_key ":="
...
%token dll
%token name
%token semicolon ";"
%token lp "("
%token rp ")"
%token comma ","
%%
...
  assignment:
      function_variable ":=" dll "(" name "," name ")" ";" {
        variable* vp = /* look up $1 in the lookup table */;
        vp->value = dll_load($5, $7);
      }
    | ...
  ;
The lexer, when it finds the name "f1", will look it up in the table and
return the token value, which is a function_variable token. If a name is not
found, it is undefined, and must be part of a definition of that name, or an
error.

> I am indeed very grateful for your time and effort; please forgive the
> generality of my questions; I'm an astrophysicist trying to write clean
> and decent code for a change! ;)

The learning curve is steep, but once learned, it is most gratifying.




