Proper way to handle ambiguous tokens in PLY - python

I am implementing an existing scripting language in part as a toy project and in part so that I can write my own implementation of the program that uses the language. One of the issues I'm running into is that I have a few constructs that overlap in terms of specification but are more clear when used:
Variables - r'[A-Za-z0-9_]+' # Yes, '456' is a valid variable name
Numbers - r'-?[0-9]+(\.[0-9]+)?'
Macros - r'\#[A-Za-z0-9_]+'
Field Reference - r'(this\.)?([A-Za-z]+\.)*[A-Za-z]+'
Tag reference - r'[A-Za-z0-9_]+\.[A-Za-z0-9_]*\??'
This mostly works, but, for example, "456" could be a number or a variable, and "34.567" could be a number or a tag reference (the documentation for the scripting language says that it's a bad idea to start identifiers with numbers, but doesn't outright forbid it). Is there a good way to handle the potential ambiguity of these tokens? Currently, I'm tokenizing the former as a variable and the latter as a number, then handling it later in the parser, but it feels very clumsy.
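For what it's worth, a quick sanity check (my own snippet, not part of the question) confirms the overlap on exactly those examples:

import re

VARIABLE = re.compile(r'[A-Za-z0-9_]+')
NUMBER = re.compile(r'-?[0-9]+(\.[0-9]+)?')
TAG_REF = re.compile(r'[A-Za-z0-9_]+\.[A-Za-z0-9_]*\??')

# "456" satisfies both the variable and the number rule
print(bool(VARIABLE.fullmatch("456")), bool(NUMBER.fullmatch("456")))       # True True
# "34.567" satisfies both the number and the tag-reference rule
print(bool(NUMBER.fullmatch("34.567")), bool(TAG_REF.fullmatch("34.567")))  # True True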

Is there any need for the tokenizer to distinguish between variables, numbers, field references and tag references? Presumably, the parser will be able to decide which of those categories a particular token falls into, by consulting its symbol table of declared variables and possibly by considering the context in which the token was used. If that's the case, then you can just return a single token for all four cases, which will simplify your lexer and probably your grammar.
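For concreteness, here is a minimal PLY sketch of that idea (my own, not from the question or answer): the variable, number, field-reference and tag-reference rules are folded into one deliberately broad ATOM token, and only macros keep a separate rule because of their leading #. The combined pattern is a simplification, so treat it as a starting point rather than the language's real lexical spec.

import ply.lex as lex

tokens = ('ATOM', 'MACRO')

def t_ATOM(t):
    # One over-broad pattern covering variables, numbers, field references
    # and tag references; the parser decides later what the token denotes.
    r'-?[A-Za-z0-9_]+(\.[A-Za-z0-9_]+)*\.?\??'
    return t

def t_MACRO(t):
    r'\#[A-Za-z0-9_]+'
    return t

t_ignore = ' \t'

def t_error(t):
    print("Illegal character %r" % t.value[0])
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input("this.foo 456 34.567 #mymacro tag.?")
for tok in iter(lexer.token, None):
    print(tok.type, tok.value)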
There's a general principle of parser design, which is never sufficiently emphasised, so I'll put it in bold here:
Every parser component should do the absolute minimum amount of work necessary to distinguish between correct inputs.
In other words, if the only possibilities are a unique correct parse and an input error, and it's at all difficult to decide at that point which applies, then just pass the decision on to the next phases, where more information is available. Only do the work necessary to distinguish between two or more different correct inputs.
This applies, for example, to trying to do type-checking in the parser. That's a losing proposition; there isn't enough information to do it correctly until semantic analysis is complete and you know what all of the identifiers refer to. More importantly, it adds no benefit to the parser (or the lexer) because it does not affect how a correct input is parsed; all it does is let you identify certain (not all) incorrect inputs. By the above principle, you shouldn't try.
This principle comes up over and over again in parsing. There is always the temptation to try to make error detection "more precise" too early in the parse. Resist! Do error detection only when you have enough information to do it reliably. You'll have to do it at that point anyway, so you're not saving anything by trying to do some of it earlier. Early detection might shave a few microseconds off of a failed parse, but the speed of parsing incorrect inputs is not very important. Always optimise for correct inputs.
This also applies to writing grammars for syntaxes which are not easy to shoehorn precisely into a one-token-lookahead grammar. It's OK to let an incorrect input sneak through the parse and then detect it during semantic analysis. For example, you could try to detect whether built-in function calls have the correct number of arguments. But why bother? Letting a call with too many or too few arguments go through to semantic analysis does not create any ambiguities. There are lots of other examples.
Other big benefits of letting errors trickle down to the semantic analysis are that it's much easier to generate accurate error messages, which are useful for the end user, and that it's much easier to do error recovery, so you can continue processing the input and provide multiple errors and warnings in a single run, another feature your users will appreciate.
There are exceptions to every guideline, so I'm not saying this is an absolute rule. In COBOL, for example, some operators have different parsing precedences depending on their datatype. (No sensible language designer would commit that barbarity today, I hope, but you do need to take it into account for legacy parsers.) You can only pass a decision down the line if it doesn't create ambiguities between correct inputs. But you should always try to keep this guideline in mind.

Related

(python - cpp) - How to split C++ code while writing a lexical analyzer in Python?

I wrote a lexical analyzer for C++ code in Python, but the problem is that when I use input.split(" "), it won't recognize code like x=2 or function() as three different tokens unless I manually add a space between them, like x = 2.
It also fails to recognize the tokens at the beginning of each line.
(If I add spaces between every two tokens and also at the beginning of each line, my code works correctly.)
I tried splitting the code first by lines and then by spaces, but it got complicated and I still wasn't able to solve the first problem.
I also thought about splitting on operators, but I couldn't actually implement it; besides, I need the operators to be recognized as tokens as well, so that might not be a good idea.
I would appreciate any solution or suggestion. Thank you.
f = open("code.txt")
input = f.read()
input = input.split(" ")

f = open("code.txt")
input = f.read()
input1 = input.split("\n")
for var in input1:
    var = var.split(" ")
Obviously, splitting an expression only on spaces is not going to work for both x=2 and x = 2.
What you are looking for is a solution that works with both, right?
A basic approach is to combine the conditions you need with an and operator and split on each delimiter. Note that this solution isn't scalable and isn't good practice, but it can help you work your way toward better (if harder) solutions.
if input.split(' ') and input.split('='):
An intermediate solution would be to use regular expressions.
Regex isn't an easy topic, but you can check the online documentation, and there are excellent online tools for testing your patterns:
Regex 101
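For example, here is a rough sketch of the regex route (my own, not the answerer's code): one alternation naming the kinds of lexemes you care about, applied with re.findall, so x=2 comes apart without adding spaces by hand. The pattern is only a toy; a real C++ tokenizer needs many more alternatives.

import re

# identifiers | integers | two-char operators | single-char operators/punctuation
TOKEN_PATTERN = r'[A-Za-z_]\w*|\d+|==|[=+\-*/();{}]'

def tokenize(source):
    return re.findall(TOKEN_PATTERN, source)

print(tokenize("x=2"))         # ['x', '=', '2']
print(tokenize("function()"))  # ['function', '(', ')']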
The last option would be to convert your input into an AST, which stands for abstract syntax tree. This is the technique employed by C++ compilers such as Clang.
This last one is a genuinely hard topic, so using it just to build a basic lexer will probably be very time-consuming, but it may fit your needs.
The usual approach is to scan the incoming text from left to right. At each character position, the lexical analyser selects the longest string which fits some pattern for a "lexeme", which is either a token or ignored input (whitespace and comments, for example). Then the scan continues at the next character.
Lexical patterns are often described using regular expressions, but the standard regular expression module re is not as much help as it could be for this procedure, because it does not have the facility of checking multiple regular expressions in parallel. (And neither does the possible future replacement, the regex module.) Or, more precisely, the library can check multiple expressions in parallel (using alternation syntax, (...|...|...)), but it lacks an interface which can report which of the alternatives was matched. [Note 1]. So it would be necessary to try every possible pattern one at a time and select whichever one turns out to have the longest match.
Note that the matches are always anchored at the current input point; the lexical analyser does not search for a matching pattern. Every input character becomes part of some lexeme, even if that lexeme is ignored, and lexemes do not overlap.
You can write such an analyser by hand for a simple language, but C++ is hardly a simple language. Hand-built lexical analysers most certainly exist, but all the ones I've seen are thousands of lines of not very readable code. So it's usually easier to build an analyzer automatically using software designed for that purpose. These have been around for a long time -- Lex was written almost 50 years ago, for example -- and if you are planning on writing more than one lexical analyser, you would be well advised to investigate some of the available tools.
Notes
The PCRE2 and Oniguruma regex libraries provide a "callout" feature which I believe could be used for this purpose. I haven't actually seen it used in lexical analysis, but it's a fairly recent addition, particularly for Oniguruma, and as far as I can see, the Python bindings for those two libraries do not wrap the callout feature. (Although, as usual with Python bindings to C libraries, documentation is almost non-existent, so I can't say for certain.)
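To make the scanning loop concrete, here is a small hand-written sketch of that left-to-right, longest-match approach (my own toy patterns; a real C++ lexer needs a far larger pattern set and is better built with a generator tool as suggested above):

import re

PATTERNS = [
    ('ID',     re.compile(r'[A-Za-z_]\w*')),
    ('NUMBER', re.compile(r'\d+(\.\d+)?')),
    ('OP',     re.compile(r'==|[=+\-*/();{}]')),
    ('SKIP',   re.compile(r'\s+')),          # whitespace is a lexeme too, just ignored
]

def tokenize(source):
    pos = 0
    while pos < len(source):
        best = None
        for name, pattern in PATTERNS:
            m = pattern.match(source, pos)   # anchored at the current position
            if m and (best is None or m.end() > best[1].end()):
                best = (name, m)             # keep the longest match
        if best is None:
            raise SyntaxError("unexpected character %r at %d" % (source[pos], pos))
        name, m = best
        if name != 'SKIP':
            yield name, m.group()
        pos = m.end()

print(list(tokenize("x = 2\nfunction()")))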

How do you robustly generate code, given a description of its exact behavior in another language?

I have been reverse engineering a specific black box equation that is part of a system I do not own (do not worry, it's white-hat), in which you can only measure the inputs (a large set of integers) and outputs (two integers).
This system can only be perfectly described as a program/function in which all the input integers are used. So far I can perfectly describe the behavior with a data structure of named "mathematical terms"; each named input integer lives in one of these terms, and each term has an ordering for the inputs that it owns. I also have a function that takes the model description and a set of named inputs and outputs two integers, so the mapping from lists of input names to program behavior lives in this function and in the model description in tandem.
I've been programming the reverse-engineering utility in Python, but ultimately I want to output a low-level Lua program that represents this function in a less abstract manner. When there were fewer terms in the model, it was simple to manually write a "transpiler" from this model (in Python) to Lua, but as the complexity grows it becomes painful to rewrite the code generator for new types of terms, especially in an ad-hoc manner.
From reading other questions about similar systems, it seems the last two steps of this process would be: generating an abstract syntax tree representing my desired program, and giving the AST to a Lua prettyprinter to generate the code. But I'm not sure whether there are useful abstractions that I'm unaware of that would help me generate a Lua AST, given my current description of the model.
What you're looking for is an abstract syntax tree, which can define the behavior of a program through a graph. Since each component of an abstract syntax tree is highly compartmentalized (e.g., "Add", "Number Constant", ...), it is extraordinarily efficient to translate an abstract syntax tree back into a high-level programming language such as Lua.
Abstract syntax trees are used in many compilers and transpilers, so you will not be digging long to find good examples.
CSharp.Lua does a similar thing to what you want; transpiling C# to Lua using a simple abstract syntax tree and a slightly less simple code generator.
Speedy Web Compiler contains an excellent implementation of a JavaScript code generator.
ESBuild also has a well-done implementation for JavaScript.
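As a hedged illustration of those last two steps, here is what a tiny hand-rolled AST plus Lua prettyprinter might look like in Python; the node classes and the example term are invented for the sketch and would need to mirror your actual model terms.

from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str        # '+', '*', ...
    left: object
    right: object

def to_lua(node):
    # Recursively render an AST node as Lua source text.
    if isinstance(node, Num):
        return str(node.value)
    if isinstance(node, Var):
        return node.name
    if isinstance(node, BinOp):
        return "(%s %s %s)" % (to_lua(node.left), node.op, to_lua(node.right))
    raise TypeError("unknown node: %r" % (node,))

# e.g. a made-up "term" that sums two named inputs and scales the result
ast = BinOp('*', Num(3), BinOp('+', Var('input_a'), Var('input_b')))
print("return " + to_lua(ast))   # return (3 * (input_a + input_b))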

library for transforming a node tree

I'd like to be able to express a general transformation of one tree into another without writing a bunch of repetitive spaghetti code. Are there any libraries to help with this problem? My target language is Python, but I'll look at other languages as long as it's feasible to port to Python.
Example: I'd like to transform this node tree: (please excuse the S-expressions)
(A (B) (C) (D))
Into this one:
(C (B) (D))
As long as the parent is A and the second ancestor is C, regardless of context (there may be more parents or ancestors). I'd like to express this transformation in a simple, concise, and re-usable way. Of course this example is very specific. Please try to address the general case.
Edit: RefactoringNG is the kind of thing I'm looking for, although it introduces an entirely new grammar to solve the problem, which I'd like to avoid. I'm still looking for more and/or better examples.
Background:
I'm able to convert Python and Cheetah (don't ask!) files into tokenized tree representations, and in turn convert those into lxml trees. I plan to then reorganize the trees and write out the results in order to implement automated refactoring. XSLT seems to be the standard tool for rewriting XML, but the syntax is terrible (in my opinion, obviously) and nobody at our shop would understand it.
I could write some functions which simply use the lxml methods (.xpath and such) to implement my refactorings, but I'm worried that I will wind up with a bunch of purpose-built spaghetti code which can't be re-used.
Let's try this in Python code. I've used strings for the leaves, but this will work with any objects.
def lift_middle_child(in_tree):
    (A, (B,), (C,), (D,)) = in_tree
    return (C, (B,), (D,))

print(lift_middle_child(('A', ('B',), ('C',), ('D',))))  # could use lists too
This sort of tree transformation is generally better performed in a functional style - if you create a bunch of these functions, you can explicitly compose them, or create a composition function to work with them in a point-free style.
Because you've used s-expressions, I assume you're comfortable representing trees as nested lists (or the equivalent - unless I'm mistaken, lxml nodes are iterable in that way). Obviously, this example relies on a known input structure, but your question implies that. You can write more flexible functions, and still compose them, as long as they have this uniform interface.
Here's the code in action: http://ideone.com/02Uv0i
Now, here's a function to reverse children, and using that and the above function, one to lift and reverse:
from functools import reduce  # needed in Python 3, where reduce moved out of the builtins

def compose2(a, b):  # might want to get this from a functional library
    return lambda *x: a(b(*x))

def compose(*funcs):  # compose(a,b,c) = a(b(c(x))) - you might want to reverse that
    return reduce(compose2, funcs)

def reverse_children(in_tree):
    return in_tree[0:1] + in_tree[1:][::-1]  # slightly cryptic, but works for anything subscriptable

lift_and_reverse = compose(reverse_children, lift_middle_child)  # rightmost function applied first - if you find this confusing, reverse the order in the compose function

print(lift_and_reverse(('A', ('B',), ('C',), ('D',))))
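Building on that, if you need the rewrite to fire anywhere inside a larger tree (the "regardless of context" part of the question), one sketch, assuming trees are nested tuples, is a bottom-up walker that applies a rule wherever its pattern fits and leaves everything else alone:

def rewrite_everywhere(rule, tree):
    if not isinstance(tree, tuple):
        return tree                      # leaf: nothing to do
    children = tuple(rewrite_everywhere(rule, child) for child in tree)
    try:
        return rule(children)            # rule matched this subtree
    except (ValueError, TypeError):
        return children                  # rule didn't fit this shape

# reuses lift_middle_child from above
print(rewrite_everywhere(lift_middle_child,
                         ('X', ('A', ('B',), ('C',), ('D',)), ('Y',))))
# -> ('X', ('C', ('B',), ('D',)), ('Y',))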
What you really want, IMHO, is a program transformation system, which allows you to parse and transform code using patterns expressed in the surface syntax of the source code (and even the target language) to express the rewrites directly.
You will find that even if you can get your hands on an XML representation of the Python tree, the effort to write an XSLT/XPath transformation is more than you expect; trees representing real code are messier than you'd expect, XSLT isn't that convenient a notation, and it cannot directly express common conditions on trees that you'd like to check (e.g., that two subtrees are the same). A final complication with XML: assume it has been transformed. How do you regenerate the source code syntax from which it came? You need some kind of prettyprinter.
A general problem regardless of how the code is represented is that without information about scopes and types (where you can get it), writing correct transformations is pretty hard. After all, if you are going to transform python into a language that uses different operators for string concat and arithmetic (unlike Java which uses "+" for both), you need to be able to decide which operator to generate. So you need type information to decide. Python is arguably typeless, but in practice most expressions involve variables which have only one type for their entire lifetime. So you'll also need flow analysis to compute types.
Our DMS Software Reengineering Toolkit has all of these capabilities (parsing, flow analysis, pattern matching/rewriting, prettyprinting), and robust parsers for many languages including Python. (While it has flow analysis capability instantiated for C, COBOL, Java, this is not instantiated for Python. But then, you said you wanted to do the transformation regardless of context).
To express your rewrite in DMS on Python, using syntax close to your example (which isn't Python?):
domain Python;
rule revise_arguments(f:IDENTIFIER,A:expression,B:expression,
C:expression,D:expression):primary->primary
= " \f(\A,(\B),(\C),(\D)) "
-> " \f(\C,(\B),(\D)) ";
The notation above is the DMS rule-rewriting language (RSL). The "..." are metaquotes that separate Python syntax (inside those quotes, DMS knows it is Python because of the domain notation declaration) from the DMS RSL language. The \n inside the meta quote refers to the syntax variable placeholders of the named nonterminal type defined in the rule parameter list. Yes, (...) inside the metaquotes are Python ( ) ... they exist in the syntax trees as far as DMS is concerned, because they, like the rest of the language, are just syntax.
The above rule looks a bit odd because I'm trying to follow your example as closely as possible, and from an expression-language point of view, your example is odd precisely because it does have unusual parentheses.
With this rule, DMS could parse Python (using its Python parser) like
foobar(2+3,(x-y),(p),(baz()))
build an AST, match the (parsed-to-AST) rule against that AST, rewrite it to another AST corresponding to:
foobar(p,(x-y),(baz()))
and then prettyprint the surface syntax (valid) python back out.
If you intended your example to be a transformation on LISP code, you'd need a LISP grammar for DMS (not hard to build, but we don't have much call for this), and write the corresponding surface syntax:
domain Lisp;
rule revise_form(A:form,B:form, C:form, D:form):form->form
= " (\A,(\B),(\C),(\D)) "
-> " (\C,(\B),(\D)) ";
You can get a better feel for this by looking at Algebra as a DMS domain.
If your goal is to implement all this in Python... I don't have much help.
DMS is a pretty big system, and it would be a lot of effort to replicate.

Partial evaluation with pyparsing

I need to be able to take a formula that uses the OpenDocument formula syntax, parse it into syntax that Python can understand without evaluating the variables, and then be able to evaluate the formula many times with changing values for the variables.
Formulas can be user input, so pyparsing allows me to both effectively handle the formula syntax, and clean user input. There are a number of good examples of pyparsing available, but all the mathematical ones seem to assume that one evaluates everything in the current scope immediately.
For context, I am working with a model of the industrial economy (life cycle assessment, or LCA), where these formulas represent the amount of material or energy exchanged between processes. The variable amount can be a function of several parameters, such as geographical location. The chain of formula and variable references is stored in a directed acyclic graph, so that formulas can always be simply evaluated. Formulas are stored as strings in a database.
My questions are:
1) Is it possible to parse a formula such that the parsed evaluation can also be stored in the database (as a string to be evaled, or something else)?
2) Are there alternatives to this approach? Bear in mind that the ideal solution is to parse/write once, and read many times. For example, partially parsing the formula and then using the ast module, although I don't know how this could work with database storage.
3) Any examples of a project or library similar to this that I could look over? I am not a programmer, just a student trying to finish his thesis while making an open-source LCA software model in my spare time.
4) Is this approach too slow? I would like to be able to do substantial Monte Carlo runs, where each run could involve tens of thousands of formula evaluations (it is a big database).
1) Yes, it is possible to pickle the results from parsing your expression, and save that to a database. Then you can just fetch and unpickle the expression, rather than reparse the original again.
2) You can do a quick-and-dirty pass at this just using the compile and eval built-ins, as in the following interactive session:
>>> y = compile("m*x+b","","eval")
>>> m = 100
>>> x = 5
>>> b = 1
>>> eval(y)
501
Of course, this has the security pitfalls of any eval- or exec-based implementation, in that untrusted or malicious source strings can embed harmful system calls. But if this is your thesis and entirely within your scope of control, just don't do anything foolish.
3) You can find an online example of parsing an expression into an "evaluatable" data structure at the pyparsing wiki's Examples page. Check out simpleBool.py and evalArith.py especially. If you're feeling flush, order a back issue of the May 2008 issue of Python magazine, which has my article "Writing a Simple Interpreter/Compiler with Pyparsing", with a more detailed description of the methods used, plus a description of how pickling and unpickling the parsed results works.
4) The slow part will be the parsing, so you are on the right track in preserving these results in some intermediate and repeatably-evaluatable form. The eval part should be fairly snappy. The second slow part will be in fetching these pickled structures from your database. During your MC run, I would package a single function that takes the selection parameters for an expression, fetches from the database, and unpickles and returns the evaluatable expression. Then once you have this working, use a memoize decorator to cache these query-results pairs, so that any given expression only needs to be fetched/unpickled once.
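For instance, a minimal sketch of that fetch-once-then-cache idea, using functools.lru_cache as the memoize decorator and a plain dict standing in for the database (both the table layout and the stored-expression format are placeholders):

import functools

# Stand-in for the real table of formula strings (hypothetical).
FORMULA_DB = {1: "m*x + b", 2: "a**2 + b**2"}

@functools.lru_cache(maxsize=None)
def get_compiled(formula_id):
    # Fetch and compile a formula once; repeat calls are served from the cache.
    source = FORMULA_DB[formula_id]          # replace with the real database query / unpickling
    return compile(source, "<formula>", "eval")

# Each Monte Carlo iteration then only pays for eval, not for fetching/parsing:
code = get_compiled(1)
print(eval(code, {}, {"m": 100, "x": 5, "b": 1}))   # 501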
Good luck with your thesis!

Building an Inference Engine in Python

I am seeking direction and attempting to label this problem:
I am attempting to build a simple inference engine (is there a better name?) in Python which will take a string and -
1 - create a list of tokens by simply creating a list of white space separated values
2 - categorise these tokens, using regular expressions
3 - Use a higher level set of rules to make decisions based on the categorisations
Example:
"90001" - one token, matches the zipcode regex, a rule exists for a string containing just a zipcode causes a certain behaviour to occur
"30 + 14" - three tokens, regexs for numerical value and mathematical operators match, a rule exists for a numerical value followed by a mathematical operator followed by another numerical value causes a certain behaviour to occur
I'm struggling with how best to do step #3, the higher level set of rules. I'm sure that some framework must exist. Any ideas? Also, how would you characterise this problem? Rule based system, expert system, inference engine, something else?
Thanks!
I'm very surprised that step #3 is the one giving you trouble...
Assuming you can properly label/categorize each token (and that, prior to categorization, you can find the proper tokens, as there may be many ambiguous cases...), the "Step #3" problem seems like one that could easily be tackled with a context-free grammar where each of the desired actions (such as ZIP code lookup or mathematical expression calculation) would be a symbol whose production rule is made of the possible token categories. To illustrate this in BNF notation, we could have something like
<SimpleMathOperation> ::= <NumericalValue><Operator><NumericalValue>
Maybe your concern is that when things get complicated, it will become difficult to express the whole requirement in terms of non-conflicting grammar rules. Or maybe your concern is that rules could be added dynamically, forcing the grammar "compilation" logic to be integrated with the program? Whatever the concern, I think this 3rd step will be comparatively trivial.
On the other hand, unless the various categories (and the underlying input text) are such that they can be described with a regular language as well (as you seem to hint in the question), a text parser and classifier (Steps #1 and #2) is typically a less-than-trivial affair.
Some example Python libraries that simplify writing and evaluating grammars:
pijnu
pyparsing
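As a rough pyparsing sketch of Step #3 (my own; the mini-grammar and the actions are invented to match the two examples in the question), each rule is a production with a parse action that triggers the desired behaviour:

from pyparsing import Regex, Word, nums, oneOf, StringEnd

zipcode = Regex(r"\d{5}")
number = Word(nums)
operator = oneOf("+ - * /")

zip_rule = (zipcode + StringEnd()).setParseAction(
    lambda t: "look up ZIP %s" % t[0])
math_rule = (number + operator + number + StringEnd()).setParseAction(
    lambda t: "compute %s %s %s" % (t[0], t[1], t[2]))

grammar = math_rule | zip_rule

print(grammar.parseString("90001")[0])     # look up ZIP 90001
print(grammar.parseString("30 + 14")[0])   # compute 30 + 14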
It looks like you are searching for a "grammar inference" (grammar induction) library.
