I need to be able to take a formula that uses the OpenDocument formula syntax, parse it into a form that Python can understand without evaluating the variables, and then evaluate the formula many times with changing values for the variables.
Formulas can be user input, so pyparsing lets me both handle the formula syntax effectively and sanitize the user input. There are a number of good pyparsing examples available, but all the mathematical ones seem to assume that everything is evaluated immediately in the current scope.
For context, I am working with a model of the industrial economy (life cycle assessment, or LCA), where these formulas represent the amounts of material or energy exchanged between processes. This amount can be a function of several parameters, such as geographical location. The chain of formula and variable references is stored in a directed acyclic graph, so that formulas can always be evaluated straightforwardly. Formulas are stored as strings in a database.
My questions are:
Is it possible to parse a formula once, such that the parsed result can also be stored in the database (as a string to be evaled, or something else)?
Are there alternatives to this approach? Bear in mind that the ideal solution is to parse/write once, and read many times. For example, partially parsing the formula, and then using the ast module, although I don't know how this could work with database storage.
Any examples of a project or library similar to this that I could look over? I am not a programmer, just a student trying to finish his thesis while making an open-source LCA software model in my spare time.
Is this approach too slow? I would like to be able to do substantial Monte Carlo runs, where each run could involve tens of thousands of formula evaluations (it is a big database).
1) Yes, it is possible to pickle the results from parsing your expression, and save that to a database. Then you can just fetch and unpickle the expression, rather than reparse the original again.
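A rough sketch of what that might look like (the table and column names here are just for illustration):

import pickle, sqlite3

conn = sqlite3.connect("formulas.db")
conn.execute("CREATE TABLE IF NOT EXISTS formulas (name TEXT PRIMARY KEY, parsed BLOB)")

def store_parsed(name, parsed_expr):
    # serialize whatever evaluatable object your parse produced
    conn.execute("INSERT OR REPLACE INTO formulas VALUES (?, ?)",
                 (name, pickle.dumps(parsed_expr)))
    conn.commit()

def load_parsed(name):
    # fetch and unpickle instead of reparsing the original formula string
    row = conn.execute("SELECT parsed FROM formulas WHERE name = ?", (name,)).fetchone()
    return pickle.loads(row[0])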
2) You can do a quick-and-dirty pass at this just using the compile and eval built-ins, as in the following interactive session:
>>> y = compile("m*x+b","","eval")
>>> m = 100
>>> x = 5
>>> b = 1
>>> eval(y)
501
Of course, this has the security pitfalls of any eval- or exec-based implementation, in that untrusted or malicious source strings can embed harmful system calls. But if this is your thesis and entirely within your scope of control, just don't do anything foolish.
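As a rough sketch of the repeated-evaluation part, you can pass the changing variable values to eval in an explicit namespace dict instead of relying on the surrounding scope (blanking out __builtins__ narrows what the expression can reach, though it is not a real sandbox):

y = compile("m*x+b", "<formula>", "eval")
for x in range(3):
    # only the names supplied here are visible to the expression
    print(eval(y, {"__builtins__": {}}, {"m": 100, "x": x, "b": 1}))
# prints 1, 101, 201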
3) You can find an online example of parsing an expression into an "evaluatable" data structure on the pyparsing wiki's Examples page. Check out simpleBool.py and evalArith.py especially. If you're feeling flush, order a back issue of the May 2008 issue of Python magazine, which has my article "Writing a Simple Interpreter/Compiler with Pyparsing", with a more detailed description of the methods used, plus a description of how pickling and unpickling the parsed results works.
4) The slow part will be the parsing, so you are on the right track in preserving the results in some intermediate, repeatably evaluatable form. The eval part should be fairly snappy. The second slow part will be fetching these pickled structures from your database. During your MC run, I would write a single function that takes the selection parameters for an expression, fetches it from the database, and unpickles and returns the evaluatable expression. Then, once you have this working, use a memoize decorator to cache these query/result pairs, so that any given expression only needs to be fetched and unpickled once.
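A sketch of that last idea, using functools.lru_cache as the memoize decorator (the query is a placeholder, reusing the conn and table from the earlier snippet):

from functools import lru_cache
import pickle

@lru_cache(maxsize=None)
def fetch_expression(formula_id):
    # the first call hits the database; repeat calls during the MC run are cached
    row = conn.execute("SELECT parsed FROM formulas WHERE name = ?",
                       (formula_id,)).fetchone()
    return pickle.loads(row[0])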
Good luck with your thesis!
Related
I am implementing an existing scripting language in part as a toy project and in part so that I can write my own implementation of the program that uses the language. One of the issues I'm running into is that I have a few constructs that overlap in terms of specification but are more clear when used:
Variables - r'[A-Za-z0-9_]+' # Yes, '456' is a valid variable name
Numbers - r'-?[0-9]+(\.[0-9]+)?'
Macros - r'\#[A-Za-z0-9_]+'
Field Reference - r'(this\.)?([A-Za-z]+\.)*[A-Za-z]+'
Tag reference - r'[A-Za-z0-9_]+\.[A-Za-z0-9_]*\??'
This mostly works, but, for example, "456" could be a number or a variable, and "34.567" could be a number or a tag reference (the documentation for the scripting language says that it's a bad idea to start identifiers with numbers, but doesn't outright forbid it). Is there a good way to handle the potential ambiguity of the tokens? Currently, I'm tokenizing the former as a variable and the latter as a number, and handling it later in the parser, but it feels very clumsy.
Is there any need for the tokenizer to distinguish between variables, numbers, field references and tag references? Presumably, the parser will be able to decide which of those categories a particular token falls into, by consulting its symbol table of declared variables and possibly by considering the context in which the token was used. If that's the case, then you can just return a single token for all four cases, which will simplify your lexer and probably your grammar.
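As a rough sketch of that idea (the WORD token name and the merged regex are mine, built from your patterns), the lexer only separates macros from everything word-like and leaves the finer classification to the parser; corner cases such as a trailing dot are deliberately ignored here:

import re

TOKEN_SPEC = [
    ("MACRO", r"\#[A-Za-z0-9_]+"),
    # one coarse token for variables, numbers, field references and tag references
    ("WORD",  r"-?[A-Za-z0-9_]+(\.[A-Za-z0-9_]+)*\??"),
    ("SKIP",  r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("this.foo 456 34.567 #mac")))
# [('WORD', 'this.foo'), ('WORD', '456'), ('WORD', '34.567'), ('MACRO', '#mac')]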
There's a general principle of parser design, which is never sufficiently emphasised, so I'll put it in bold here:
Every parser component should do the absolute minimum amount of work necessary to distinguish between correct inputs.
In other words, if the only possibilities are a unique correct parse and an input error, and it's at all difficult to decide at that point which applies, then just pass the decision on to the next phases, where more information is available. Only do the work necessary to distinguish between two or more different correct inputs.
This applies, for example, to trying to do type-checking in the parser. That's a losing proposition; there isn't enough information to do it correctly until semantic analysis is complete and you know what all of the identifiers refer to. More importantly, it adds no benefit to the parser (or the lexer) because it does not affect how a correct input is parsed; all it does is let you identify certain (not all) incorrect inputs. By the above principle, you shouldn't try.
This principle comes up over and over again in parsing. There is always the temptation to try to make error detection "more precise" too early in the parse. Resist! Do error detection only when you have enough information to do it reliably. You'll have to do it at that point anyway, so you're not saving anything by trying to do some of it earlier. Early detection might shave a few microseconds off of a failed parse, but the speed of parsing incorrect inputs is not very important. Always optimise for correct inputs.
This also applies to writing grammars for syntaxes which are not easy to shoehorn precisely into a one-token-lookahead grammar. It's OK to let an incorrect input sneak through the parse and then detect it during semantic analysis. For example, you could try to detect whether built-in function calls have the correct number of arguments. But why bother? Letting a call with too many or too few arguments go through to semantic analysis does not create any ambiguities. There are lots of other examples.
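For instance, a later pass over the already-parsed tree can check argument counts trivially; a minimal sketch, where the node shape and the builtin arity table are invented:

BUILTIN_ARITY = {"abs": 1, "pow": 2}

def check_call(node, errors):
    # node = ("call", name, [args]) as produced by a permissive parser
    _, name, args = node
    expected = BUILTIN_ARITY.get(name)
    if expected is not None and expected != len(args):
        errors.append("%s expects %d argument(s), got %d" % (name, expected, len(args)))

errors = []
check_call(("call", "pow", [("num", 2)]), errors)
print(errors)   # ['pow expects 2 argument(s), got 1']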
Other big benefits of letting errors trickle down to the semantic analysis are that it's much easier to generate accurate error messages, which are useful for the end user, and that it's much easier to do error recovery, so you can continue processing the input and provide multiple errors and warnings in a single run, another feature your users will appreciate.
There are exceptions to every guideline, so I'm not saying this is an absolute rule. In COBOL, for example, some operators have different parsing precedences depending on their datatype. (No sensible language designer would commit that barbarity today, I hope, but you do need to take it into account for legacy parsers.) You can only pass a decision down the line if it doesn't create ambiguities between correct inputs. But you should always try to keep this guideline in mind.
Which is preferred for mapping values: reusing mapping functions OR building reference tables to do lookups?
This is a very general high-level question, which, I believe is mostly language-independent. For the example, I will use SAS, but I also would like to know how people feel about this in R and python.
I am on a team that likes to use mapping functions to take an input and generate some output, usually in a one-to-one manner. For example, here is some modified SAS code that shows what my team typically does:
proc fcmp outlib=work.funcs.maps;  /* outlib location is just an example; it is
                                      needed so other programs can call the function */
    function mapFunction(input $);  /* character argument, numeric return value */
        output = .;
        if input = "Day zero" then output = 0;
        else if input = "Day one" then output = 1;
        else if input = "Day two" then output = 2;
        return(output);
    endsub;
run;
The team would then use mapFunction in multiple other programs when mapping needs to be done.
This goes against my instinct. My instinct would be to use the mapping function once to generate a set of key:value pairs or a dictionary or two-column table and thereafter use lookup/indexing/joining/merging operations to refer to the table.
I have a hard time explaining why this is my instinct, but it is. It feels better to have a reference table instead of calling a custom function repeatedly. However, I don't trust my feelings, and I'd like to hear from others to see if I can rationalize one technique over the other.
Can someone provide arguments/counter-arguments to support one technique or the other?
Thank you!
I've done a lot of conversions that involve re-mapping codes, so your situation may be a little different: conversions are one-off events, rather than ongoing transformations like overnight DW refreshes.
The default position is to use a table, as you suggest. It has a real benefit in that a) you can query the data to check for non-existent values in the map before you convert; and b) the responsibility goes back to the business to confirm that the code mappings are correct.
It gets trickier where the business rules for the conversion are in flux. For example, your first cut might provide a simple map, but then someone says "except for these codes, where you need to look up an extra value" and suddenly you need a function. At that point, you need to revisit the code base to find where the mapped table is used and update it to use a function. That can be a lesson learned, depending on the size of the code base.
I have used functions to wrap the use of a mapping table. If you don't know which code is going to be problematic in a month's time, this can be useful. The function just selects a value from the mapping table, as you would in a join. It will be slower to execute, but if you're optimizing programmer hours rather than execution time, that works. In any case, you get the data visibility of the code map as well as the flexibility for scope creep from the function. When the scope changes, just update the function.
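In Python terms, the same pattern is just a thin function wrapped around a plain mapping table (the names here are invented):

DAY_MAP = {"Day zero": 0, "Day one": 1, "Day two": 2}

def map_day(value):
    # today a plain lookup; later, special-case rules can be added here
    # without touching any of the calling programs
    return DAY_MAP.get(value)   # None for unmapped codes, like SAS's missing '.'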
When you say 'multiple other programs', that sounds like a red flag. If you store the data in mapping tables, would the database in which they are stored be visible to those programs? That would be a big consideration.
What are your evaluation criteria?
Are you looking for:
program run time
developer coding time
usability
ease of maintenance
Different people prioritize different things. The one good thing about the table method is that it's generic, simple and makes sense in almost any language. But...that does not mean it's the best for that language. How each language implements things under the hood will affect the criteria above.
In SAS, I would rarely use FCMP, but formats are one of the fastest lookup methods there, and they really just have a table behind them that can easily be updated, so that would be my choice in SAS. Formats are slightly slower than the HASH and SET+KEY methods, but they're easier to use and maintain, and that ends up saving more time in the long run. And I prioritize human time over computer time for 99% of my projects.
If you search on lexjansen.com you'll find many papers on the different ways to do lookups and comparisons of their run times. https://www.lexjansen.com/phuse/2007/cs/CS06.pdf
I have converted my Verilog file to an AST (abstract syntax tree). The AST, along with external constraints such as the required output of the circuit, is to be given to a Z3/SMT solver, which should give us the inputs for the circuit. But I have no idea how I can give the AST as input to the Z3/SMT solver.
Thanks in advance.
Such a task typically amounts to walking over your AST, symbolically executing it, and generating a trace for the SMT solver. This is easier said than done, unfortunately: there are many facets to doing this translation, and even when it is done fully, it is far from easy for a solver to verify the corresponding properties. For full Verilog, you'd essentially have to implement a Verilog simulator that can deal with symbolic values. While that can be a very large task, perhaps you can get away with a much smaller set of features if your inputs are "simple" enough. Without knowing anything about how your Verilog is structured, it's really hard to say anything.
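To give a feel for the symbolic-execution part, here is a minimal sketch (nothing Verilog-specific; the tuple-based AST shape is invented) that walks a tiny combinational expression tree, builds Z3 terms with the z3 Python bindings, constrains the output, and reads back satisfying inputs:

from z3 import Bool, And, Xor, Not, Solver, sat

def to_z3(node, inputs):
    # translate a nested-tuple AST into a Z3 boolean term
    if isinstance(node, str):                      # a primary input, e.g. 'a'
        return inputs.setdefault(node, Bool(node))
    op, args = node[0], node[1:]
    terms = [to_z3(a, inputs) for a in args]
    if op == "and":
        return And(*terms)
    if op == "xor":
        return Xor(terms[0], terms[1])
    if op == "not":
        return Not(terms[0])
    raise ValueError("unhandled op: " + op)

# circuit: out = (a AND b) XOR c; external constraint: out must be True
tree = ("xor", ("and", "a", "b"), "c")
inputs = {}
out = to_z3(tree, inputs)

s = Solver()
s.add(out == True)
if s.check() == sat:
    model = s.model()
    print({name: model[var] for name, var in inputs.items()})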
This paper, penned by the two main authors of Z3 (Nikolaj and Leonardo), provides a good survey of the approach. It's an excellent read with many useful references. Starting with that can at least give you an idea of what's involved.
I should add that verification of Verilog designs is a topic that has industrial applications, and there are vendor supported tools (not cheap!) to do verification at the Verilog level. The Jasper Gold tool from Cadence is one such example. Synopsys also has a similar tool.
It seems you are interested in test-case generation. That would correspond to writing a typical "cover" property, and reading off of the values to primary inputs that lead to the cover scenario in such a setting. Such properties are typically written in the SVA format, which is understood by such tools.
I have a module to test; the module includes a series of functions and simple classes.
I am wondering if there are any attempts (i.e., a package) to automatically:
1) Generate Python test code from the initial Python file containing the function definitions.
2) Have this generated code call the functions with random/parametric data as parameters.
It seems technically feasible using inspect and Python metaclasses, though usually limited to numerical-type functions (numpy arrays), because strings (e.g., URL inputs) would be impossible to generate (only parametrized...).
EDIT: By random, I obviously mean "parametric random".
Suppose we have
def f(x1, x2, x3)
For each xi of f:
if type(xi) = array1D ->
do these tests: empty array, zeros array, negative array (random), positive array (random), high values, low values, integer array, real-number array, ordered array, equally spaced array, ...
if type(xi) = int -> test zero, 1, 2, 3, 4, random values, negative values
Do people think such a project is possible using inspect and metaclasses (limited to numpy/numerical items)?
Suppose you have a very large library...; things could be done in the background.
You might be thinking of fuzz testing, where a bunch of garbage data is submitted to a function to see if anything makes it behave badly. It sounds like the Hypothesis library will let you generate different test cases based on some parameters.
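As a rough sketch of what that looks like (the normalize function under test is made up; the point is that Hypothesis generates the "parametric random" inputs, including edge cases such as zeros, negatives and very large values):

from hypothesis import given, strategies as st
import numpy as np

def normalize(a):
    # hypothetical function under test: shift an array so its minimum is 0
    return a - a.min()

@given(st.lists(st.floats(allow_nan=False, allow_infinity=False,
                          min_value=-1e9, max_value=1e9), min_size=1))
def test_normalize_is_nonnegative(values):
    result = normalize(np.array(values))
    assert (result >= 0).all()

Run it under pytest and Hypothesis will throw many generated lists at the property, shrinking any failure it finds down to a minimal example.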
I spent some time searching, and it seems this kind of project does not really exist (to my knowledge).
Technically, this would be a mix of packages (and issues):
Hypothesis: data generation for the inputs, and running the code to catch crashes/errors
(without the invariant part of Hypothesis).
Jedi: static analysis of the code / inference of the types.
Type inference is a difficult issue in Python in general, which makes implementing it hard:
If the type is a number or an array of numbers: boundaries exist and typical usage is clearly defined.
If the type is a string: inference is pretty difficult without human guessing.
The same goes for other types; context guessing is important.
I'd like to be able to express a general transformation of one tree into another without writing a bunch of repetitive spaghetti code. Are there any libraries to help with this problem? My target language is Python, but I'll look at other languages as long as it's feasible to port to Python.
Example: I'd like to transform this node tree: (please excuse the S-expressions)
(A (B) (C) (D))
Into this one:
(C (B) (D))
As long as the parent is A and the second ancestor is C, regardless of context (there may be more parents or ancestors). I'd like to express this transformation in a simple, concise, and re-usable way. Of course this example is very specific. Please try to address the general case.
Edit: RefactoringNG is the kind of thing I'm looking for, although it introduces an entirely new grammar to solve the problem, which I'd like to avoid. I'm still looking for more and/or better examples.
Background:
I'm able to convert Python and Cheetah (don't ask!) files into tokenized tree representations, and in turn convert those into lxml trees. I plan to then re-organize the tree and write out the results in order to implement automated refactoring. XSLT seems to be the standard tool for rewriting XML, but the syntax is terrible (in my opinion, obviously) and nobody at our shop would understand it.
I could write some functions which simply use the lxml methods (.xpath and such) to implement my refactorings, but I'm worried that I will wind up with a bunch of purpose-built spaghetti code which can't be re-used.
Let's try this in Python code. I've used strings for the leaves, but this will work with any objects.
def lift_middle_child(in_tree):
    (A, (B,), (C,), (D,)) = in_tree
    return (C, (B,), (D,))

print(lift_middle_child(('A', ('B',), ('C',), ('D',))))  # could use lists too
This sort of tree transformation is generally better performed in a functional style - if you create a bunch of these functions, you can explicitly compose them, or create a composition function to work with them in a point-free style.
Because you've used s-expressions, I assume you're comfortable representing trees as nested lists (or the equivalent - unless I'm mistaken, lxml nodes are iterable in that way). Obviously, this example relies on a known input structure, but your question implies that. You can write more flexible functions, and still compose them, as long as they have this uniform interface.
Here's the code in action: http://ideone.com/02Uv0i
Now, here's a function to reverse children, and using that and the above function, one to lift and reverse:
from functools import reduce  # reduce is no longer a builtin in Python 3

def compose2(a, b):  # might want to get this from a functional library
    return lambda *x: a(b(*x))

def compose(*funcs):  # compose(a,b,c) = a(b(c(x))) - you might want to reverse that
    return reduce(compose2, funcs)

def reverse_children(in_tree):
    return in_tree[0:1] + in_tree[1:][::-1]  # slightly cryptic, but works for anything subscriptable

lift_and_reverse = compose(reverse_children, lift_middle_child)  # right-most function applied first - if you find this confusing, reverse the order in the compose function

print(lift_and_reverse(('A', ('B',), ('C',), ('D',))))
What you really want IMHO is a program transformation system, which allows you to parse and transform code using patterns expressed in the surface syntax of the source code (and even the target language) to express the rewrites directly.
You will find that even if you can get your hands on an XML representation of the Python tree, the effort to write an XSLT/XPath transformation is more than you expect; trees representing real code are messier than you'd expect, XSLT isn't that convenient a notation, and it cannot directly express common conditions on trees that you'd like to check (e.g., that two subtrees are the same). A final complication with XML: assume it has been transformed. How do you regenerate the source code syntax from which it came? You need some kind of prettyprinter.
A general problem, regardless of how the code is represented, is that without information about scopes and types (where you can get it), writing correct transformations is pretty hard. After all, if you are going to transform Python into a language that uses different operators for string concatenation and arithmetic (unlike Java, which uses "+" for both), you need to be able to decide which operator to generate. So you need type information to decide. Python is arguably typeless, but in practice most expressions involve variables which have only one type for their entire lifetime. So you'll also need flow analysis to compute types.
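You can see the problem directly with Python's own ast module: the tree for "a + b" is identical whether a and b are strings or numbers, so the syntax alone cannot tell you which target operator to emit.

import ast

# identical BinOp/Add nodes regardless of the operand types (output abbreviated)
print(ast.dump(ast.parse("a + b", mode="eval")))
# Expression(body=BinOp(left=Name(id='a', ...), op=Add(), right=Name(id='b', ...)))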
Our DMS Software Reengineering Toolkit has all of these capabilities (parsing, flow analysis, pattern matching/rewriting, prettyprinting), and robust parsers for many languages including Python. (While it has flow analysis capability instantiated for C, COBOL, Java, this is not instantiated for Python. But then, you said you wanted to do the transformation regardless of context).
To express your rewrite in DMS on Python syntax close to your example (which isn't Python?):
domain Python;
rule revise_arguments(f:IDENTIFIER,A:expression,B:expression,
C:expression,D:expression):primary->primary
= " \f(\A,(\B),(\C),(\D)) "
-> " \f(\C,(\B),(\D)) ";
The notation above is the DMS rule-rewriting language (RSL). The "..." are metaquotes that separate Python syntax (inside those quotes, DMS knows it is Python because of the domain notation declaration) from the DMS RSL language. The \f, \A, etc. inside the metaquotes refer to the syntax-variable placeholders of the named nonterminal types defined in the rule parameter list. Yes, the (...) inside the metaquotes are Python ( )... they exist in the syntax trees as far as DMS is concerned, because they, like the rest of the language, are just syntax.
The above rule looks a bit odd because I'm trying to follow your example as closely as possible, and from an expression-language point of view, your example is odd precisely because it does have unusual parentheses.
With this rule, DMS could parse Python (using its Python parser) like
foobar(2+3,(x-y),(p),(baz()))
build an AST, match the (parsed-to-AST) rule against that AST, rewrite it to another AST corresponding to:
foobar(p,(x-y),(baz()))
and then prettyprint the surface syntax (valid) python back out.
If you intended your example to be a transformation on LISP code, you'd need a LISP grammar for DMS (not hard to build, but we don't have much call for this), and write the corresponding surface syntax:
domain Lisp;
rule revise_form(A:form,B:form, C:form, D:form):form->form
= " (\A,(\B),(\C),(\D)) "
-> " (\C,(\B),(\D)) ";
You can get a better feel for this by looking at Algebra as a DMS domain.
If your goal is to implement all this in Python... I don't have much help. DMS is a pretty big system, and it would be a lot of effort to replicate.