How to convert float to Decimal while using eval? - python

I'm new to programming so I thought I'd ask here for help.
So when I use:
eval('12.5 + 3.2'),
it converts 12.5 and 3.2 into floats.
But I want them to be converted into the Decimal datatype.
I can use:
from decimal import Decimal
Decimal(12.5) + Decimal(3.2)
But I can't do that in my program as I'm accepting user input.
I've found a solution but it uses regular expressions, which I'm not familiar with right now (and I can't find it again for some reason).
It would be great if someone could help me out. Thanks!

UPDATE: apparently the official docs have a recipe that does exactly what you're looking for. From https://docs.python.org/3/library/tokenize.html#examples:
from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
from io import BytesIO

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print(+21.3e-5*-.1234/81.7)'
    >>> decistmt(s)
    "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))"

    The format of the exponent is inherited from the platform C library.
    Known cases are "e-007" (Windows) and "e-07" (not Windows). Since
    we're only showing 12 digits, and the 13th isn't close to 5, the
    rest of the output should be platform-independent.

    >>> exec(s)  #doctest: +ELLIPSIS
    -3.21716034272e-0...7

    Output from calculations with Decimal should be identical across all
    platforms.

    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7
    """
    result = []
    g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result).decode('utf-8')
Which you can then use like so:
from decimal import Decimal
s = "12.5 + 3.2 + 1.0000000000000001 + (1.0 if 2.0 else 3.0)"
s = decistmt(s)
print(s)
print(eval(s))
Result:
Decimal ('12.5')+Decimal ('3.2')+Decimal ('1.0000000000000001')+(Decimal ('1.0')if Decimal ('2.0')else Decimal ('3.0'))
17.7000000000000001
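If you want to package this up for reuse, a minimal sketch (the decimal_eval name is mine, not from the docs) combines decistmt with eval; the usual caveat applies that eval on untrusted input is unsafe:
from decimal import Decimal

def decimal_eval(expr):
    # Rewrite float literals into Decimal('...') calls, then evaluate.
    return eval(decistmt(expr), {'Decimal': Decimal})

print(decimal_eval('12.5 + 3.2'))  # 15.7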
Feel free to skip the rest of this answer, which is now only of interest to historians of half-correct solutions.
As far as I know, there's no easy way to "hook into" eval in order to change how it interprets float objects.
But if we use the ast module to convert your string into an abstract syntax tree before evaling it, then we can manipulate the tree to replace the floats with Decimal calls.
import ast
from decimal import Decimal

def construct_decimal_node(value):
    return ast.Call(
        func=ast.Name(id="Decimal", ctx=ast.Load()),
        args=[value],
        keywords=[]
    )

class FloatLiteralReplacer(ast.NodeTransformer):
    def visit_Num(self, node):
        return construct_decimal_node(node)
s = '12.5 + 3.2'
node = ast.parse(s, mode="eval")
node = FloatLiteralReplacer().visit(node)
ast.fix_missing_locations(node)  # fill in line/column info for the nodes we created
code = compile(node, filename="", mode="eval")
result = eval(code)
print("The type of the result of this expression is:", type(result))
print("The result of this expression is:", result)
Result:
The type of the result of this expression is: <class 'decimal.Decimal'>
The result of this expression is: 15.70000000000000017763568394
As you can see, the result is identical to what you would have gotten if you had calculated Decimal(12.5) + Decimal(3.2) directly.
But perhaps you're thinking "Why isn't the result 15.7?". This is because Decimal(3.2) is not exactly identical to 3.2. It's actually equal to 3.20000000000000017763568394002504646778106689453125. This is a hazard when it comes to initializing decimals using float objects -- the inaccuracy is already present. Better to use strings to create decimals, e.g. Decimal("3.2").
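You can see the difference directly in the REPL:
from decimal import Decimal

print(Decimal(3.2))    # 3.20000000000000017763568394002504646778106689453125
print(Decimal("3.2"))  # 3.2
print(Decimal(3.2) == Decimal("3.2"))  # False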
Maybe you're now thinking "Ok, so how do I turn 12.5 + 3.2 into Decimal("12.5") + Decimal("3.2")?". The quickest approach would be to modify construct_decimal_node so the Call's args is an ast.Str rather than an ast.Num:
import ast
from decimal import Decimal

def construct_decimal_node(value):
    return ast.Call(
        func=ast.Name(id="Decimal", ctx=ast.Load()),
        args=[ast.Str(str(value.n))],
        keywords=[]
    )

class FloatLiteralReplacer(ast.NodeTransformer):
    def visit_Num(self, node):
        return construct_decimal_node(node)
s = '12.5 + 3.2'
node = ast.parse(s, mode="eval")
node = FloatLiteralReplacer().visit(node)
ast.fix_missing_locations(node)  # fill in line/column info for the nodes we created
code = compile(node, filename="", mode="eval")
result = eval(code)
print("The type of the result of this expression is:", type(result))
print("The result of this expression is:", result)
Result:
The type of the result of this expression is: <class 'decimal.Decimal'>
The result of this expression is: 15.7
But take care: while I expect this approach to return good results most of the time, there is a corner case where it returns surprising results. In particular, when the expression contains a float f such that float(str(f)) != f. In other words, when the printed representation of the float lacks the precision necessary to represent the float exactly.
For example, if you changed s in the above code to "1.0000000000000001 + 0", the result would be 1.0. This is incorrect, since the result of Decimal("1.0000000000000001") + Decimal("0") is 1.0000000000000001.
I'm not sure how you could prevent this problem... By the time ast.parse has finished executing, the float literal has already been converted into a float object, and there's no obvious way to retrieve the string that was used to create it. Perhaps you could extract it from the expression string, but you'd basically have to reinvent Python's parser to do that.
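You can verify the corner case yourself: the literal below rounds to 1.0 the moment Python parses it, so no string-based trick applied after parsing can recover the original digits:
print(float('1.0000000000000001') == 1.0)  # True: the literal is rounded during parsing
print(str(1.0000000000000001))             # '1.0' -- the original digits are gone
This is exactly why the tokenize recipe at the top is the better fix: it rewrites the literal's text before Python ever converts it to a float object.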

Related

How to exactly print a big number of decimal object in python3?

I want to print a big Decimal number in Python 3.6.
import decimal
a = decimal.Decimal('0.0')
for idx in range(10):
    a += decimal.Decimal('1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111')
    print(a)
The execution result I want is below.
1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111
2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222.22222222222222222222
3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333.33333333333333333333
4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444.44444444444444444444
5555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555.55555555555555555555
6666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666.66666666666666666666
7777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777.77777777777777777777
8888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888.88888888888888888888
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999.99999999999999999999
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111110
But the actual execution result is below.
1.111111111111111111111111111E+99
2.222222222222222222222222222E+99
3.333333333333333333333333333E+99
4.444444444444444444444444444E+99
5.555555555555555555555555555E+99
6.666666666666666666666666666E+99
7.777777777777777777777777777E+99
8.888888888888888888888888888E+99
9.999999999999999999999999999E+99
1.111111111111111111111111111E+100
How can I get the results I want?
You can use string formatting, but before that you have to set the precision (the default for Decimal is 28 significant digits):
import decimal

a = decimal.Decimal('0.0')
with decimal.localcontext() as ctx:
    ctx.prec = 120
    for idx in range(10):
        a += decimal.Decimal('1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111')
        print('{:.20f}'.format(a))
Prints:
1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111
2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222.22222222222222222222
3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333.33333333333333333333
4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444.44444444444444444444
5555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555.55555555555555555555
6666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666.66666666666666666666
7777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777.77777777777777777777
8888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888.88888888888888888888
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999.99999999999999999999
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111110
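If you'd rather not use a with block, you can set the precision on the thread's current context instead; this is standard decimal API, and it affects all Decimal arithmetic that follows:
import decimal

decimal.getcontext().prec = 120  # enough room for 101 integer digits plus 20 decimal places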

Python:16 bit one's complement addition implementation

I implemented one's complement addition of 16-bit integers in Python; however, I am trying to see if there is a better way to do it.
# This function returns a string of the bits (exactly 16 bits)
# for the number (in base 10) passed to it
def get_bits(some_num):
    binar = bin(some_num)[2::]
    zeroes = 16 - len(binar)
    padding = zeroes * "0"
    binar = padding + binar
    return binar

# This function adds the numbers, and handles the carry over
# from the most significant bit
def add_bits(num1, num2):
    result = bin(int(num1, 2) + int(num2, 2))[2::]
    # There is no carryover
    if len(result) <= 16:
        result = get_bits(int(result, 2))
    # There is carryover
    else:
        result = result[1::]
        one = '0000000000000001'
        result = bin(int(result, 2) + int(one, 2))[2::]
        result = get_bits(int(result, 2))
    return result
And now an example of running it would be:
print add_bits("1010001111101001", "1000000110110101")
returns:
0010010110011111
Is what I wrote safe as far as results go (note I didn't do any negation here since that part is trivial; I am more interested in the intermediate steps)? Is there a better, more Pythonic way to do it?
Thanks for any help.
Converting back and forth between string and ints to do math is inefficient. Do the math in integers and use formatting to display binary:
MOD = 1 << 16

def ones_comp_add16(num1, num2):
    result = num1 + num2
    return result if result < MOD else (result + 1) % MOD

n1 = 0b1010001111101001
n2 = 0b1000000110110101
result = ones_comp_add16(n1, n2)
print('''\
  {:016b}
+ {:016b}
------------------
  {:016b}'''.format(n1, n2, result))
Output:
  1010001111101001
+ 1000000110110101
------------------
  0010010110011111
Converting back and forth between numbers, lists of one-bit strings, and strings probably doesn't feel like a very Pythonic way to get started.
More specifically, converting an int to a sequence of bits by using bin(i)[2:] is pretty hacky. It may be worth doing anyway (e.g., because it's more concise or more efficient than doing it numerically), but even if it is, it would be better to wrap it in a function named for what it does (and maybe even add a comment explaining why you did it that way).
You've also got unnecessarily complexifying code in there. For example, to do the carry, you do this:
one = '0000000000000001'
result = bin(int(result,2) + int(one,2))[2::]
But you know that int(one,2) is just the number 1, unless you've screwed up, so why not just use 1, which is shorter, more readable and obvious, and removes any chance of screwing up?
And you're not following PEP 8 style.
So, sticking with your basic design of "use a string for the bits, use only the basic string operations that are unchanged from Python 1.5 through 3.5 instead of format, and do the basic addition on integers instead of on the bits", I'd write it something like this:
def to_bits(n):
    return bin(n)[2:]

def from_bits(b):
    return int(b, 2)

def pad_bits(b, length=16):
    return ("0" * length + b)[-length:]

def add_bits(num1, num2):
    result = to_bits(from_bits(num1) + from_bits(num2))
    if len(result) <= 16:  # no carry
        return pad_bits(result)
    return pad_bits(to_bits(from_bits(result[1:]) + 1))
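For example, with the numbers from the question:
>>> add_bits('1010001111101001', '1000000110110101')
'0010010110011111'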
But an even better solution would be to abstract out the string representation completely. Build a class that knows how to act like an integer, but also knows how to act like a sequence of bits. Or just find one on PyPI. Then your code becomes trivial. For example:
from bitstring import BitArray

def add_bits(n1, n2):
    """
    Given two BitArray values of the same length, return a BitArray
    of the same length that's the one's complement addition.
    """
    result = n1.uint + n2.uint
    if result >= (1 << n1.length):
        result = result % (1 << n1.length) + 1
    return BitArray(uint=result, length=n1.length)
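Assuming the third-party bitstring package is installed (pip install bitstring), usage would look something like this:
n1 = BitArray(bin='1010001111101001')
n2 = BitArray(bin='1000000110110101')
print(add_bits(n1, n2).bin)  # 0010010110011111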
I'm not sure that bitstring is actually the best module for what you're doing. There are a half-dozen different bit-manipulating libraries on PyPI, all of which have different interfaces and different strengths and weaknesses; I just picked the first one that came up in a search and slapped together an implementation using it.

Significant numbers digits of value by its error

I'm in need of a function returning only the significant part of a value with respect to a given error. Meaning something like this:
def (value, error):
    """ This function takes a value and determines its significant
    accuracy by its error.
    It returns only the scientifically important part of a value and drops the rest. """
    magic magic magic....
    return formatted value as String
What I have written so far to show what I mean:
import numpy as np

def signigicant(value, error):
    """ Returns a number in a scientific format. Meaning a value has an error
    and that error determines how many digits of the
    value are significant. e.g. value = 12.345MHz,
    error = 0.1MHz => 12.3MHz because the error is at the first digit.
    (in reality drop the MHz, it's just to show why.)"""
    xx = "%E" % error  # I assume this is most inefficient.
    xx = xx.split("E")
    xx = int(xx[1])
    if error <= value:  # this should be the normal case
        yy = np.around(value, -xx)
        if xx >= 0:  # Error is 1 or bigger
            return "%i" % yy
        else:  # Error is smaller than 1
            string = "%." + str(-xx) + "f"
            return string % yy
    if error > value:  # This should not be usual but it can happen.
        return "%g" % value
What I don't want is a function like numpy's around or round. Those functions take a value and need to be told what part of this value is important. The point is that in general I don't know how many digits are significant; it depends on the size of the error of that value.
Another example:
value = 123, error = 12, => 120
One can drop the 3, because the error is on the order of 10. However, this behaviour is not so important, because some people still write 123 for the value. Here it is okay but not perfectly right.
For big numbers the "g" format specifier is a usable choice, but not always what I need, for example if the error is bigger than the value (which happens e.g. when someone tries to measure something that does not exist):
value = 10, error = 100
I still wish to keep the 10 as the value, because I don't know it any better. The function should then return 10 and not 0.
The stuff I have written does work more or less, but it's clearly not efficient or elegant in any way. Also, I assume this question concerns hundreds of people, because every scientist has to format numbers in this way, so I'm sure there is a ready-to-use solution somewhere, but I haven't found it yet.
Probably my Google skills aren't good enough, but I wasn't able to find a solution to this in two days, so now I'm asking here.
For testing my code I used the following, but more test cases are needed:
errors = [0.2, 1.123, 1.0, 123123.1233215, 0.123123123768]
values = [12.3453, 123123321.4321432, 0.000321, 321321.986123612361236, 0.00001233214]
for value, error in zip(values, errors):
    print "Test Value: ", value, "Error:", error
    print "Result: ", signigicant(value, error)
import math

def round_on_error(value, error):
    significant_digits = 10**math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits
Example:
>>> errors = [0.2,1.123,1.0, 123123.1233215,0.123123123768]
>>> values = [12.3453,123123321.4321432, 0.000321 ,321321.986123612361236,0.00001233214 ]
>>> map(round_on_error, values, errors)
[12.3, 123123321.0, 0.0, 300000.0, 0.0]
And if you want to keep a value that is inferior to its error:
def round_on_error(value, error):
    if value < error:
        return value
    significant_digits = 10**math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits
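With that guard in place, the value = 10, error = 100 example from the question now comes back unchanged:
>>> round_on_error(10, 100)
10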

Python: Function takes 1 argument for 2 given

I have looked on this website for something similar, and attempted to debug using previous answers, and failed.
I'm testing a module (I did not write it) that raises the grade value of a course's grade from, say, a B- to a B, but never across base grade levels (i.e., never B+ to A-).
The original module is called transcript.py
I'm testing it in my own testtranscript.py
I'm testing that module by importing it: 'import transcript' and 'import cornelltest'
I have ensured that all files are in the same folder/directory.
There is the function raise_grade present in transcript.py (there are multiple definitions in this module, but raise_grade is the only one giving me any trouble).
ti is in the form ('class name', 'gradvalue')
There's already another definition converting floats to strings and back (i.e., 3.0 -> B).
def raise_grade(ti):
    """Raise gradeval of transcript line ti by a non-noticeable amount.
    """
    # value of the base letter grade, e.g., 4 (or 4.0) for a 4.3
    bval = int(ti.gradeval)
    print 'bval is:"' + str(bval) + '"'
    # part after decimal point in raised grade, e.g., 3 (or 3.0) for a 4.3
    newdec = min(int((ti.gradeval + .3)*10) % 10, 3)
    print 'newdec is:"' + str(newdec) + '"'
    # get result by adding the two values together, after shifting newdec one
    # decimal place
    newval = bval + round(newdec/10.0, 1)
    ti.gradeval = newval
    print 'newval is:"' + str(newval) + '"'
I will probably get rid of the print later.
When I run testtranscript, which imports transcript:
def test_raise():
    """test raise_grade"""
    testobj = transcript.Titem('CS1110', 'B-')
    transcript.raise_grade('CS1110', 'B-')
    cornelltest.assert_floats_equal(3.0, transcript.lettergrade_to_val("B-"))
I get this from the cmd shell:
TypeError: raise_grade takes exactly 1 argument (2 given)
Edit1: So now I see that I am giving it two parameters when raise_grade(ti) takes just one, but perhaps it would shed more light if I just put out the rest of the code. I'm still stuck as to why I get a "'str' object has no attribute 'gradeval'" error.
LETTER_LIST = ['B', 'A']
# List of valid modifiers to base letter grades.
MODIFIER_LIST = ['-', '+']

def lettergrade_to_val(lg):
    """Returns: numerical value of letter grade lg.

    The usual numerical scheme is assumed: A+ -> 4.3, A -> 4.0, A- -> 3.7, etc.
    Precondition: lg is a 1 or 2-character string consisting of a "base" letter
    in LETTER_LIST optionally followed by a modifier in MODIFIER_LIST."""
    # If LETTER_LIST or MODIFIER_LIST change, the implementation of
    # this function must change.

    # Get value of base letter. Trick: index in LETTER_LIST is shifted from value
    bv = LETTER_LIST.index(lg[0]) + 3
    # Trick with indexing in MODIFIER_LIST to get the modifier value
    return bv + ((MODIFIER_LIST.index(lg[1]) - .5)*.3/.5 if (len(lg) == 2) else 0)
class Titem(object):
    """A Titem is an 'item' on a transcript, like "CS1110 A+"

    Instance variables:
        course [string]: course name. Always at least 1 character long.
        gradeval [float]: the numerical equivalent of the letter grade.
            Valid letter grades are 1 or 2 chars long, and consist
            of a "base" letter in LETTER_LIST optionally followed
            by a modifier in MODIFIER_LIST.
            We store values instead of letter grades to facilitate
            calculations of GPA later.
            (In "real" life, one would write a function that,
            when displaying a Titem, would display the letter
            grade even though the underlying representation is
            numerical, but we're keeping things simple for this
            lab.)
    """

    def __init__(self, n, lg):
        """Initializer: A new transcript line with course (name) n, gradeval
        the numerical equivalent of letter grade lg.

        Preconditions: n is a non-empty string.
            lg is a string consisting of a "base" letter in LETTER_LIST
            optionally followed by a modifier in MODIFIER_LIST.
        """
        # assert statements that cause an error when preconditions are violated
        assert type(n) == str and type(lg) == str, 'argument type error'
        assert (len(n) >= 1 and 0 < len(lg) <= 2 and lg[0] in LETTER_LIST and
                (len(lg) == 1 or lg[1] in MODIFIER_LIST)), 'argument value error'
        self.course = n
        self.gradeval = lettergrade_to_val(lg)
Edit2: I understand the original problem now, but it seems that the original writer got the code wrong: raise_grade doesn't work properly for grade values like 3.7 -> 4.0, since bval truncates the original float to an int, which doesn't give the right base letter value in this case.
You are calling the function incorrectly; you should be passing the testobj:
def test_raise():
    """test raise_grade"""
    testobj = transcript.Titem('CS1110', 'B-')
    transcript.raise_grade(testobj)
    ...
The raise_grade function is expecting a single argument ti which has a gradeval attribute, i.e. a Titem instance.
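As for the bug noted in Edit2 (3.7 not raising to 4.0): one possible fix, my own sketch rather than the module's official solution, is to find the base letter value by rounding to the nearest integer instead of truncating:
def raise_grade(ti):
    """Raise ti.gradeval by one modifier step (2.7 -> 3.0, 3.0 -> 3.3),
    never crossing base letter grades (3.3 stays 3.3)."""
    base = int(ti.gradeval + 0.5)          # nearest base value: 2.7 -> 3, 3.3 -> 3
    offset = round(ti.gradeval - base, 1)  # -0.3, 0.0, or 0.3
    ti.gradeval = base + min(offset + 0.3, 0.3)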

Can I write a function that carries out symbolic calculations in Python 2.7?

I'm currently transitioning from Java to Python and have taken on the task of trying to create a calculator that can carry out symbolic operations on infix-notated mathematical expressions (without using custom modules like Sympy). Currently, it's built to accept space-delimited strings and can only carry out the (, ), +, -, *, and / operators.
For example, given the string '2 * ( ( 9 / 6 ) + 6 * x )', my program should carry out the following steps:
2 * ( 1.5 + 6 * x )
3 + 12 * x
But I can't get the program to ignore the x when distributing the 2. In addition, how can I handle 'x * 6 / x' so it returns '6' after simplification?
EDIT: To clarify, by "symbolic" I meant that it will leave letters like "A" and "f" in the output while carrying out the remaining calculations.
EDIT 2: I (mostly) finished the code. I'm posting it here if anyone stumbles on this post in the future, or if any of you were curious.
def reduceExpr(useArray):
    # Use Python's native eval() to compute if no letters are detected.
    if (not hasLetters(useArray)):
        return [calculate(useArray)]  # Different from eval() because it returns string version of result
    # Base case. Returns useArray if the list size is 1 (i.e., it contains one string).
    if (len(useArray) == 1):
        return useArray
    # Base case. Returns the space-joined elements of useArray as a list with one string.
    if (len(useArray) == 3):
        return [' '.join(useArray)]
    # Checks to see if parentheses are present in the expression.
    # Counts number of parentheses & keeps track of first ( found.
    parentheses = 0
    leftIdx = -1
    # This try/except block is essentially an if/else block. Since useArray.index('(') raises a ValueError
    # if it can't find '(' in useArray, the next line is not carried out, and parentheses is not incremented.
    try:
        leftIdx = useArray.index('(')
        parentheses += 1
    except ValueError:
        pass
    # If a ValueError was raised, leftIdx = -1 and rightIdx = parentheses = 0.
    rightIdx = leftIdx + 1
    while (parentheses > 0):
        if (useArray[rightIdx] == '('):
            parentheses += 1
        elif (useArray[rightIdx] == ')'):
            parentheses -= 1
        rightIdx += 1
    # Provided the parentheses pair isn't empty, runs contents through again; else, removes the parentheses
    if (leftIdx > -1 and rightIdx - leftIdx > 2):
        return reduceExpr(useArray[:leftIdx] + [' '.join(['(', reduceExpr(useArray[leftIdx+1:rightIdx-1])[0], ')'])] + useArray[rightIdx:])
    elif (leftIdx > -1):
        return reduceExpr(useArray[:leftIdx] + useArray[rightIdx:])
    # If operator is + or -, hold the first two elements and process the rest of the list first
    if isAddSub(useArray[1]):
        return reduceExpr(useArray[:2] + reduceExpr(useArray[2:]))
    # Else, if operator is * or /, process the first 3 elements first, then the rest of the list
    elif isMultDiv(useArray[1]):
        return reduceExpr(reduceExpr(useArray[:3]) + useArray[3:])
    # Placed here so the function always has an explicit return (it is called by another function).
    return None
You need much more processing before you go into operations on symbols. The form you want to get to is a tree of operations with values in the leaf nodes. First you need to do a lexer run on the string to get elements - although if you always have space-separated elements it might be enough to just split the string. Then you need to parse that array of tokens using some grammar you require.
If you need theoretical information about grammars and parsing text, start here: http://en.wikipedia.org/wiki/Parsing If you need something more practical, go to https://github.com/pyparsing/pyparsing (you don't have to use the pyparsing module itself, but their documentation has a lot of interesting info) or http://www.nltk.org/book
From 2 * ( ( 9 / 6 ) + 6 * x ), you need to get to a tree like this:
          *
    2           +
            /         *
          9   6     6   x
Then you can visit each node and decide if you want to simplify it. Constant operations will be the simplest ones to eliminate - just compute the result and exchange the "/" node with 1.5 because all children are constants.
There are many strategies to continue, but essentially you need to find a way to go through the tree and modify it until there's nothing left to change.
If you want to print the result then, just walk the tree again and produce an expression which describes it.
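As a concrete illustration (my own sketch, using plain tuples rather than a real node class), a constant-folding pass over such a tree could look like this:
from __future__ import division  # so 9 / 6 is 1.5 under Python 2.7 as well

# A node is a tuple ('op', left, right); leaves are numbers or variable names.
def simplify(node):
    if not isinstance(node, tuple):
        return node  # leaf: a number or a variable name like 'x'
    op, left, right = node
    left, right = simplify(left), simplify(right)
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        # Both children are constants: fold this node into a single number.
        return {'+': left + right, '-': left - right,
                '*': left * right, '/': left / right}[op]
    return (op, left, right)

tree = ('*', 2, ('+', ('/', 9, 6), ('*', 6, 'x')))  # 2 * ((9 / 6) + 6 * x)
print(simplify(tree))  # ('*', 2, ('+', 1.5, ('*', 6, 'x')))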
If you are parsing expressions in Python, you might consider Python syntax for the expressions and parse them using the ast module (AST = abstract syntax tree).
The advantages of using Python syntax: you don't have to make a separate language for the purpose, the parser is built in, and so is the evaluator. Disadvantages: there's quite a lot of extra complexity in the parse tree that you don't need (you can avoid some of it by using the built-in NodeVisitor and NodeTransformer classes to do your work).
>>> import ast
>>> a = ast.parse('x**2 + x', mode='eval')
>>> ast.dump(a)
"Expression(body=BinOp(left=BinOp(left=Name(id='x', ctx=Load()), op=Pow(),
right=Num(n=2)), op=Add(), right=Name(id='x', ctx=Load())))"
Here's an example class that walks a Python parse tree and does recursive constant folding (for binary operations), to show you the kind of thing you can do fairly easily.
from ast import *

class FoldConstants(NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.left, Num) and isinstance(node.right, Num):
            expr = copy_location(Expression(node), node)
            value = eval(compile(expr, '<string>', 'eval'))
            return copy_location(Num(value), node)
        else:
            return node
>>> ast.dump(FoldConstants().visit(ast.parse('3**2 - 5 + x', mode='eval')))
"Expression(body=BinOp(left=Num(n=4), op=Add(), right=Name(id='x', ctx=Load())))"
