I came across this error with the following code:
def test_rec():
    import ast
    exec(compile(ast.fix_missing_locations(ast.parse("""
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)
print(fact(5))
""")), "<string>", "exec"))
This yields this error, which is weird:
Traceback (most recent call last):
File "/Users/gecko/.pyenv/versions/3.9.0/envs/lampy/lib/python3.9/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/Users/gecko/code/lampycode/tests/test_let_lang.py", line 6, in test_rec
exec(compile(ast.fix_missing_locations(ast.parse("""
File "<string>", line 4, in <module>
File "<string>", line 3, in fact
NameError: name 'fact' is not defined
If I copy and paste the same code into the REPL, it works fine:
>>> def fact(n):
...     return 1 if n == 0 else n * fact(n - 1)
...
>>> print(fact(5))
120
>>>
Any ideas?
I could reduce the problem further; here is a minimal example. It would overflow the stack, but it gives me the same "not defined" error:
def test_rec3():
    exec("""
def f():
    f()
f()
""")
--
Second edit, going even further: this only happens inside functions.
This works:
exec("""
def f(n):
print("end") if n == 1 else f(n-1)
f(10)""")
But this gives me the same error as above:
def foo():
    exec("""
def f(n):
    print("end") if n == 1 else f(n-1)
f(10)""")

foo()
If you use exec with the default locals, then binding local variables is undefined behavior. That includes def, which binds the new function to a local variable.
Also, functions defined inside exec can't access closure variables, which fact would be.
The best way to avoid these problems is to not use exec. The second best way is to provide an explicit namespace:
namespace = {}
exec(whatever, namespace)
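For example, applying that to the original fact snippet (a minimal sketch, not the asker's exact test; the name test_rec_fixed is mine, and the ast/compile step is dropped for brevity):

def test_rec_fixed():
    namespace = {}
    exec("""
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)
print(fact(5))
""", namespace)  # the def binds into namespace, so the recursive lookup of fact succeeds

test_rec_fixed()  # prints 120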
Related
I have defined the following functions:
def f(x):
    return x*a

def g(x,a):
    return f(x)

g(1,2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in g
File "<stdin>", line 2, in f
NameError: name 'a' is not defined
Now if I try to evaluate g(x, a) for any values of x and a, it says that a is not defined. I suspect this is because a would have to be a global variable.
I have heard that using global variables is bad practice, so my question is: how do I make g(x, a) give a result, with a given as an argument?
Note: the reason I am not giving a as an argument to f(x) is that f needs to be solved as a differential equation (using scipy) with only the relevant variables as arguments.
Since a is a variable within the function g, why not avoid the whole global-variable situation with the following:
def f(x):
    # do something to alter x
    return x

def g(x, a):
    return f(x)*a

g(1,2)  # returns f(x) * 2
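If f really must be callable with only x (as in the note about scipy), a closure or functools.partial can bind a without a global; this is a minimal sketch under that assumption, not the asker's actual model (scipy.integrate.odeint also accepts an args tuple for exactly this purpose):

from functools import partial

def f(x, a):
    return x * a

def g(x, a):
    # Bind a so the resulting callable takes only x,
    # the shape an ODE solver typically expects.
    fx = partial(f, a=a)
    return fx(x)

print(g(1, 2))  # 2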
In Python 3, if a function that calls itself recursively is injected into exec() inside another function, I get an error.
For example, the code below
def B(pys):
    exec(pys)

pys="""
def fibonacci(n):
    if n == 1 or n == 2:
        r = 1
    else:
        r = fibonacci(n - 1) + fibonacci(n - 2)
    return r
print(fibonacci(3))
"""
B(pys)
will raise a NameError:
$ py -3.8 testrecursivefun.py
Traceback (most recent call last):
File "testrecursivefun.py", line 14, in <module>
B(pys)
File "testrecursivefun.py", line 2, in B
exec(pys)
File "<string>", line 9, in <module>
File "<string>", line 6, in fibonacci
NameError: name 'fibonacci' is not defined
If I run exec(pys) directly at module level, the exception disappears.
The reason is described in another question, How does exec work with locals?, but I still don't know how to make the recursive call inside exec() work. The function name is dynamic for me, so I cannot add it to the locals() passed to exec(). Can anyone help me figure this out?
For the sake of an answer, you can wrap your code in a function so the recursive function is in its local scope:
import textwrap

def B(pys):
    exec(pys, globals(), {})

pys="""
def fibonacci(n):
    if n == 1 or n == 2:
        r = 1
    else:
        r = fibonacci(n - 1) + fibonacci(n - 2)
    return r
print(fibonacci(11))
"""

def wrap(s):
    return "def foo():\n" \
           "{}\n" \
           "foo()".format(textwrap.indent(s, ' ' * 4))

B(wrap(pys))
Generally, reconsider using exec.
I actually got interested in your question, so I started researching this topic. It seems the simple solution to your problem is to:
First, compile the string to a code object using Python's compile function.
Then, execute the compiled code using the exec function.
Here is the sample solution:
psy="""
def fibonacci(n):
if n == 1 or n == 2:
r = 1
else:
r = fibonacci(n - 1) + fibonacci(n - 2)
return r
print(fibonacci(3))
"""
def B(psy):
code = compile(psy, '<string>', 'exec')
exec(code, globals())
B(psy)
Here compile takes three parameters:
The first is the code as a string; the second is a filename hint, for which we use '<string>' since the code comes from a string rather than a file; and the third is the mode, which can be one of 'exec', 'eval', or 'single'.
This link contains a detailed explanation of how you should use exec and eval in Python. Do check it out for the details.
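A variant of the same idea (a minimal sketch, not from the answer above): executing the compiled code in a fresh dict instead of globals() still gives the def a proper global namespace to bind into, so the recursion works, while keeping fibonacci out of the module's real globals. Returning the dict is my own addition:

def B(psy):
    code = compile(psy, '<string>', 'exec')
    namespace = {}           # serves as globals for the executed code
    exec(code, namespace)    # fibonacci binds here, so its recursive lookup succeeds
    return namespace         # hypothetical: hand back the defined names

ns = B(psy)                  # also prints 2, from the print(fibonacci(3)) inside psy
print(ns['fibonacci'](5))    # 5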
I'm trying out Python's timeit function in my REPL. It can time small pieces of code in two ways: either as a callable, or as a quoted expression. I'd like to know why the following code produces different timing results.
>>> import timeit
>>> timeit.timeit("lambda *args: None")
0.058281898498535156
>>> timeit.timeit(lambda *args: None)
0.0947730541229248
>>>
My intuition tells me that there should be more 'overhead' associated with the quoted string variant because it requires interpretation, but this does not appear to be the case; apparently my intuition is mistaken.
Here's another code snippet. There does not appear to be a huge time difference between timing the callable function vs. timing the quoted statement:
>>> def costly_func():
...     return list(map(lambda x: x^2, range(10)))
...
>>> import timeit
>>> timeit.timeit(costly_func)
2.421797037124634
>>> timeit.timeit("list(map(lambda x: x^2, range(10)))")
2.3588619232177734
Observe:
>>> def costly():
...     return list(map(str, list(range(1_000_000))))
...
>>> timeit.timeit(costly, number=100)
30.65105245400082
>>> timeit.timeit('costly', number=1_000_000_000, globals=globals())
27.45540758000061
Look at the number argument. It took 30 seconds to execute the function costly 100 times. It took almost 30 seconds to execute the expression costly 1'000'000'000 (!) times.
Why? Because the second code does not execute the function costly! The only thing it executes is the expression costly: notice the lack of parentheses, which means it's not a function call. The expression costly is basically a no-op (well, it just requires checking whether the name "costly" exists in the current scope, that's all), that's why it's so fast, and if Python was smart enough to optimise it away, the execution of the expression costly (not costly()!) would be instantaneous!
In your case, saying lambda *args: None is simply defining an anonymous function, right? When you execute this exact code, a new function is created, but not executed (in order to do that, you should call it: (lambda *args: None)()).
So, timing the string "lambda *args: None" with timeit.timeit("lambda *args: None") basically tests how fast Python can spit out new anonymous functions.
Timing the function itself with timeit.timeit(lambda *args: None) tests how fast Python can execute an existing function.
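To compare like with like, time a string that actually calls an existing function against the callable itself, or time function creation in both forms; a minimal sketch (the name f is mine, not from the question):

import timeit

f = lambda *args: None

# Both of these time a call to f; the string form needs globals to find the name.
timeit.timeit("f()", globals=globals())
timeit.timeit(f)

# Both of these time the creation of a new anonymous function.
timeit.timeit("lambda *args: None")
timeit.timeit(lambda: (lambda *args: None))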
Spitting out newly created functions is a piece of cake, while actually running them can be really hard.
Take this code for example:
def Ackermann(m, n):
    if m == 0:
        return n + 1
    if m > 0:
        if n == 0:
            return Ackermann(m - 1, 1)
        elif n > 0:
            return Ackermann(m - 1, Ackermann(m, n - 1))
If you put that exact code in a string and timeit it, you'll get something like this:
>>> code = """def Ackermann(m, n):
... if m == 0:
... return 0
... if m > 0:
... if n == 0:
... return Ackermann(m - 1, 1)
... elif n > 0:
... return Ackermann(m - 1, Ackermann(m, n - 1))"""
>>> timeit.timeit(code, number=1_000_000)
0.10481472999890684
Now try to timeit the function itself:
>>> timeit.timeit(lambda : Ackermann(6, 4), number=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/timeit.py", line 232, in timeit
return Timer(stmt, setup, timer, globals).timeit(number)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/timeit.py", line 176, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
File "<stdin>", line 1, in <lambda>
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
[Previous line repeated 1 more time]
File "<stdin>", line 6, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 6, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
[Previous line repeated 983 more times]
File "<stdin>", line 6, in Ackermann
File "<stdin>", line 2, in Ackermann
RecursionError: maximum recursion depth exceeded in comparison
See - you can't even run that! Actually, probably nobody can since it's so much recursion!
Why did the first call succeed, though? Because it didn't execute anything, it just spit out a lot of new functions and got rid of all of them shortly after.
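If you do want to time the recursion actually running, keep the arguments small; a minimal sketch (Ackermann(2, 3) is my choice, small enough to stay well under the recursion limit):

import timeit

def Ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return Ackermann(m - 1, 1)
    return Ackermann(m - 1, Ackermann(m, n - 1))

# Times real recursive execution rather than just function creation.
print(timeit.timeit(lambda: Ackermann(2, 3), number=10_000))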
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
>>> show('tiger','cat', {'name':'tom'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (3 given)
Since the append method of alist only accepts one argument, why isn't a syntax error reported for the line alist.append(*args, **kwargs) in the definition of show?
It's not a syntax error because the syntax is perfectly fine and that function may or may not raise an error depending on how you call it.
The way you're calling it:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
A different way:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger', 'tiger']
>>> class L: pass
...
>>> alist = L()
>>> alist.append = print
>>> show('tiger','cat')
tiger cat
<__main__.L object at 0x000000A45DBCC048>
Python objects are strongly typed. The names that bind to them are not. Nor are function arguments. Given Python's dynamic nature it would be extremely difficult to statically predict what type a variable at a given source location will be at execution time, so the general rule is that Python doesn't bother trying.
In your specific example, alist is not in the local scope. It can therefore be modified after your function definition has been executed, and the changes will be visible to your function; cf. the code snippets below.
So, in accord with the general rule: predicting whether or not alist will be a list when you call .append? Near-impossible. In particular, the interpreter cannot predict that this will be an error.
Here is some code just to drive home the point that static type checking is by all practical means impossible in Python. It uses non-local variables as in your example.
funcs = []
for a in [1, "x", [2]]:
    def b():
        def f():
            print(a)
        return f
    funcs.append(b())

for f in funcs:
    f()
Output:
[2] # value of a at definition time (of f): 1
[2] # value of a at definition time (of f): 'x'
[2] # value of a at definition time (of f): [2]
And similarly for variables that are neither global nor local (i.e., closure variables):
funcs = []
for a in [1, "x", [2]]:
    def b(a):
        def f():
            print(a)
        a = a+a
        return f
    funcs.append(b(a))

for f in funcs:
    f()
Output:
2 # value of a at definition time (of f): 1
xx # value of a at definition time (of f): 'x'
[2, 2] # value of a at definition time (of f): [2]
It's not a syntax error because it's resolved at runtime. Syntax errors are caught up front, during parsing: things like unmatched brackets or malformed statements. An argument-count mismatch is not one of them, and this isn't even a missing argument: *args means any number of arguments.
show has no way of knowing what you'll pass it at runtime, and since you are expanding your args variable inside show, any number of arguments could be coming in, and that's valid syntax. list.append takes exactly one argument: one tuple, one list, one int, one string, one custom class, etc. What you are passing it is some number of elements depending on the input. If you remove the *, it's all dandy, since args itself is a single element, e.g. alist.append(args).
All this means that your show function is faulty. It is equipped to handle args only when it is of length 1. If it is of length 0, you also get a TypeError at the point append is called. If it is longer than that, it's broken, but you won't know until you run it with the bad input.
You could loop over the elements in args (and kwargs) and add them one by one.
alist = []
def show(*args, **kwargs):
    for a in args:
        alist.append(a)
    for kv in kwargs.items():
        alist.append(kv)
    print(alist)
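For example, a hypothetical call (not from the question) now collects every positional and keyword argument:

show('tiger', 'cat', name='tom')
# prints ['tiger', 'cat', ('name', 'tom')]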
Just for the sake of curiosity, I want to know this:
I know the scope of an inner function is limited to the outer function's body, but is there still any way to access an inner function's variables outside its scope, or to call the inner function outside its scope?
In [7]: def main():
   ...:     def sub():
   ...:         a=5
   ...:         print a
   ...:
In [8]: main()
In [9]: main.sub()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/home/dubizzle/webapps/django/dubizzle/<ipython-input-9-3920726955bd> in <module>()
----> 1 main.sub()
AttributeError: 'function' object has no attribute 'sub'
In [10]:
>>> def main():
...     def sub():
...         a=5
...         print a
...
>>> main.__code__.co_consts
(None, <code object sub at 0x2111ad0, file "<stdin>", line 2>)
>>> exec main.__code__.co_consts[1]
5
You can if you return the inner function as a value:
>>> def main():
...     def sub():
...         a = 5
...         print a
...     return sub
...
>>> inner = main()
>>> inner()
5
or you can attach it to main as an attribute (functions are objects, after all):
>>> def main():
...     def sub():
...         a = 5
...         print a
...     main.mysub = sub
...
>>> main()
>>> main.mysub()
5
but you had better document your very good reason for doing this, since it will almost certainly surprise anyone reading your code :-)
No, you can't. The inner function is not an attribute of the outer function.
The inner function only comes into existence when its def statement is executed (while the outer function runs), and it ceases to exist when the outer function returns.
You could return the inner function, of course.
A function is just another object in Python and can be introspected.
You can get the outer function body at runtime and parse/eval it to make the function available in the current namespace.
>>> import inspect
>>> def outer():
...     def inner():
...         print "hello!"
>>> inspect.getsourcelines(outer)
([u'def outer():\n', u'    def inner():\n', u'        print "hello!"\n'], 1)
Not really the same thing as calling outer.inner(), but if you are not making the inner function explicitly available outside the scope of the outer function, I guess it is the only possibility.
For example, a very naive eval attempt could be:
>>> exec('\n'.join([ line[4:] for line in inspect.getsourcelines(outer)[0][1:] ]))
>>> inner()
hello!
An inner function is just a local variable like any other, so the same rules apply. If you want to access it, you have to return it.