I have a question that I am sure has been on the mind of every intermediate-level Python programmer at some point: how to fix/prevent/avoid/work around those ever-so-persistent and equally frustrating NameErrors. I'm not talking about actual errors (like typos, etc.), but a bizarre problem that basically says a global name was not defined, when in reality it was defined further down. For whatever reason, Python seems to be extremely needy in this area: every single variable absolutely, positively has to be defined above anything that refers to it (or so it seems).
For example:
condition = True

if condition == True:
    doStuff()

def doStuff():
    it_worked = True
Causes Python to give me this:
Traceback (most recent call last):
File "C:\Users\Owner\Desktop\Python projects\test7.py", line 4, in <module>
doStuff()
NameError: name 'doStuff' is not defined
However, the name WAS defined, just not where Python apparently wanted it. For a cheesy little function like doStuff() it's no big deal; just cut and paste the function into an area that satisfies the system's requirement for a certain order. But when you try to actually design something with it, it makes organizing code practically impossible (I've had to "un-organize" tons of code to accommodate this bug). I have never encountered this problem in any of the other languages I've written in, so it seems to be specific to Python... but anyway, I've researched this in the docs and haven't found any solutions (or even potential leads to a possible solution), so I'd appreciate any tips, tricks, workarounds or other suggestions.
It may be as simple as learning a specific organizational structure (some kind of "Pythonic" and very strategic approach to working around the bug), or maybe I just need to use a lot of import statements so it's easier to organize those in a specific order that keeps the system from acting up...
Avoid writing code (other than declarations) at the top level; use a main() function in files meant to be executed directly:
def main():
    condition = True
    if condition:
        do_stuff()

def do_stuff():
    it_worked = True

if __name__ == '__main__':
    main()
This way you only need to make sure that the if __name__ == '__main__' construct comes after all the function definitions (e.g. place it at the end of the file); the rest can be in any order. The whole file will have been executed top to bottom (and thus all the names defined at module level will exist) by the time main() is called.
As a rule of thumb: in most cases, define all your functions first and then use them later in your code.
It is just the way it is: every name has to be defined at the time it is used.
This is especially true for code executed at the top level:
func()

def func():
    func2()

def func2():
    print("OK")

func()
The first func() will fail, because it is not defined yet.
But if I call func() at the end, everything will be OK, although func2() is defined after func().
Why? Because at the time of calling, func2() exists.
In short, the code of func() says "Call whatever is defined as func2 at the time of calling".
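To illustrate that late binding, here is a small sketch (the two versions of func2 are just for demonstration):

def func():
    func2()

def func2():
    print("first version")

func()  # prints "first version"

def func2():
    print("second version")

func()  # now prints "second version" -- func looks up func2 at call time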
In Python defining a function is an act which happens at runtime, not at compile time. During that act, the code compiled at compile time is assigned to the name of the function. This name then is a variable in the current scope. It can be overwritten later as any other variable can:
def f():
    print(42)

f()  # will print 42

def f():
    print(23)

f()  # will print 23
You can even assign functions like other values to variables:
def f():
    print(42)

g = 23

f()         # will print 42
print(g)    # will print 23

f, g = g, f

print(f)    # will print 23
g()         # will print 42
When you say that you didn't come across this in other languages, it's because the other languages you are referring to aren't run as scripts. Try similar things in bash, for instance, and you will find that other languages can behave the same way as Python here.
There are a few things to say about this:
If your code is so complex that you can't organize it in one file, think about splitting it across many files and importing them into one smaller main file
If you put your function in a class it will work. Example:
class test():
    def __init__(self):
        self.do_something()

    def do_something(self):
        print('test')

test()  # prints 'test' -- do_something is resolved when __init__ runs
As said in the comment from Volatility, this is a characteristic of interpreted languages
Related
This question is similar to others asked on here, but after reading the answers I'm not grasping it and would appreciate further guidance.
While sketching new code I find myself adding a lot of statements like:
print('var=')
pprint(var)
It became tedious always writing that, so I thought I could make it into a function. Since I want to print the variable name on the preceding line, I tried:
def dbp(var):
    eval('print(\'{0}=\')'.format(var))
    eval('pprint({0})'.format(var))
so then I can do things like:
foo = 'bar'
dbp('foo')
which prints
foo=
'bar'
This is all great, but when I go to use it in a function things get messed up. For example, doing
def f():
    a = ['123']
    dbp('a')

f()
raises a NameError (NameError: name 'a' is not defined).
My expectation was that dbp() would have read access to anything in f()'s scope, but clearly it doesn't. Can someone explain why?
Also, better ways of printing a variable's name followed by its formatted contents are also appreciated.
You really should look at other ways of doing this.
The logging module is a really good habit to get into, and you can turn off and on debug output.
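For example, a minimal sketch of that habit (the logger name and message format are arbitrary choices):

import logging
from pprint import pformat

logging.basicConfig(level=logging.DEBUG)  # change to logging.INFO to silence debug output
log = logging.getLogger(__name__)

var = {'spam': [1, 2, 3]}
log.debug('var=\n%s', pformat(var))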
Python 3.6 has f-strings, so you could simplify this to:

pprint(f'var=\n{var}')
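If you can use Python 3.8 or newer, the = specifier in f-strings prints the variable's name along with its value, which covers most of what the question asks for:

foo = 'bar'
print(f'{foo=}')  # prints: foo='bar'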
However, here's an example (not recommended) using locals():
In []:
from pprint import pprint

def dbp(var, l):
    print('{}='.format(var))
    pprint(l[var])

def f():
    a = 1
    dbp('a', locals())

f()
Out[]:
a=
1
First of all, I'd like to say that eval is a big security risk for whoever is going to be running that code.
However, if you absolutely must, you can do this.
from pprint import pprint

def dbp(var):
    env = {'var': var}
    # Adding global variables to the environment
    env.update(globals())
    eval("print('{0}=')".format(var))
    eval('pprint(var)', env)

def f():
    a = ['123']
    dbp('a')
you can then do
>>> f()
a=
'a'
True or False
If a function is defined but never called, then Python automatically detects that and issues a warning
One of the issues with this is that functions in Python are first class objects. So their name can be reassigned. For example:
def myfunc():
    pass

a = myfunc
myfunc = 42
a()
We also have closures, where a function is returned by another function and the original name goes out of scope.
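For example, a small closure sketch (the names are made up):

def make_adder(n):
    def add(x):        # 'add' exists only inside make_adder
        return x + n
    return add

plus_two = make_adder(2)
print(plus_two(5))     # 7 -- the name 'add' has long gone out of scope here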
Unfortunately it is also perfectly legal to define a function with the same name as an existing one. For example:
def myfunc():  # <<< This code is never called
    pass

def myfunc():
    pass

myfunc()
So any tracking must include the function's id, not just its name - although that won't help with closures, since the id could get reused. It also won't help if the __name__ attribute of the function is reassigned.
You could track function calls using a decorator. Here I have used the name and the id - the id on its own would not be readable.
import functools

globalDict = {}

def tracecall(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        global globalDict
        key = "%s (%d)" % (f.__name__, id(f))
        # Count the number of calls
        if key in globalDict:
            globalDict[key] += 1
        else:
            globalDict[key] = 1
        return f(*args, **kwargs)
    return wrapper

@tracecall
def myfunc1():
    pass

myfunc1()
myfunc1()

@tracecall
def myfunc1():
    pass

a = myfunc1
myfunc1 = 42
a()

print(globalDict)
Gives:
{'myfunc1 (4339565296)': 2, 'myfunc1 (4339565704)': 1}
But that only gives the functions that have been called, not those that have not!
So where to go from here? I hope you can see that the task is quite difficult given the dynamic nature of python. But I hope the decorator I show above could at least allow you to diagnose the way the code is used.
No, it is not. Python does not detect this. If you want to detect which functions are or are not called at run time, you can use a global set in your program: inside each function, add the function's name to the set. Later you can print the set's contents and check whether a given function was called.
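A rough sketch of that idea (the names here are arbitrary):

called = set()            # global registry of functions that actually ran

def myfunc1():
    called.add('myfunc1')
    # ... real work ...

def myfunc2():
    called.add('myfunc2')
    # ... real work ...

myfunc1()
print(called)             # {'myfunc1'} -- myfunc2 was defined but never called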
False. Ignoring the difficulty and overhead of doing this, there's no reason why it would be useful.
A function that is defined in a module (i.e. a Python file) but not called elsewhere in that module might be called from a different module, so that doesn't deserve a warning.
If Python were to analyse all modules that get run over the course of a program, and print a warning about functions that were not called, it may be that a function was not called because of the input in this particular run e.g. perhaps in a calculator program there is a "multiply" function but the user only asked to sum some numbers.
If Python were to analyse all modules that make up a program and print a warning about functions that could not possibly be called (this is impossible, but stay with me here), then it would warn about functions that were intended for use in other programs. E.g. if you have two calculator programs, a simple one and an advanced one, maybe you have a central calc.py with utility functions, and then advanced functions like exp and log could not possibly be called when calc.py is used as part of the simple program, but that shouldn't cause a warning because they're needed for the advanced program.
So I started learning Python again, and I ran into an issue. Apparently you cannot call a method that is "below (in the editor)" the code that is calling it. For instance:
for check in lines:
    if is_number(check):
        print("number: " + check)
    else:
        print("String!" + check)

def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False
This causes an error (name is undefined), which makes sense. In C++ I know you can create a pointer for the function before you use it so the compiler knows what to look for, but how do I do this in python?
And is the method is_number a module? I hear lots of odd terminology being thrown around.
You should simply move the function above the place that calls it, or put the loop in a function of its own. The following works fine, because the name is_number inside check_lines is not resolved until the function is called.
def check_lines(lines):
    for check in lines:
        if is_number(check):
            print("number: " + check)
        else:
            print("String!" + check)

def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

check_lines(lines)
In my Python scripts, I always define the functions at the top, then put a few lines of code calling them at the bottom. This convention makes it easy to follow the control flow, because it's not interspersed with definitions, and it also makes it easier to later reuse your script as a module, which is similar to a Java package: just look at the "script" code near the bottom and remove that to get an importable module. (Or protect it with an if __name__ == '__main__' guard.)
Actually, Python does act like C here. The difference is that in C, the entire file is compiled before anything in the module is executed; in Python, the file is executed as it is being loaded.
So, the following will work, just as it will in C:
def f(x):
    return f2(x) + 1

def f2(x):
    return x*2
The function definitions just load the code - they don't check anything in there for existence. So even though f2 doesn't exist yet when f is first seen, when you actually call f, f2 does exist in the module scope and it works fine.
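For example, after the module has been loaded, calling f works because f2 exists by then:

print(f(3))  # 7 -- f2 is looked up (and found) only at this point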
This doesn't work:
print(f2(x) + 1)

def f2(x):
    return x*2
In this case, when the module is loaded, it is being interpreted, so when it hits a print it executes it immediately. This can be used to your benefit, but it causes a problem here: we look up f2 immediately and try to call it, and it doesn't exist yet.
You just can't: you can't access something that has not been declared/defined yet, which is the case in most programming languages.
Writing the code inside a main() function and adding the following lines at the end of the file will make sure all the function definitions are executed before the script proper starts:

if __name__ == "__main__":
    main()
I have been working at learning Python over the last week and it has been going really well, however I have now been introduced to custom functions and I sort of hit a wall. While I understand the basics of it, such as:
def helloworld():
    print("Hello World!")

helloworld()
I know this will print "Hello World!".
However, when it comes to getting information from one function to another, I find that confusing, i.e. when function1 and function2 have to work together to perform a task. Also, I'm not sure when to use the return command.
Lastly, when I have a list or a dictionary inside of a function. I'll make something up just as an example.
def my_function():
    my_dict = {"Key1": Value1,
               "Key2": Value2,
               "Key3": Value3,
               "Key4": Value4}
How would I access the key/value and be able to change them from outside of the function? ie: If I had a program that let you input/output player stats or a character attributes in a video game.
I understand bits and pieces of this, it just confuses me when they have different functions calling on each other.
Also, since this was my first encounter with the custom functions. Is this really ambitious to pursue and this could be the reason for all of my confusion? Since this is the most complex program I have seen yet.
Functions in Python can be both a regular procedure and a function with a return value. In fact, every Python function returns a value, which might be None.
If a return statement is not present, your function will execute to the end and return normally, yielding None as its return value.
>>> def foo():
...     pass
...
>>> foo() == None
True
If you have a return statement inside your function, the return value will be the value of the expression following it. For example, you may write return None and you'll be explicitly returning None; you can also write a bare return and you'll be implicitly returning None; or you can write return 3 and you'll be returning the value 3. This can grow in complexity.
>>> def foo():
...     print('hello')
...     return
...     print('world')
...
>>> foo()
hello

>>> def add(a, b):
...     return a + b
...
>>> add(3, 4)
7
If you want a dictionary (or any object) you created inside a function, just return it:
>>> def my_function():
...     my_dict = {"Key1": "Value1",
...                "Key2": "Value2",
...                "Key3": "Value3",
...                "Key4": "Value4"}
...     return my_dict
...
>>> d = my_function()
>>> d['Key1']
'Value1'
Those are the basics of function calling. There's even more: there are functions that return functions (related to decorators), you can even return multiple values (not really; you'll just be returning a tuple), and lots of other fun stuff :)
>>> def two_values():
...     return 3, 4
...
>>> a, b = two_values()
>>> print(a)
3
>>> print(b)
4
Hope this helps!
The primary way to pass information between functions is with arguments and return values. Functions can't see each other's variables. You might think that after
def my_function():
    my_dict = {"Key1": Value1,
               "Key2": Value2,
               "Key3": Value3,
               "Key4": Value4}

my_function()
my_dict would have a value that other functions would be able to see, but it turns out that's a really brittle way to design a language. Every time you call my_function, my_dict would lose its old value, even if you were still using it. Also, you'd have to know all the names used by every function in the system when picking the names to use when writing a new function, and the whole thing would rapidly become unmanageable. Python doesn't work that way; I can't think of any languages that do.
Instead, if a function needs to make information available to its caller, return the thing its caller needs to see:
def my_function():
    return {"Key1": "Value1",
            "Key2": "Value2",
            "Key3": "Value3",
            "Key4": "Value4"}

print(my_function()['Key1'])  # Prints Value1
Note that a function ends when its execution hits a return statement (even if it's in the middle of a loop); you can't execute one return now, one return later, keep going, and return two things when you hit the end of the function. If you want to do that, keep a list of things you want to return and return the list when you're done.
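For example, a small sketch of collecting values in a list instead of trying to return more than once:

def squares(numbers):
    results = []
    for n in numbers:
        results.append(n * n)   # a 'return' here would end the function on the first item
    return results

print(squares([1, 2, 3]))       # [1, 4, 9]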
You send information into and out of functions with arguments and return values, respectively. This function, for example:
def square(number):
    """Return the square of a number."""
    return number * number
... receives information through the number argument, and sends information back with the return statement. You can use it like this:
>>> x = square(7)
>>> print(x)
49
As you can see, we passed the value 7 to the function, and it returned the value 49 (which we stored in the variable x).
Now, lets say we have another function:
def halve(number):
    """Return half of a number."""
    return number / 2.0
We can send information between two functions in a couple of different ways.
Use a temporary variable:
>>> tmp = square(6)
>>> halve(tmp)
18.0
use the first function directly as an argument to the second:
>>> halve(square(8))
32.0
Which of those you use will depend partly on personal taste, and partly on how complicated the thing you're trying to do is.
Even though they have the same name, the number variables inside square() and halve() are completely separate from each other, and they're invisible outside those functions:
>>> number
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'number' is not defined
So, it's actually impossible to "see" the variable my_dict in your example function. What you would normally do is something like this:
def my_function(my_dict):
    # do something with my_dict
    return my_dict
... and define my_dict outside the function.
(It's actually a little bit more complicated than that - dict objects are mutable (which just means they can change), so often you don't actually need to return them. However, for the time being it's probably best to get used to returning everything, just to be safe).
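To illustrate that mutability point, a small sketch (the names are made up):

def add_score(stats):
    stats['score'] = 100          # changes the caller's dict in place

player = {'name': 'Alice'}
add_score(player)
print(player)                     # {'name': 'Alice', 'score': 100} -- no return needed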
Consider this example:
def outer():
    s_outer = "outer\n"
    def inner():
        s_inner = "inner\n"
        do_something()
    inner()
I want the code in do_something to be able to access the variables of the calling functions further up the call stack, in this case s_outer and s_inner. More generally, I want to call it from various other functions, but always execute it in their respective context and access their respective scopes (implement dynamic scoping).
I know that in Python 3.x, the nonlocal keyword allows access to s_outer from within inner. Unfortunately, that only helps with do_something if it's defined within inner. Otherwise, inner isn't a lexically enclosing scope (similarly, neither is outer, unless do_something is defined within outer).
I figured out how to inspect stack frames with the standard library inspect, and made a small accessor that I can call from within do_something() like this:
import inspect

def reach(name):
    for f in inspect.stack():
        if name in f[0].f_locals:
            return f[0].f_locals[name]
    return None
and then
def do_something():
    print(reach("s_outer"), reach("s_inner"))
works just fine.
Can reach be implemented more simply? How else can I solve the problem?
There is no, and in my opinion should be no, elegant way of implementing reach, since that introduces a new non-standard indirection which is really hard to comprehend, debug, test and maintain. As the Python mantra (try import this) says:
Explicit is better than implicit.
So, just pass the arguments. You-from-the-future will be really grateful to you-from-today.
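For example, a plain-arguments version of the code from the question (a sketch):

def do_something(s_outer, s_inner):
    print(s_outer, s_inner)

def outer():
    s_outer = "outer\n"
    def inner():
        s_inner = "inner\n"
        do_something(s_outer, s_inner)
    inner()

outer()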
What I ended up doing was

scope = locals()

and making scope accessible from do_something. That way I don't have to reach, but I can still access the dictionary of the caller's local variables. This is quite similar to building a dictionary myself and passing it on.
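In sketch form, the idea is something like this (simplified):

def do_something(scope):
    print(scope.get("s_inner"))

def inner():
    s_inner = "inner\n"
    scope = locals()        # snapshot of the caller's local variables
    do_something(scope)

inner()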
We can get naughtier.
This is an answer to the "Is there a more elegant/shortened way to implement the reach() function?" half of the question.
We can give better syntax for the user: instead of reach("foo"), outer.foo.
This is nicer to type, and the language itself immediately tells you if you used a name that can't be a valid variable (attribute names and variable names have the same constraints).
We can raise an error, to properly distinguish "this doesn't exist" from "this was set to None".
If we actually want to smudge those cases together, we can getattr with the default parameter, or try-except AttributeError.
We can optimize: no need to pessimistically build a list big enough for all the frames at once.
In most cases we probably won't need to go all the way to the root of the call stack.
Just because we're inappropriately reaching up stack frames, violating one of the most important rules of programming (things far away should not invisibly affect behavior), doesn't mean we can't be civilized about it.
If someone is trying to use this Serious API for Real Work on a Python without stack frame inspection support, we should helpfully let them know.
import inspect

class OuterScopeGetter(object):
    def __getattribute__(self, name):
        frame = inspect.currentframe()
        if frame is None:
            raise RuntimeError('cannot inspect stack frames')
        sentinel = object()
        frame = frame.f_back
        while frame is not None:
            value = frame.f_locals.get(name, sentinel)
            if value is not sentinel:
                return value
            frame = frame.f_back
        raise AttributeError(repr(name) + ' not found in any outer scope')

outer = OuterScopeGetter()
Excellent. Now we can just do:
>>> def f():
... return outer.x
...
>>> f()
Traceback (most recent call last):
...
AttributeError: 'x' not found in any outer scope
>>>
>>> x = 1
>>> f()
1
>>> x = 2
>>> f()
2
>>>
>>> def do_something():
... print(outer.y)
... print(outer.z)
...
>>> def g():
... y = 3
... def h():
... z = 4
... do_something()
... h()
...
>>> g()
3
4
Perversion elegantly achieved.
Is there a better way to solve this problem? (Other than wrapping the respective data into dicts and passing those dicts explicitly to do_something())
Passing the dicts explicitly is a better way.
What you're proposing sounds very unconventional. When code increases in size, you have to break down the code into a modular architecture, with clean APIs between modules. It also has to be something that is easy to comprehend, easy to explain, and easy to hand over to another programmer to modify/improve/debug it. What you're proposing sounds like it is not a clean API, unconventional, with a non-obvious data flow. I suspect it would probably make many programmers grumpy when they saw it. :)
Another option would be to make the functions members of a class, with the data being in the class instance. That could work well if your problem can be modelled as several functions operating on the data object.
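For instance, a rough sketch of the class-based approach (the names are made up):

class Processor:
    def __init__(self, s_outer, s_inner):
        self.s_outer = s_outer      # shared state lives on the instance
        self.s_inner = s_inner

    def do_something(self):
        print(self.s_outer, self.s_inner)

p = Processor("outer\n", "inner\n")
p.do_something()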