Calling a separate Python program in a unit test - python

I am new to Python and unit testing. Following is the main unittest program that calls other Python programs, which act as test cases:
import unittest
from test import test_support

class MyTestCase1(unittest.TestCase):
    def test_feature_one(self):
        print "testing feature one"
        execfile("/root/test/add.py")

def test_main():
    test_support.run_unittest(MyTestCase1)

if __name__ == '__main__':
    test_main()
add.py is a basic program that adds two numbers and displays the result.
#!/usr/bin/env python
import sys

def disp(r):
    print r

def add():
    res = 3 + 5
    disp(res)

add()
But there is a problem when I call a function from another function. I hit the following error when I try to run the unit test (the first program). But if I run add.py as a standalone program outside the unit test suite, it works fine. Kindly help me understand this scenario.
======================================================================
ERROR: test_feature_one (__main__.MyTestCase1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "first.py", line 17, in test_feature_one
    execfile("/root/test/add.py")
  File "/root/test/add.py", line 12, in <module>
    add()
  File "/root/test/add.py", line 10, in add
    disp(res)
NameError: global name 'disp' is not defined
----------------------------------------------------------------------

From the docs on execfile (https://docs.python.org/2/library/functions.html#execfile):
"Remember that at module level, globals and locals are the same dictionary.
...
If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where execfile() is called."
I'm not very familiar with how exactly globals and locals work here, so I won't be able to give a deep explanation, but from what I understood:
the key here is that you're running execfile from a function. If you run it at module level, it will work:
if __name__ == '__main__':
    execfile('blah')
But if you run it from a function:
def f():
    execfile('blah')

if __name__ == '__main__':
    f()
it will fail, because of the magic with globals and locals.
How to fix your example: add a dictionary to the arguments of execfile, and it will work (remember that line from the docs: "If the locals dictionary is omitted it defaults to the globals dictionary.").
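For example, here is a minimal sketch of that fix applied to the test from the question (the sketch uses unittest.main() rather than test_support, purely for brevity):

import unittest

class MyTestCase1(unittest.TestCase):
    def test_feature_one(self):
        print "testing feature one"
        # passing globals() gives execfile a single shared dictionary,
        # so disp() is visible when add() runs inside add.py
        execfile("/root/test/add.py", globals())

if __name__ == '__main__':
    unittest.main()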
But instead of using execfile, I'd recommend importing add from add.py and just calling it in the test. (That will also require moving the add() call in add.py under an if __name__ == '__main__': guard, so that add does not run on import.)
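A rough sketch of that import-based version, assuming add.py is on the import path and its module-level add() call has been moved under an if __name__ == '__main__': guard:

import unittest
from add import add  # import the function instead of execfile'ing the script

class MyTestCase1(unittest.TestCase):
    def test_feature_one(self):
        print "testing feature one"
        add()  # add() runs in add.py's own namespace, so disp() is found

if __name__ == '__main__':
    unittest.main()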
Here is some info on how globals and locals work: http://www.diveintopython.net/html_processing/locals_and_globals.html

Related

How to share a global variable across files in Python?

I am trying to find a way to share a variable between multiple Python scripts. I have the following code:
b.py
my_variable = []

a.py
from b import my_variable  # import the value

def run():
    global x
    x = my_variable.append("and another string")
    print(my_variable)

if __name__ == '__main__':
    run()

c.py
import a
print(a.x)
a.py runs just fine without giving any error. However, when I run the c.py file, it gives the following error:
Traceback (most recent call last):
  File "E:/MOmarFarooq/Programming/Projects/Python Projects/Variables across files/c.py", line 2, in <module>
    print(a.x)
AttributeError: module 'a' has no attribute 'x'
What I want the code to do is print the new value of my_variable after it has been changed in a.py. Is there any way I can do that?
The error occurred because you never called the run function from a.py. The if __name__ == '__main__': statement is only satisfied if you are running a.py as a program, not importing it as a module.
So a.py should be:
from b import my_variable  # import the value

def run():
    global x
    x = my_variable.append("and another string")
    print(my_variable)

run()
Note that x will be set to None because the append function does not return anything. It just appends a value to a list.
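If x is meant to end up holding the list itself, a possible adjustment (just a sketch, keeping the global for illustration) is to append first and then bind x separately:

from b import my_variable  # import the value

def run():
    global x
    my_variable.append("and another string")  # append() returns None
    x = my_variable                           # so bind x to the list itself
    print(my_variable)

run()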
You need to call the run() function in your c.py file.
Here's how the code should be:
import a
a.run()
Well, module 'a' really does have no attribute 'x'. Module 'a' has a function that creates a variable 'x', but as long as that function isn't called, the attribute isn't there.
You could change file c.py to:
import a
a.run()
print(a.x)
Another solution would be to make sure that the run() function is always called when importing module 'a'. This is currently not the case, because of the line if __name__ == '__main__':.
If you don't want to run the code but only want to make sure the variable exists, just define it in the module. Before the definition of your run() method, just add x = None (or use any other initial value you prefer).
Note, however, that there are other problems with your code and that using globals in this way is a really bad programming pattern, which will likely lead to other problems later on. I wonder what you want to achieve. It would probably be a better solution to pass x as an argument to the run() function instead of referring to a global variable. But that's outside the scope of this question and difficult to answer without more information.
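For completeness, here is a hedged sketch of that argument-passing idea with hypothetical code: run() returns the value instead of setting a global, and c.py uses the return value directly.

# a.py (sketch)
from b import my_variable

def run():
    my_variable.append("and another string")
    return my_variable  # hand the value back instead of mutating a global

# c.py (sketch)
import a

x = a.run()
print(x)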

Functions defined in dynamically-loaded scripts cannot refer to each other

I'm trying to load functions from a script dynamically when I'm inside an IPython interactive shell. For example, suppose I have a Python script like this:
# script.py
import IPython as ip

def Reload():
    execfile('routines.py', {}, globals())

if __name__ == "__main__":
    ip.embed()
Suppose the file routines.py is like this:
# routines.py
def f():
    print 'help me please.'

def g():
    f()
Now if I run the script script.py, I'll be entering the interactive shell. If I type the following, my call to g() works:
execfile('routines.py')
g()
However, if I type the following, the call to g() fails:
Reload()
g()
I will get an error message saying that "global name f is not defined.", although I can still see that f and g are in the output when I type globals() in the interactive shell.
What's the difference between these two?
UPDATE:
The following works; however, it's not a preferred solution, so I would like a better solution to the problem above.
If I change script.py to:
# script.py
import IPython as ip

def Reload():
    execfile('routines.py')

if __name__ == "__main__":
    ip.embed()
And change routines.py to:
# routines.py
global f
global g

def f():
    print 'help me please.'

def g():
    f()
Then if I call Reload() in the interactive shell and then call g(), it works. However this is not a preferred approach because I have to declare global names.
UPDATE 2:
It seems that the problem is independent of IPython. With the first version of routines.py, if I start the Python shell and type the following by hand:
def Reload():
    execfile('routines.py', {}, globals())

Reload()
g()
The call to g() also fails. But the following works:
execfile('routines.py')
g()
As #Bakuriu said, importing is much preferred. Ignoring that, what you want is
def Reload():
    execfile('routines.py', globals())
Let's clarify your example to show why it does not work.
# Setup the namespace to use for execfile
global_dict = {}
local_dict = globals()
execfile('routines.py', global_dict, local_dict)
g()  # raises NameError
Since you are passing two different dicts to execfile, the file is executed as if it were in a class definition (from the docs). This means your functions are defined in local_dict but not global_dict.
When you then call g(), it is executed using global_dict as its globals and a fresh, empty local dict. Since neither global_dict nor the new locals contains f, we get a NameError. By instead calling execfile('routines.py', globals()), we are using global_dict = globals() and local_dict = globals(), so f is defined in g's globals.
EDIT:
You noticed that local_dict has both f and g, but global_dict does not in the second example. Defining any variable without explicitly marking it global will always make a local variable; this applies to modules too! It just so happens that normally a module has locals() == globals(); however, we broke this standard by using different local and global dicts. This is what I meant when I said "the file is executed as if it were in a class definition".
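To make that concrete, here is a minimal Python 2 sketch (assuming the routines.py from the question is in the current directory). The one-dict call leaves f reachable from g's globals; the two-dict call does not.

def reload_shared():
    # one dict: routines.py runs with globals() as both globals and locals,
    # so f and g land in this module's globals and g() can find f
    execfile('routines.py', globals())

def reload_split():
    # two dicts: f and g are stored in the locals dict (globals() here), but
    # their __globals__ is the empty dict, so calling g() raises NameError
    execfile('routines.py', {}, globals())

reload_shared()
g()             # works
reload_split()
g()             # NameError: global name 'f' is not defined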

Python - get a list of all functions in the current module; inspecting the current module does not work?

I have the following code:
import sys
import inspect

fset = [obj for name, obj in inspect.getmembers(sys.modules[__name__]) if inspect.isfunction(obj)]

def func(num):
    pass

if __name__ == "__main__":
    print(fset)
prints
[]
however this
import sys
import inspect

def func(num):
    pass

fset = [obj for name, obj in inspect.getmembers(sys.modules[__name__]) if inspect.isfunction(obj)]

if __name__ == "__main__":
    print(fset)
prints
[<function func at 0x7f35c29383b0>]
So how can fset be a list of all functions in the current module when fset is defined above all of the functions?
EDIT 1: What I am trying to do is
def testall(arg):
    return any(f(arg) for f in testfunctions)

def test1(arg):
    # code here
    # may call testall but won't call any other test*
    pass

def test2(arg):
    # code here
    # may call testall but won't call any other test*
    pass
More test functions may be added in the future. That's the reason for fset/testfunctions.
The approach from your edit works just fine:
def testall(arg):
    testfunctions = [obj for name, obj in inspect.getmembers(sys.modules[__name__])
                     if (inspect.isfunction(obj) and
                         name.startswith('test') and name != 'testall')]
    return any(f(arg) for f in testfunctions)

def test1(arg):
    # code here
    # may call testall but won't call any other test*
    pass
In this case, testfunctions isn't evaluated until testall is called, so there's no problem here—by that time, all top-level module code (including the test1 definition) will have been evaluated, so testfunctions will get all of the top-level functions. (I'm assuming here that testall or test1 is being called from an if __name__ == '__main__' block at the bottom of the module, or another script is doing import tests; tests.test1(10), or something similar.)
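For instance, a small usage sketch of that assumption, placed at the bottom of the same module:

if __name__ == '__main__':
    # the getmembers() lookup inside testall only happens now, after
    # test1 (and any other test*) have already been defined above
    print(testall(10))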
In fact, even if you explicitly named test1 and test2, there would be no problem:
def testall(arg):
    testfunctions = (test1,)
    return any(f(arg) for f in testfunctions)

def test1(arg):
    # code here
    # may call testall but won't call any other test*
    pass
Again, test1 is already defined by the time you call testall, so everything is fine.
If you want to understand why this works, you have to understand the stages here.
When you import a module, or run a top-level script, the first stage is compilation (unless there's already a cached .pyc file). The compiler doesn't need to know what value a name has, just whether it's local or global (or a closure cell), and it can already tell that sys and inspect and test1 are globals (because you don't assign to them in testall or in an enclosing scope).
Next, the interpreter executes the compiled bytecode for the top-level module, in order. This includes executing the function definitions. So, testall becomes a function, then test1 becomes a function, then test2 becomes a function. (A function is really just the appropriate compiled code, with some extra stuff attached, like the global namespace it was defined in.)
Later, when you call the testall function, the interpreter executes the function. This is when the list comprehension (in the first version) or the global name lookup (in the second) happens. Since the function definitions for test1 and test2 have already been evaluated and bound to global names in the module, everything works.
What if you instead later call test1, which calls testall? No problem. The interpreter executes test1, which has a call to testall, which is obviously already defined, so the interpreter calls that, and the rest is the same as in the previous paragraph.
So, what if you put a call to testall or test1 in between the test1 and test2 definitions? In that case, test2 wouldn't have been defined yet, so it would not appear in the list (first version), or would raise a NameError (second version). But as long as you don't do that, there's no problem. And there's no good reason to do so.
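A tiny illustration of that pitfall, with hypothetical test bodies:

import sys
import inspect

def testall(arg):
    testfunctions = [obj for name, obj in inspect.getmembers(sys.modules[__name__])
                     if (inspect.isfunction(obj) and
                         name.startswith('test') and name != 'testall')]
    return any(f(arg) for f in testfunctions)

def test1(arg):
    return arg > 0

print(testall(5))   # only test1 is visible here; test2 does not exist yet

def test2(arg):
    return arg < 100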
If you're worried about the horrible performance cost of computing testfunctions every time you call testall… Well, first, that's a silly worry; how many times are you going to call it? Are your functions really so fast that the time to call and filter getmembers even shows up on the radar? But if it really is a worry, just cache the value in your favorite of the usual ways (mutable default, private global, function attribute, …):
def testall(arg, _functions_cache=[]):
    if not _functions_cache:
        _functions_cache.extend([…])
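Here is a filled-in sketch of that caching idea; the getmembers() filter reused inside the elided list is an assumption based on the earlier code in this answer:

import sys
import inspect

def testall(arg, _functions_cache=[]):
    # the mutable default acts as a per-function cache: the filter runs on
    # the first call only, and later calls reuse the cached list
    if not _functions_cache:
        _functions_cache.extend(
            obj for name, obj in inspect.getmembers(sys.modules[__name__])
            if (inspect.isfunction(obj) and
                name.startswith('test') and name != 'testall'))
    return any(f(arg) for f in _functions_cache)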
It can't be. Function definitions are executed in Python. The functions don't exist until their definition is executed. Your fset variable can't be defined until after the functions are defined.
To exclude any imported functions this works:
import sys
import inspect

[obj for name, obj in inspect.getmembers(sys.modules[__name__])
 if (inspect.isfunction(obj) and
     name.startswith('test') and
     obj.__module__ == __name__)]

Declaring functions in Python after the call

$ cat declare_funcs.py
#!/usr/bin/python3
def declared_after():
    print("good declared after")

declared_after()
$ python3 declare_funcs.py
good declared after
Change the place of the call:
$ cat declare_funcs.py
#!/usr/bin/python3
declared_after()

def declared_after():
    print("good declared after")
$ python3 declare_funcs.py
Traceback (most recent call last):
  File "declare_funcs.py", line 4, in <module>
    declared_after()
NameError: name 'declared_after' is not defined
Is there a way to declare only the header of a function, like in C/C++?
For example:
#!/usr/bin/python3
def declared_after()  # a declaration of the function defined further below
declared_after()

def declared_after():
    print("good declared after")
I found this: Declare function at end of file in Python.
Anyway, that approach introduces another function at the beginning as a wrapper, and this wrapper must be called after the declaration of the wrapped function, so it is not a way out. Is there a more elegant, truly Pythonic way?
You can't forward-declare functions in Python. It doesn't make a lot of sense to do so, because Python is dynamically typed. You could do something silly like this, and what would you expect it to do?
foo = 3
foo()

def foo():
    print "bar"
Obviously, you are trying to __call__ the int object for 3. It's absolutely silly.
You ask if you can forward-declare like in C/C++. Well, you typically don't run C through an interpreter. However, although Python is compiled to bytecode, the python3 program is an interpreter.
Forward declaration in a compiled language makes sense because you are simply establishing a symbol and its type, and the compiler can run through the code several times to make sense of it. When you use an interpreter, however, you typically can't have that luxury, because you would have to run through the rest of the code to find the meaning of that forward declaration, and run through it again after having done that.
You can, of course, do something like this:
foo = lambda: None
foo()

def foo():
    print "bar"
But you instantiated foo nonetheless. Everything has to point to an actual, existing object in Python.
This doesn't apply to def or class statements, though. These create a function or class object, but they don't execute the code inside yet. So, you have time to instantiate things inside them before their code runs.
def foo():
    print bar()

# calling foo() won't work yet because you haven't defined bar()

def bar():
    return "bar"

# now it will work
The difference was that you simply created function objects with the variable names foo and bar representing them respectively. You can now refer to these objects by those variable names.
With regard to the way that Python is typically interpreted (in CPython) you should make sure that you execute no code in your modules unless they are being run as the main program or unless you want them to do something when being imported (a rare, but valid case). You should do the following:
Put code meant to be executed into function and class definitions.
Unless the code only makes sense to be executed in the main program, put it in another module.
Use if __name__ == "__main__": to create a block of code which will only execute if the program is the main program.
In fact, you should do the third in all of your modules. You can simply write this at the bottom of every file which you don't want to be run as a main program:
if __name__ == "__main__":
    pass
This prevents anything from happening if the module is imported.
Python doesn't work that way. The def is executed in sequence, top-to-bottom, with the remainder of the file's contents. You cannot call something before it is defined as a callable (e.g. a function), and even if you had a stand-in callable, it would not contain the code you are looking for.
This, of course, doesn't mean the code isn't compiled before execution begins—in fact, it is. But it is when the def is executed that declared_after is actually assigned the code within the def block, and not before.
Any tricks you pull to sort-of achieve your desired effect must have the effect of delaying the call to declared_after() until after it is defined, for example, by enclosing it in another def block that is itself called later.
One thing you can do is enclose everything in a main function:
def main():
    declared_after()

def declared_after():
    print("good declared after")

main()
However, the point still stands that the function must be defined prior to calling. This only works because main is called AFTER declared_after is defined.
As zigg wrote, Python files are executed in order from top to bottom, so even if you could "declare" the variable beforehand, the actual function body would only be attached after the point where the function is called.
The usual way to solve this is to just have a main function where all your standard execution stuff happens:
def main():
    # do stuff
    declared_after()

def declared_after():
    pass

main()
You can then also combine this with the __name__ == '__main__' idiom to make the function only execute when you are executing the module directly:
def main():
    # do stuff
    declared_after()

def declared_after():
    pass

if __name__ == '__main__':
    main()

Calling execfile() in a custom namespace executes code in the '__builtin__' namespace

When I call execfile without passing the globals or locals arguments it creates objects in the current namespace, but if I call execfile and specify a dict for globals (and/or locals), it creates objects in the __builtin__ namespace.
Take the following example:
# exec.py
def myfunc():
    print 'myfunc created in %s namespace' % __name__
exec.py is execfile'd from main.py as follows.
# main.py
print 'execfile in global namespace:'
execfile('exec.py')
myfunc()
print

print 'execfile in custom namespace:'
d = {}
execfile('exec.py', d)
d['myfunc']()
When I run main.py from the command line I get the following output:
execfile in global namespace:
myfunc created in __main__ namespace
execfile in custom namespace:
myfunc created in __builtin__ namespace
Why is it being run in __builtin__ namespace in the second case?
Furthermore, if I then try to run myfunc from __builtins__, I get an AttributeError. (This is what I would hope happens, but then why is __name__ set to __builtin__?)
>>> __builtins__.myfunc()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'myfunc'
Can anyone explain this behaviour?
Thanks
First off, __name__ is not a namespace; it's a reference to the name of the module it belongs to, i.e. somemod.py -> somemod.__name__ == 'somemod'.
The exception to this is when you run a module as an executable from the command line; then __name__ is '__main__'.
In your example there is a lucky coincidence that the module being run as main is also named main.
execfile executes the contents of the file WITHOUT importing it as a module. As such, __name__ doesn't get set, because it's not a module; it's just an executed sequence of code.
The execfile function is similar to the exec statement. If you look at the documentation for exec you'll see the following paragraph that explains the behavior.
As a side effect, an implementation may insert additional keys into the dictionaries given besides those corresponding to variable names set by the executed code. For example, the current implementation may add a reference to the dictionary of the built-in module __builtin__ under the key __builtins__ (!).
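A minimal Python 2 sketch of what that implies, assuming the exec.py from the question is on disk: the custom dict gains a __builtins__ key but no __name__ key, so the __name__ lookup inside exec.py falls through to the __builtin__ module, whose own __name__ is '__builtin__'.

d = {}
execfile('exec.py', d)
print '__builtins__' in d   # True: inserted by the implementation
print '__name__' in d       # False: so __name__ inside exec.py resolves via
                            # the builtins module and prints '__builtin__'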
Edit: I now see that my answer applies to one possible interpretation of the question title. My answer does not apply to the actual question asked.
As an aside, I prefer using __import__() over execfile:
module = __import__(module_name)
value = module.__dict__[function_name](arguments)
This also works well when adding to the PYTHONPATH, so that modules in other directories can be imported:
sys.path.insert(position, directory)
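Putting those two snippets together, a small sketch in which '/some/dir', mymodule and myfunc are hypothetical placeholders:

import sys

sys.path.insert(0, '/some/dir')        # make the directory importable
module = __import__('mymodule')        # import the module by name at runtime
value = module.__dict__['myfunc']()    # look up the function by name and call it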
