This is probably related to: globals and locals in python exec(), Python 2: How to debug code injected by the exec block, and How to get local variables updated, when using the `exec` call?
I am trying to develop a test framework for our desktop applications that uses click-bot-like functions.
My goal is to enable non-programmers to write small scripts that work as test scripts. My idea is to structure the test scripts as files like:
tests-folder
| -> 01-first-test.py
| -> 02-second-test.py
| ... etc
| -> fixture.py
And then just execute these scripts in alphabetical order. However, I also want to have fixtures that define functions, classes, and variables and make them available to the different scripts without the scripts having to import the fixture explicitly. If that works, I could also extend the approach to two or more directory levels.
I could get it working-ish with some hacking around, but I am not entirely convinced. I have a test_sequence.py which looks like this:
from pathlib import Path
from copy import deepcopy

from my_module.test import Failure


def run_test_sequence(test_dir: str):
    error_occurred = False
    fixture = {
        'some_pre_defined_variable': 'this is available in all scripts and fixtures',
        'directory_name': test_dir,
    }

    # Check if fixture.py exists and load that first
    fixture_file = Path(test_dir) / 'fixture.py'
    if fixture_file.exists():
        with open(fixture_file.absolute(), 'r') as code:
            exec(code.read(), fixture, fixture)

    # Go over all files in the test sequence directory and execute them
    for test_file in sorted(Path(test_dir).iterdir()):
        if test_file.name == 'fixture.py':
            continue

        # Make a deepcopy, so scripts cannot influence one another
        fixture_copy = deepcopy(fixture)
        fixture_copy.update({
            'some_other_variable': 'this is available in all scripts but not in fixture',
        })

        try:
            with open(test_file.absolute(), 'r') as code:
                exec(code.read(), fixture_copy, fixture_copy)
        except Failure:
            error_occurred = True

    return error_occurred
This iterates over all files in the directory tests-folder and executes them in order (with fixture.py first). It also makes the local variables, functions and classes from fixture.py available to each test-script.
A test script can then just be arbitrary code that will be executed, and if it raises my custom Failure exception, this will be noted as a failed test.
The whole sequence is started with a script that does
from my_module.test_sequence import run_test_sequence

if __name__ == '__main__':
    exit(run_test_sequence('tests-folder'))
This mostly works.
What it cannot do, and what leaves me unsatisfied with this approach:
I cannot debug the scripts themselves. Since the code is loaded as a string and then interpreted, breakpoints inside the test scripts are not recognized.
Calling fixture functions behaves oddly. When I define a function in fixture.py like:

from my_hello_module import print_hello

def printer():
    print_hello()

I receive a message during execution that print_hello is undefined, because the variables/modules/etc. in the scope surrounding printer are lost (see the sketch after this list).
Stack traces are useless. On failure it shows the stack trace, but of course it only shows my line that says exec(...) and the internals of that call, none of the code that was loaded.
I am sure there are other drawbacks that I have not found yet, but these are the most annoying ones.
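The fixture-function drawback, at least, is a known exec() quirk: when the globals and locals arguments are two different dicts, the executed code behaves like a class body, so top-level names land in the locals dict while functions defined there only look names up in the globals dict. A minimal, framework-independent demonstration of how such an "undefined" message can arise (using math.sqrt as a stand-in import):

ns_globals = {}
ns_locals = {}

source = """
from math import sqrt

def printer():
    return sqrt(2)
"""

# With two distinct dicts, 'sqrt' is bound in ns_locals,
# but printer() resolves names through ns_globals only.
exec(source, ns_globals, ns_locals)
ns_locals['printer']()  # NameError: name 'sqrt' is not defined

Passing the same dict as both arguments (or passing only a globals dict) avoids this particular trap.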
I also tried to find a solution through __import__, but I couldn't get it to inject my custom locals or globals into the imported script.
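For what it's worth, the standard library's runpy module comes close to what the exec() approach above is emulating: runpy.run_path() executes a file with an optional dict of initial globals and compiles it under its real filename, so stack traces point at the test script's own lines (and file-based breakpoints have a better chance of being honored). A sketch of how the loop body might use it, reusing the names from the code above:

import runpy

# inside the loop over test files, replacing the exec(...) call:
result_globals = runpy.run_path(
    str(test_file),             # compiled with the real filename
    init_globals=fixture_copy,  # fixture names are visible to the script
)

Exceptions still propagate, so the except Failure handler keeps working; run_path() also accepts run_name='__main__' if the scripts inspect __name__.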
Is there a solution that I am too inexperienced to find, or another built-in Python function that actually does what I am trying to do? Or is there no way to achieve this, and should I rather have each test script import the fixture and the file/directory names itself? I want those scripts to have as few dependencies and as little Python-specific code as possible. Ideally they are just:
from my_module.test import *

click(x, y, LEFT)
write('admin')
press('tab')
write('password')
press('enter')

if text_on_screen('Login successful'):
    succeed('Test successful')
else:
    fail('Could not login')
Additional note: I think I had the debugger working when I still used execfile, but that is not available in Python 3 environments.
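The usual Python 3 stand-in for execfile compiles the source with its filename before handing it to exec, which is also what restores meaningful file names in stack traces; a minimal sketch:

def execfile(path, globals_dict):
    """Rough Python 3 replacement for the removed execfile() builtin."""
    with open(path, 'r') as f:
        source = f.read()
    # compiling with the real path makes tracebacks point at the file
    exec(compile(source, path, 'exec'), globals_dict)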
Related
I have a function in Maya that imports other functions and creates a shelf with buttons for specific functions. One of those functions uses a scriptJob command and works fine if I import its file manually, but it gives a NameError when I run it through the shelf button.
This is an example of the script
myShelf.py file:
import loopingFunction
loopingFunction.runThis()
loopingFunction.py file:
import maya.cmds as mc

def setSettings():
    # have some settings set before running this
    runThis()

def runThis():
    print "yay this ran"
    evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',runThis"]))
If I run this through the shelf function, I get a NameError saying runThis is not defined, and if I try modifying the scriptJob command to loopingFunction.runThis, I get a NameError saying loopingFunction is not defined (not sure if using loopingFunction.runThis is even correct, to be honest).
Not sure how I can get around this without having to import the function manually rather than through the shelf file.
Using string references for callback functions like this often leads to scope problems. (More on that, and why not to use strings, here)
If you pass the function directly to the callback as an object, instead of using a string, it should work correctly as long as you have actually imported the code.
In this case you want an evalDeferred (do you actually need it?), so it helps to add a little function around the actual code so that the scriptJob creation happens later; otherwise it will be evaluated before the deferral is scheduled.
def runThis():
    print "callback ran"

def do_scriptjob():
    cmds.scriptJob(ro=True, ac=('someMesh.outMesh', runThis))

cmds.evalDeferred(do_scriptjob)
In both runThis and do_scriptjob I did not add the parens -- we are letting evalDeferred and scriptJob receive the function objects to call when they are ready. Adding the parens would pass the results of calling the functions, which is not what you want here.
Incidentally, it looks like you're trying to create a new copy of the scriptJob inside the scriptJob itself. It would be better to just drop the runOnce flag and leave the scriptJob registered -- as written, if something in runThis actually affected the someMesh.outMesh attribute, your Maya would go into an infinite loop. I did not change the structure in my example, but I would not recommend writing this kind of self-replicating code if you can possibly avoid it.
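A sketch of that non-replicating variant (the kill and force flags are standard scriptJob options; someMesh.outMesh is the attribute from the question):

import maya.cmds as cmds

_job_id = None

def runThis():
    print "callback ran"

def do_scriptjob():
    global _job_id
    # no ro=True: the job stays registered instead of re-creating itself
    _job_id = cmds.scriptJob(ac=('someMesh.outMesh', runThis))

cmds.evalDeferred(do_scriptjob)

def cleanup():
    # kill the job once it is no longer needed
    if _job_id is not None:
        cmds.scriptJob(kill=_job_id, force=True)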
You have a problem with nested/Maya scope variables:

mc.scriptJob(ro=True, ac=["'someMesh.outMesh',runThis"])

This line hands Maya a command string that is evaluated in the main Maya scope (like a global). Since your function lives in the 'loopingFunction' namespace after the import, you need to qualify it in the command string.
import loopingFunction
loopingFunction.runThis()
You should write
evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',loopingFunction.runThis"]))
If you want something more general, you can do:

def runThis(ns=''):
    print "yay this ran"
    if ns != '':
        ns += '.'
    evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',{}runThis".format(ns)]))
and then run from the shelf:
import loopingFunction
loopingFunction.runThis('loopingFunction')
This way you can write any form of namespace:
import loopingFunction as loopF
loopF.runThis('loopF')
I have a program that interacts with and changes block devices (/dev/sda and such) on Linux. I'm using various external commands (mostly commands from the fdisk and GNU fdisk packages) to control the devices. I have made a class that serves as the interface for most of the basic actions with block devices (for information like: What size is it? Where is it mounted? etc.)
Here is one such method querying the size of a partition:
import subprocess

def get_drive_size(device):
    """Returns the maximum size of the drive, in sectors.

    :device: the device identifier (/dev/sda and such)"""
    query_proc = subprocess.Popen(["blockdev", "--getsz", device],
                                  stdout=subprocess.PIPE)
    # blockdev returns the number of 512B blocks in a drive
    output, error = query_proc.communicate()
    exit_code = query_proc.returncode
    if exit_code != 0:
        # I have custom exceptions; this is slight pseudo-code
        raise Exception("Non-zero exit code", str(error, "utf-8"))
    return int(output)  # should always be valid
So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.
Should I try to test methods such as these? If so, how? I could create and mount image files for each test, but this seems like a lot of overhead and is probably error-prone itself. The methods expect block devices, so I cannot operate directly on image files in the file system.
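(For reference, Linux loop devices are one way to turn an image file into a real block device for a test; a rough fixture sketch, assuming losetup is available and the suite runs as root:)

import os
import subprocess
import tempfile

def make_loop_device(size_mb=64):
    """Create a sparse image file and attach it to a free loop device."""
    img = tempfile.NamedTemporaryFile(suffix=".img", delete=False)
    img.truncate(size_mb * 1024 * 1024)
    img.close()
    # losetup --find --show prints the device it allocated, e.g. /dev/loop3
    device = subprocess.check_output(
        ["losetup", "--find", "--show", img.name]).decode().strip()
    return device, img.name

def destroy_loop_device(device, image_path):
    subprocess.check_call(["losetup", "--detach", device])
    os.unlink(image_path)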
I could try mocking, as some answers suggest, but that feels inadequate: if I mock the Popen object, it seems like I start to test the implementation of the method rather than its output. Is that a correct assessment of proper unit-testing methodology in this case?
I am using Python 3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.
You should look into the mock module (in Python 3 it is part of the standard library as unittest.mock).
It enables you to run tests without needing to depend on any external resources, while giving you control over how the mocks interact with your code.
I would start with the docs on Voidspace.
Here's an example:
import unittest2 as unittest
import mock

from my_module import get_drive_size  # wherever the function under test lives


class GetDriveSizeTestSuite(unittest.TestCase):
    @mock.patch('path.to.original.file.subprocess.Popen')
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        # communicate() returns (stdout, stderr); returncode must be an int
        mock_popen.return_value.communicate.return_value = (b'1024', b'')
        mock_popen.return_value.returncode = 0
        self.assertEqual(1024, get_drive_size('some device'))
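One caveat worth adding: mock.patch has to target the name where it is looked up. If the module under test does from subprocess import Popen, you must patch 'module_under_test.Popen', since patching 'subprocess.Popen' would leave the already-imported reference untouched; with plain import subprocess, patching 'module_under_test.subprocess.Popen' as above works.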
Please excuse what I know is an incredibly basic question that I have nevertheless been unable to resolve on my own.
I'm trying to switch over my data analysis from Matlab to Python, and I'm struggling with something very basic: in Matlab, I write a function in the editor, and to use that function I simply call it from the command line, or within other functions. The function that I compose in the matlab editor is given a name at the function definition line, and it's generally best for the function name to match the .m file name to avoid confusion.
I don't understand how functions differ in Python, because I have not been successful translating the same approach there.
For instance, if I write a function in the Python editor (I'm using Python 2.7 and Spyder), simply saving the .py file and calling the function by its name from the Python terminal does not work; I get a "function not defined" error. However, if I execute the function's file within Spyder's editor (using the "run file" button), not only does the code execute properly, from that point on the function is also callable directly from the terminal.
So...what am I doing wrong? I fully appreciate that using Python isn't going to be identical to Matlab in every way, but it seems that what I'm trying to do isn't unreasonable. I simply want to be able to write functions and call them from the python command line, without having to run each and every one through the editor first. I'm sure my mistake here must be very simple, yet doing quite a lot of reading online hasn't led me to an answer.
Thanks for any information!
If you want to use functions defined in a particular file in Python you need to "import" that file first. This is similar to running the code in that file. Matlab doesn't require you to do this because it searches for files with a matching name and automagically reads in the code for you.
For example,
myFunction.py is a file containing
def myAdd(a, b):
    return a + b
In order to access this function from the Python command line or another file I would type
from myFunction import myAdd
And then during this session I can type
myAdd(1, 2)
There are a couple of ways of using import, see here.
You need to add a check for __main__ to your Python script:
def myFunction():
    pass

if __name__ == "__main__":
    myFunction()
then you can run your script from the terminal like this:
python myscript.py
Also, if your function is in another file, you need to import it:
from myFunctions import myFunction
myFunction()
Python doesn't have MATLAB's "one function per file" limitation. You can have as many functions as you want in a given file, and all of them can be accessed from the command line or from other functions.
Python also doesn't follow MATLAB's practice of always automatically making every function it can find usable all the time, which tends to lead to function name collisions (two functions with the same name).
Instead, Python uses the concept of a "module". A module is just a file (your .py file). That file can have zero or more functions, zero or more variables, and zero or more classes. When you want to use something from that file, you just import it.
So say you have a file 'mystuff.py':
X = 1
Y = 2

def myfunc1(a, b):
    pass  # do something

def myfunc2(c, d):
    pass  # do something
And you want to use it, you can just type import mystuff. You can then access any of the variables or functions in mystuff. To call myfunc2, you can just do mystuff.myfunc2(z, w).
What basically happens is that when you type import mystuff, it just executes the code in the file, and makes all the variables that result available from mystuff.<varname>, where <varname> is the name of the variable. Unlike in MATLAB, Python functions are treated like any other variable, so they can be accessed just like any other variable. The same is true with classes.
There are other ways to import, too, such as from mystuff import myfunc.
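The common variants side by side, using the mystuff module from above:

import mystuff                     # access as mystuff.myfunc1(...)
import mystuff as ms               # shorter alias: ms.myfunc1(...)
from mystuff import myfunc1        # bare name: myfunc1(...)
from mystuff import myfunc2 as f2  # renamed: f2(...)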
You run python programs by running them with
python program.py
In Python there is a reload function to reload an imported module, but is there a way to reload the currently running script without restarting it? This would be very helpful when debugging a script and changing the code live while it is running. In Visual Basic I remember a similar feature called "apply code changes", but I need it as a function call like refresh() that applies the code changes instantly.
This would work smoothly when an independent function in the script is modified and we need to immediately apply the code change without restarting the script.
In short: will reload(self) work?
reload(self) will not work, because reload() works on modules, not on live instances of classes. What you want is some logic external to your main application that checks whether it has to be reloaded. You also have to think about what is needed to re-create your application state upon reload.
Some hints in this direction:
Guido van Rossum once wrote this: xreload.py; it does a bit more than reload(). You would need a loop that checks for changes every x seconds and applies them.
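A rough sketch of such a loop; mymodule and its do_work() entry point are hypothetical stand-ins for your own code:

import importlib
import os
import time

import mymodule  # hypothetical module holding the logic to hot-reload

last_mtime = os.path.getmtime(mymodule.__file__)

while True:
    mtime = os.path.getmtime(mymodule.__file__)
    if mtime != last_mtime:
        last_mtime = mtime
        importlib.reload(mymodule)  # pick up edits saved to mymodule.py
    mymodule.do_work()
    time.sleep(1)  # check for changes roughly every second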
Also have a look at livecoding which basically does this. EDIT: I mistook this project for something else (which I didn't find now), sorry.
Perhaps this SO question will help you.
Perhaps you mean something like this:
import pdb
import importlib
from os.path import basename


def function():
    print("hello world")


if __name__ == "__main__":
    # import this module, but not as __main__, so everything other than this
    # if statement is executed
    mainmodname = basename(__file__)[:-3]
    module = importlib.import_module(mainmodname)

    while True:
        # reload the module to check for changes
        importlib.reload(module)
        # update the globals of __main__ with any new or changed
        # functions or classes in the reloaded module
        globals().update(vars(module))

        function()
        pdb.set_trace()
Run the code, then change the contents of function and enter c at the prompt to run the next iteration of the loop.
test.py
class Test(object):
    def getTest(self):
        return 'test1'
testReload.py
from test import Test

t = Test()
print t.getTest()

# change the return value in test.py, then:
import importlib

module = importlib.import_module(Test.__module__)
reload(module)
from test import Test

t = Test()
print t.getTest()
Intro
reload is for imported modules. The documentation for reload advises against reloading __main__:
Reloading sys, __main__, builtins and other key modules is not recommended.
To achieve similar behavior for your script you will need to re-execute it. For normal scripts this will also reset the global state. I propose a solution below.
NOTE
The solution I propose is very powerful and should only be used for code you trust. Automatically executing code from unknown sources can lead to a world of pain, and Python is not a good environment for sandboxing unsafe code.
Solution
To programmatically execute a python script from another script you only need the built-in functions open, compile and exec.
Here is an example function that will re-execute the script it is in:
def refresh():
    with open(__file__) as fo:
        source_code = fo.read()
    byte_code = compile(source_code, __file__, "exec")
    exec(byte_code)
The above will, under normal circumstances, also reset any global variables you might have. If you wish to keep those variables, you should check whether or not they have already been set. This can be done with a try-except block catching NameError exceptions, but that gets tedious, so I propose using a flag variable instead.
Here is an example using the flag variable:
in_main = __name__ == "__main__"
first_run = "flag" not in globals()

if in_main and first_run:
    flag = True
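Putting the two pieces together, a sketch of a script that keeps one piece of state (an illustrative counter) alive across refreshes; note it passes globals() to exec so re-executed top-level assignments land in the module namespace rather than in refresh()'s local scope:

# demo.py
first_run = "counter" not in globals()
if first_run:
    counter = 0  # initialized once; preserved across later refreshes

def refresh():
    with open(__file__) as fo:
        source_code = fo.read()
    # pass globals() explicitly so the re-executed script updates
    # the module namespace instead of refresh()'s local scope
    exec(compile(source_code, __file__, "exec"), globals())

counter += 1
print("script body has run {} time(s)".format(counter))

Run it with python -i demo.py, edit the file, then call refresh() at the interactive prompt: the edited body re-executes while counter keeps counting up.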
None of these answers did the job properly for me, so I put together something very messy and very non-pythonic to do the job myself. Even after running it for several weeks, I am finding small issues and fixing them. One issue will be if your PWD/CWD changes.
Warning: this is very ugly code. Perhaps someone will make it pretty, but it does work.
Not only does it create a refresh() function that properly reloads your script in a way that makes any exceptions display properly, it also creates refresh_<scriptname> functions for previously loaded scripts, just in case you need to reload those.
Next I would probably add a require portion, so that scripts can reload other scripts -- but I'm not trying to make node.js here.
First, the "one-liner" that you need to insert in any script you want to refresh.
with open(os.path.dirname(__file__) + os.sep + 'refresh.py', 'r') as f: \
    exec(compile(f.read().replace('__BASE__', \
        os.path.basename(__file__).replace('.py', '')).replace('__FILE__', \
        __file__), __file__, 'exec'))
And second, the actual refresh function which should be saved to refresh.py in the same directory. (See, room for improvement already).
def refresh(filepath=__file__, _globals=None, _locals=None):
    print("Reading {}...".format(filepath))
    if _globals is None:
        _globals = globals()
    _globals.update({
        "__file__": filepath,
        "__name__": "__main__",
    })
    with open(filepath, 'rb') as file:
        exec(compile(file.read(), filepath, 'exec'), _globals, _locals)

def refresh___BASE__():
    refresh("__FILE__")
Tested with Python 2.7 and 3.
Take a look at reload. You just need to install the plugin and use reload ./myscript.py, voilà.
If you are running in an interactive session, you could use IPython's autoreload:
autoreload reloads modules automatically before entering the execution of code typed at the IPython prompt.
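Enabling it takes two standard IPython magics:

%load_ext autoreload
%autoreload 2

Mode 2 reloads all modules (except those explicitly excluded) every time you execute code at the prompt.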
Of course this also works on module level, so you would do something like:
>>> import myscript
>>> myscript.main()
*do some changes in myscript.py*
>>> myscript.main()  # is now changed
Sorry for the beginner question, but I can't figure out cProfile (I'm really new to Python).
I can run it via my terminal with:
python -m cProfile myscript.py
But I need to run it on a web server, so I'd like to put the command within the script it profiles. How would I do this? I've seen stuff using terms like __init__ and __main__, but I don't really understand what those are.
I know this is simple, I'm just still trying to learn everything and I know there's someone who will know this.
Thanks in advance! I appreciate it.
I think you've been seeing ideas like this:
if __name__ == "__main__":
    # do something if this script is invoked
    # as python scriptname. Otherwise, gets ignored.
What happens is: when you call python on a script, that file gets an attribute __name__ set to "__main__" if it is the file being directly called by the python executable. Otherwise (if it is not directly called) it is imported.
Now, you can use this trick on your scripts if you need to, for example, assuming you have:
def somescriptfunc():
    # does something
    pass

if __name__ == "__main__":
    # do something if this script is invoked
    # as python scriptname. Otherwise, gets ignored.
    import cProfile
    cProfile.run('somescriptfunc()')
This changes your script. When imported, its member functions, classes etc can be used as normal. When run from the command-line, it profiles itself.
Is this what you're looking for?
From the comments I've gathered more is perhaps needed, so here goes:
If you're running a script from CGI, chances are it is of the form:
# do some stuff to extract the parameters
# do something with the parameters
# return the response.
To abstract that out, you can do this:
def do_something_with_parameters(param1, param2):
    pass

if __name__ == "__main__":
    import cProfile
    cProfile.run("do_something_with_parameters(param1='sometestvalue')")
Put that file on your Python path. When run by itself, it will profile the function you want profiled.
Now, for your CGI script, create a script that does:
import {insert name of script from above here}
# do something to determine parameter values
# do something with them *via the function*:
do_something_with_parameters(param1=..., param2=...)
# return something
So your CGI script just becomes a little wrapper for your function (which it is anyway), and your function is now self-profiling.
You can then profile the function using made up values on your desktop, away from the production server.
There are probably neater ways to achieve this, but it would work.
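If the plain cProfile.run() printout is too noisy, the standard pstats module can save and sort the results; a small sketch along the same lines (do_something_with_parameters stands in for the function from above):

import cProfile
import pstats

def do_something_with_parameters(param1, param2):
    return sorted([param2, param1] * 1000)

if __name__ == "__main__":
    # dump raw stats to a file instead of printing them
    cProfile.run("do_something_with_parameters('a', 'b')", "profile.out")
    # load, sort by cumulative time, and show the ten most expensive calls
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)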