How to log inputs to a function inside a package in Python

While using a profiler to find where most of the execution time in my Python code is spent, I found that it is inside a package used by the code. A function in this package is called hundreds of times with different input arguments, and in total this function takes the most time to execute.
So I want to implement some caching, so that if the same parameters are passed, I can reuse the already computed output from the cache. But first I want to check whether the same parameters are actually being passed multiple times at all.
Is there any Python-level configuration I can enable so that I can get the arguments passed to the function on each call?
I am not allowed to make any changes to this package (Package1), so only something enabled outside the package (like a debug mode) will help.
# Package1/module1.py
def function1():
    for i in range(10000):
        # Want to log the arguments passed to function2
        # on each iteration
        retvalue = function2(arg1, arg2, arg3)

# My code
package1.module1.function1()

You can use functools.cache to cache the values:

from functools import cache

@cache
def cached_function2(*args):
    return function2(*args)  # function2 imported from the module you can't change

Instead of logging the input args, you can rerun the profiler to see whether the runtime has improved. If it has, you can be sure that some calls are duplicates.
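If you do want to see exactly which arguments are being passed, without touching the package, you can also wrap the function from the outside before calling it. A minimal sketch, assuming the inner function is importable as package1.module1.function2 as in the pseudocode above:

import functools
import logging
import package1.module1

logging.basicConfig(filename='function2_args.log', level=logging.INFO)

_original = package1.module1.function2

@functools.wraps(_original)
def _logged(*args, **kwargs):
    # Record every set of arguments so duplicates can be spotted later
    logging.info('function2 called with args=%r kwargs=%r', args, kwargs)
    return _original(*args, **kwargs)

# Replace the module attribute before function1 runs
package1.module1.function2 = _logged
package1.module1.function1()

This only works if function1 looks function2 up in module1's namespace at call time, which is the case for ordinary module-level functions; if function2 actually lives in another module, patch it there instead.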


Calling multiple Python scripts from Python with a predefined environment

Probably related to: globals and locals in python exec(), Python 2: How to debug code injected by the exec block, and How to get local variables updated, when using the `exec` call?
I am trying to develop a test framework for our desktop applications which uses click-bot-like functions.
My goal was to enable non-programmers to write small scripts which could work as test scripts. So my idea is to structure the test scripts in files like:
tests-folder
| -> 01-first-test.py
| -> 02-second-test.py
| ... etc
| -> fixture.py
And then just execute these scripts in alphabetical order. However, I also wanted to have fixtures which would define functions, classes, and variables and make them available to the different scripts without the scripts having to import the fixture explicitly. If that works, I could also extend the approach to two or more directory levels.
I could get it working-ish with some hacking around, but I am not entirely convinced. I have a test_sequence.py which looks like this:
from pathlib import Path
from copy import deepcopy

from my_module.test import Failure

def run_test_sequence(test_dir: str):
    error_occurred = False
    fixture = {
        'some_pre_defined_variable': 'this is available in all scripts and fixtures',
        'directory_name': test_dir,
    }

    # Check if fixture.py exists and load that first
    fixture_file = Path(test_dir) / 'fixture.py'
    if fixture_file.exists():
        with open(fixture_file.absolute(), 'r') as code:
            exec(code.read(), fixture, fixture)

    # Go over all files in the test sequence directory and execute them
    for test_file in sorted(Path(test_dir).iterdir()):
        if test_file.name == 'fixture.py':
            continue

        # Make a deepcopy, so scripts cannot influence one another
        fixture_copy = deepcopy(fixture)
        fixture_copy.update({
            'some_other_variable': 'this is available in all scripts but not in fixture'
        })

        try:
            with open(test_file.absolute(), 'r') as code:
                exec(code.read(), fixture_copy, fixture_copy)
        except Failure:
            error_occurred = True

    return error_occurred
This iterates over all files in the directory tests-folder and executes them in order (with fixture.py first). It also makes the local variables, functions and classes from fixture.py available to each test-script.
A test script could then just be arbitrary code that will be executed and if it raises my custom Failure exception, this will be noted as a failed test.
The whole sequence is started with a script that does:

from my_module.test_sequence import run_test_sequence

if __name__ == '__main__':
    exit(run_test_sequence('tests-folder'))
This mostly works.
What it cannot do, and what leaves me unsatisfied with this approach:
I cannot debug the scripts themselves. Since the code is loaded as a string and then interpreted, breakpoints inside the test scripts are not recognized.
Calling fixture functions behaves oddly. When I define a function in fixture.py like:

from my_hello_module import print_hello

def printer():
    print_hello()

I will receive a message during execution that print_hello is undefined, because the variables/modules/etc. in the scope surrounding printer are lost.
Stack traces are useless. On failure it shows the stack trace, but of course it only shows my line which says exec(...) and the insides of that function, none of the code that has been loaded.
I am sure there are other drawbacks, that I have not found yet, but these are the most annoying ones.
I also tried to find a solution through __import__ but I couldn't get it to inject my custom locals or globals into the imported script.
Is there a solution that I am too inexperienced to find, or another built-in Python function that actually does what I am trying to do? Or is there no way to achieve this, and should I rather have each test script import the fixture and the file/directory names itself? I want those scripts to have as few dependencies and as little Python boilerplate as possible. Ideally they are just:
from my_module.test import *

click(x, y, LEFT)
write('admin')
press('tab')
write('password')
press('enter')

if text_on_screen('Login successful'):
    succeed('Test successful')
else:
    fail('Could not login')
Additional note: I think I had the debugger working when I still used execfile, but that is not available in Python 3 environments.
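One way to soften the stack-trace and breakpoint problems described above is to compile the source with its real filename before passing it to exec; tracebacks and most debuggers then resolve lines against the actual file. A minimal sketch of that change inside the loop above:

with open(test_file.absolute(), 'r') as code:
    # compile() attaches the real filename, so tracebacks point at the
    # test script instead of the exec(...) call site
    compiled = compile(code.read(), str(test_file.absolute()), 'exec')
    exec(compiled, fixture_copy, fixture_copy)

This does not address the fixture-scoping issue, only the diagnostics.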

How can I call a python function from inside an AHK script?

How can I run a python function from an AHK script? If it's possible, how would I:
Pass arguments to the python function?
Return data from the python function back to my running AHK script?
The only relevant information I could find was this answer; however, I could not manage to implement it in my own script.
My best attempt at implementing this is the following snippet of code:
e:: ; nothing yet, because RunWait is the command recommended in the question I linked, so does that mean I have to do a RunWait every time I press e?
There is no native way to do this, as the only interaction AHK can have with Python is running an entire script. However, you can modify the Python script to accept arguments, and then the two scripts can interact (as in the linked question).
The solution is as follows: similar to the question you linked, set up the Python script so that it takes the function name and the function parameters as arguments, then have it run the function with those arguments.
Similar to my answer on the linked question, you can use sys.argv to do this:
# Import the arguments from the sys module
from sys import argv

# If any arguments were passed (the first argument is the script name
# itself, so check for > 1 instead of > 0)
if len(argv) > 1:
    # If argv is longer than 2 (the script name and the function name),
    # set args to everything after the first two items in argv;
    # otherwise set args to None
    args = argv[2:] if len(argv) > 2 else None
    # Look the function up by name (the second item in argv), then run it
    # with the arguments if there are any (the star unpacks the list into
    # separate parameters) or without arguments otherwise
    func = globals()[argv[1]]
    func(*args) if args else func()
# If there is code here, it will also execute. If you want to only run
# the function, call exit() to quit the script.
Then, from AHK, all you would need to do is run the RunWait or Run command (depending on whether you want to wait for the function to finish) to call the function.
RunWait, script.py "functionName" "firstArgument" "secondArgument"
The second part of your question is trickier. In the question you linked there is a nice explanation of how to return integer values (TL;DR: use sys.exit(integer_value)), but if you want to return other kinds of data, strings for example, the problem becomes more awkward. At that point I think the best solution is to write the output to a file, then have AHK read the file after the Python script is done executing. However, if you are already going down the "write to a file, then read it" route, you might as well use that method from the start to pass the function name and arguments to the Python script as well.
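As a sketch of that file-based variant (the file name result.txt is purely illustrative), the Python script above could end with:

# Write whatever the function returned to a file that the
# AHK script reads after RunWait finishes
result = func(*args) if args else func()
with open('result.txt', 'w') as f:
    f.write(str(result))

On the AHK side, a RunWait followed by FileRead would then pick up the result.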

Python NameError when importing from another file

I have a function in Maya that imports other functions and creates a shelf with buttons for specific functions. One of those functions uses a scriptJob command and works fine if I import its file manually, but it gives a NameError when it is run through the shelf button.
This is an example of the script.
myShelf.py file:

import loopingFunction
loopingFunction.runThis()

loopingFunction.py file:

import maya.cmds as mc

def setSettings():
    # have some settings set before running this
    runThis()

def runThis():
    print "yay this ran"
    evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',runThis"]))
If I run this through the shelf function, I get a NameError saying runThis is not defined, and if I try modifying the scriptJob command to loopingFunction.runThis, I get a NameError saying loopingFunction is not defined (not sure if using loopingFunction.runThis is even correct, to be honest).
Not sure how I can get around this without manually importing the function rather than going through the shelf file.
Using string references for callback functions like this often leads to scope problems. (More on that, and why not to use strings, here)
If you pass the function directly to the callback as an object, instead of using a string, it should work correctly as long as you have actually imported the code.
In this case you want an evalDeferred (do you actually need it?), so it helps to add a little function around the actual code so that the scriptJob creation actually happens later; otherwise it will get evaluated before the deferral is scheduled.
def runThis():
    print "callback ran"

def do_scriptjob():
    cmds.scriptJob(ro=True, ac=('someMesh.outMesh', runThis))

cmds.evalDeferred(do_scriptjob)
In both runThis and do_scriptjob I did not add the parens: we are letting evalDeferred and scriptJob have the function objects to call when they are ready. Using the parens would pass the results of the functions, which is not what you want here.
Incidentally, it looks like you're trying to create a new copy of the scriptJob inside the scriptJob itself. It'd be better to just drop the runOnce flag and leave the scriptJob lying around; if something in runThis actually affected the someMesh.outMesh attribute, your Maya would go into an infinite loop. I did not change the structure in my example, but I would not recommend writing this kind of self-replicating code if you can possibly avoid it.
You have a problem with nested/Maya scope variables:

mc.scriptJob(ro=True, ac=["'someMesh.outMesh',runThis"])

This callback is a Maya command string that is evaluated in the main Maya scope (like a global). As your function has a namespace from the import ('loopingFunction'), you need to include it in the string command. If your shelf code is:

import loopingFunction
loopingFunction.runThis()

you should write:

evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',loopingFunction.runThis"]))
If you want something more general, you can do:

def runThis(ns=''):
    print "yay this ran"
    if ns != '':
        ns += '.'
    evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',{}runThis".format(ns)]))
and then run in the shelf:

import loopingFunction
loopingFunction.runThis('loopingFunction')

Like this you can write any form of namespace:

import loopingFunction as loopF
loopF.runThis('loopF')

Removing cached files after a pytest run

I'm using joblib.Memory to cache expensive computations when running tests with py.test. The code I'm using reduces to the following:
from joblib import Memory

memory = Memory(cachedir='/tmp/')

@memory.cache
def expensive_function(x):
    return x**2  # some computationally expensive operation here

def test_other_function():
    input_ds = expensive_function(x=10)
    ## run some tests with input_ds
which works fine. I'm aware this could possibly be done more elegantly with the tmpdir_factory fixture, but that's beside the point.
The issue I'm having is how to clean up the cached files once all the tests have run:
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
I wouldn't go down that path. Global mutable state is something best avoided, particularly in testing.
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, add an auto-used session-scoped fixture into your project-level conftest.py file:
# conftest.py
import pytest

@pytest.yield_fixture(autouse=True, scope='session')
def test_suite_cleanup_thing():
    # setup
    yield
    # teardown - put your command here
The code after the yield will be run - once - at the end of the test suite, regardless of pass or fail.
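For example, the teardown could clear the joblib cache directly. A minimal sketch, assuming the Memory object from the question is importable (the module name computations here is hypothetical):

# conftest.py
import pytest

from computations import memory  # the joblib.Memory instance used by the tests

@pytest.fixture(autouse=True, scope='session')
def cleanup_joblib_cache():
    yield
    # joblib.Memory exposes clear(), which removes the cached files
    memory.clear()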
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
There are actually a couple of ways to do that, each with pros and cons. I think this SO answer sums them up quite nicely - https://stackoverflow.com/a/22793013/3023841 - but for example:
def pytest_namespace():
    return {'my_global_variable': 0}

def test_namespace():
    assert pytest.my_global_variable == 0
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, py.test has teardown functions available:
def setup_module(module):
    """ setup any state specific to the execution of the given module."""

def teardown_module(module):
    """ teardown any state that was previously setup with a setup_module
    method.
    """

Common variables in modules

I have three python files, let's call them master.py, slave1.py and slave2.py. Now slave1.py and slave2.py do not have any functions, but are required to do two different things using the same input (say the variable inp).
What I'd like to do is call both slave programs from master and specify the single input variable inp in master, so I don't have to do it twice, and also so I can change the outputs of both slaves from one master program, etc.
I'd like to keep the code of slave1.py and slave2.py separate, so I can debug them individually if required, but when I try to do:
#! /usr/bin/python
# This is master.py
import slave1
import slave2
inp = #some input
both slave1 and slave2 run before I can change the input. As I understand it, the way Python imports modules is to execute them first. But is there some way to delay executing them so I can specify the common input? Or any other way to specify the input for both files from one place?
EDIT: slave1 and slave2 perform two different simulations given a particular initial condition. Since the outputs of the two are of the same kind, I'd like to display them in a similar manner, as well as have control over which files to write the simulated data to. So I figured importing both of them into a master file was the easiest way to do that.
Write the code in your slave modules as functions, import the functions, then call the functions from master with whatever input you need. If you need to have more stateful information, consider constructing an object.
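A minimal sketch of the stateful-object variant suggested above (all names are illustrative):

# slave1.py
class Simulation:
    def __init__(self, inp):
        self.inp = inp
        self.results = []

    def run(self):
        # run the simulation with self.inp and keep the output around
        self.results.append(self.inp ** 2)

# master.py
from slave1 import Simulation

sim = Simulation(inp=10)
sim.run()
print(sim.results)

The object holds the input and the accumulated results, so master.py can decide later how to display them or where to write them.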
You can do imports at any time:
inp = #some input
import slave1
import slave2
Note that this is generally considered bad design - you would be better off making the modules contain a function, rather than having code run just because the module is imported.
It looks like the architecture of your program is not really optimal. I think you have two files that execute immediately when you run them with python slave1.py. That is fine for scripting, but when you import them you run into trouble, as you have experienced.
Best is to wrap the code in your slave files in a function (as suggested by @sr2222) and call these functions explicitly from master.py:
slave1.py / slave2.py:

def run(inp):
    # your code
    pass

master.py:

import slave1, slave2

inp = "foo"
slave1.run(inp)
slave2.run(inp)

If you still want to be able to run the slaves independently, you could add something like this at the end of each slave:

if __name__ == "__main__":
    inp = "foo"
    run(inp)
