I have a script in Maya that imports other modules and creates a shelf with buttons for specific functions. One of those functions runs a scriptJob command, and it works fine if I import the file manually rather than through the shelf button, but it raises a NameError when I run it from the shelf script.
This is an example of the script
myShelf.py file:
import loopingFunction
loopingFunction.runThis()
loopingFunction.py file:
import maya.cmds as mc
def setSettings():
    # have some settings set before running this
    runThis()

def runThis():
    print "yay this ran"
    evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',runThis"]))
If I run this through the shelf button, I get a NameError saying runThis is not defined, and if I modify the scriptJob command to use loopingFunction.runThis, I get a NameError saying loopingFunction is not defined (I'm not sure whether using loopingFunction.runThis is even correct, to be honest).
I'm not sure how to get around this without importing the module manually instead of through the shelf file.
Using string references for callback functions like this often leads to scope problems, which is a good reason to avoid strings for callbacks altogether.
If you pass the function directly to the callback as an object, instead of using a string, it should work correctly as long as you have actually imported the code.
In this case you're using an evalDeferred (do you actually need it?), so it helps to wrap the actual code in a little function so that the scriptJob creation really does happen later -- otherwise it gets evaluated before the deferral is scheduled.
import maya.cmds as cmds

def runThis():
    print "callback ran"

def do_scriptjob():
    cmds.scriptJob(ro=True, ac=('someMesh.outMesh', runThis))

cmds.evalDeferred(do_scriptjob)
In both cases I did not add parens after runThis and do_scriptjob -- we are handing evalDeferred and scriptJob the function objects to call when they are ready. Adding the parens would pass the results of calling the functions, which is not what you want here.
Incidentally, it looks like you're trying to create a new copy of the scriptJob inside the scriptJob itself. It'd be better to just drop the runOnce flag and leave the scriptJob lying around -- and if something in runThis ever actually affected the someMesh.outMesh attribute, your Maya would go into an infinite loop. I did not change the structure in my example, but I would not recommend writing this kind of self-replicating code if you can possibly avoid it.
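For illustration, a minimal sketch of the persistent variant (no runOnce flag), assuming the same someMesh.outMesh attribute and callback as above; keeping the job id around lets you remove the job later:

import maya.cmds as cmds

def runThis():
    # callback body goes here
    print("callback ran")

def start_watching():
    # without ro=True the job keeps firing on every attribute change;
    # hold on to the id so the job can be removed with cmds.scriptJob(kill=job_id)
    job_id = cmds.scriptJob(ac=('someMesh.outMesh', runThis))
    return job_id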
You have a problem with nested scopes and Maya's global scope.
mc.scriptJob(ro=True, ac=["'someMesh.outMesh',runThis"])
The callback in this line is a Maya command string that is evaluated in the main Maya scope (like a global).
Since your function lives in the namespace created by the import ('loopingFunction'), you need to spell that namespace out in the command string.
import loopingFunction
loopingFunction.runThis()
You should write
evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',loopingFunction.runThis"]))
If you want something more general, you can do:

def runThis(ns=''):
    print "yay this ran"
    if ns != '':
        ns += '.'
    evalDeferred(mc.scriptJob(ro=True, ac=["'someMesh.outMesh',{}runThis".format(ns)]))
and then run from the shelf:
import loopingFunction
loopingFunction.runThis('loopingFunction')
This way you can use any form of namespace:
import loopingFunction as loopF
loopF.runThis('loopF')
Related: probably related to globals and locals in python exec(), Python 2 How to debug code injected by the exec block, and How to get local variables updated, when using the `exec` call?
I am trying to develop a test framework for our desktop applications which uses click bot like functions.
My goal was to enable non-programmers to write small scripts which could work as a test script. So my idea is to structure the test scripts by files like:
tests-folder
| -> 01-first-test.py
| -> 02-second-test.py
| ... etc
| -> fixture.py
And then just execute these scripts in alphabetical order. However, I also want fixtures that define functions, classes, and variables and make them available to the different scripts without each script having to import the fixture explicitly. If that works, I could also extend the approach to two or more directory levels.
I got it working-ish with some hacking around, but I am not entirely convinced. I have a test_sequence.py which looks like this:
from pathlib import Path
from copy import deepcopy

from my_module.test import Failure

def run_test_sequence(test_dir: str):
    error_occured = False
    fixture = {
        'some_pre_defined_variable': 'this is available in all scripts and fixtures',
        'directory_name': test_dir,
    }

    # Check if fixture.py exists and load that first
    fixture_file = Path(test_dir) / 'fixture.py'
    if fixture_file.exists():
        with open(fixture_file.absolute(), 'r') as code:
            exec(code.read(), fixture, fixture)

    # Go over all files in the test sequence directory and execute them
    for test_file in sorted(Path(test_dir).iterdir()):
        if test_file.name == 'fixture.py':
            continue

        # Make a deepcopy, so scripts cannot influence one another
        fixture_copy = deepcopy(fixture)
        fixture_copy.update({
            'some_other_variable': 'this is available in all scripts but not in fixture',
        })

        try:
            with open(test_file.absolute(), 'r') as code:
                exec(code.read(), fixture_copy, fixture_copy)
        except Failure:
            error_occured = True

    return error_occured
This iterates over all files in the directory tests-folder and executes them in order (with fixture.py first). It also makes the local variables, functions and classes from fixture.py available to each test-script.
A test script could then just be arbitrary code that will be executed and if it raises my custom Failure exception, this will be noted as a failed test.
The whole sequence is started with a script that does
from my_module.test_sequence import run_test_sequence
if __name__ == '__main__':
exit(run_test_sequence('tests-folder'))
This mostly works.
What it cannot do, and what leaves me unsatisfied with this approach:
I cannot debug the scripts themselves. Since the code is loaded as a string and then interpreted, breakpoints inside the test scripts are not recognized.
Calling fixture functions behaves weirdly. When I define a function in fixture.py like:
from my_hello_module import print_hello
def printer():
    print_hello()
I will receive a message during execution that print_hello is undefined because the variables/modules/etc. in the scope surrounding printer are lost.
Stack traces are useless. On failure it shows a stack trace, but of course it only shows my line that says exec(...) and the internals of that call, and none of the code that was loaded.
I am sure there are other drawbacks, that I have not found yet, but these are the most annoying ones.
I also tried to find a solution through __import__ but I couldn't get it to inject my custom locals or globals into the imported script.
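(For illustration, a rough sketch of what injecting the fixture through runpy.run_path could look like -- run_path executes the file under its real filename and accepts an init_globals dict, though I have not verified that it covers everything the exec approach above does:)

import runpy

fixture = {'some_pre_defined_variable': 'available in the script'}

# The file runs under its real filename, so tracebacks (and likely breakpoints)
# point at the test file; the fixture entries are injected as globals.
result_globals = runpy.run_path('tests-folder/01-first-test.py', init_globals=fixture)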
Is there a solution that I am too inexperienced to find, or another builtin Python function that actually does what I am trying to do? Or is there no way to achieve this, and should I rather have each test script import the fixture and the file/directory names itself? I want those scripts to have as few dependencies and as little Python-specific code as possible. Ideally they are just:
from my_module.test import *
click(x, y, LEFT)
write('admin')
press('tab')
write('password')
press('enter')
if text_on_screen('Login successful'):
    succeed('Test successful')
else:
    fail('Could not login')
Additional note: I think I had the debugger working when I still used execfile, but it is not available in python3 environments.
Edit: My first attempt at asking this might have been a bit unfocused/poorly worded, so here's a better explanation of what I'm trying to do:
I'm trying to modify the default behavior of the print function for the entire environment python is running in without having to modify each file that's being run.
I'm attempting to decorate the print function (I know there are many ways to do this such as overriding it but that's not really the question I'm asking) so I can have it print out some debugging information and force it to always flush. I did that like so:
def modify_print(func):
    # I made this so that output always gets flushed as it won't by default
    # within the environment I'm using, I also wanted it to print out some
    # debugging information, doesn't really matter much in the context of this
    # question
    def modified_print(*args, **kwargs):
        return func(f"some debug prefix: ", flush=True, *args, **kwargs)
    return modified_print

print = modify_print(print)
print("Hello world") # Prints "some debug prefix Hello World"
However what I'm trying to do is modify this behavior throughout my entire application. I know I can manually decorate/override/import the print function in each file however I'm wondering if there is some way I can globally configure my python environment to decorate this function everywhere. The only way I can think to do this would be to edit the python source code and build the modified version.
EDIT:
Here's the behavior I wanted implemented, thank you Match for your help.
It prints out the line number and filename everywhere you call a print function within your python environment. This means you don't have to import or override anything manually in all of your files.
https://gist.github.com/MichaelScript/444cbe5b74dce2c01a151d60b714ac3a
import site
import os
import pathlib
# Big thanks to Match on StackOverflow for helping me with this
# see https://stackoverflow.com/a/48713998/5614280
# This is some cool hackery to overwrite the default functionality of
# the builtin print function within your entire python environment
# to display the file name and the line number as well as always flush
# the output. It works by creating a custom user script and placing it
# within the user's sitepackages file and then overwriting the builtin.
# You can disable this behavior by running python with the '-s' flag.
# We could probably swap this out by reading the text from a python file
# which would make it easier to maintain larger modifications to builtins
# or a set of files to make this more portable or to modify the behavior
# of more builtins for debugging purposes.
customize_script = """
from inspect import getframeinfo, stack

def debug_printer(func):
    # I made this so that output always gets flushed as it won't by default
    # within the environment I'm running it in. Also it will print the
    # file name and line number of where the print occurs
    def debug_print(*args, **kwargs):
        frame = getframeinfo(stack()[1][0])
        return func(f"{frame.filename} : {frame.lineno} ", flush=True, *args, **kwargs)
    return debug_print

__builtins__['print'] = debug_printer(print)
"""

# Creating the user site dir if it doesn't already exist and writing our
# custom behavior modifications
pathlib.Path(site.USER_SITE).mkdir(parents=True, exist_ok=True)
custom_file = os.path.join(site.USER_SITE, "usercustomize.py")
with open(custom_file, 'w+') as f:
    f.write(customize_script)
You can use the usercustomize script from the site module to achieve something like this.
First, find out where your user site-packages directory is:
python3 -c "import site; print(site.USER_SITE)"
/home/foo/.local/lib/python3.6/site-packages
Next, in that directory, create a script called usercustomize.py - this script will now be run first whenever python is run.
One* way to replace print is to override its entry in the __builtins__ dict with a new callable - something like:
from functools import partial
old_print = __builtins__['print']
__builtins__['print'] = partial(old_print, "Debug prefix: ", flush=True)
Drop this into the usercustomize.py script and you should see all python scripts from then on being overridden. You can temporarily disable calling this script by calling python with the -s flag.
*(Not sure if this is the correct way of doing this - there may be a better way - but the main point is that you can use usercustomize to deliver whatever method you choose).
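(A variant worth noting -- my own addition, not part of the original answer: assigning through the builtins module avoids worrying about whether __builtins__ is a dict or a module in the current context.)

import builtins
from functools import partial

# rebind the real builtin; every module that calls print() afterwards is affected
builtins.print = partial(builtins.print, "Debug prefix: ", flush=True)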
There's no real reason to define a decorator here, because you are only intending to apply it to a single, predetermined function. Just define your modified print function directly, wrapping it around __builtins__.print to avoid recursion.
def print(*args, **kwargs):
    __builtins__.print(f"some debug prefix: ", flush=True, *args, **kwargs)
print("Hello world") # Prints "some debug prefix Hello World"
You can use functools.partial to simplify this.
import functools
print = functools.partial(__builtins__.print, f"some debug prefix: ", flush=True)
So, as a joke I wrote a version of goto for python that I wanted to use as a library. The function I wrote for it is as follows.
def goto(loc):
    exec(open(__file__).read().split("# "+str(loc))[1], globals())
    quit()
This works for cases where it is in the file where goto is used, such as this:
def goto(loc):
    exec(open(__file__).read().split("# "+str(loc))[1], globals())
    quit()
# hi
print("test")
goto("hi")
However, if I import goto in another file, it doesn't work, because __file__ always refers to the file the function is defined in, not the one it is called from. Is there an equivalent of __file__ that would let me import the goto function from another file and still have it work?
Yes, you can if you inspect the call stack:
import inspect

def goto():
    try:
        frame = inspect.currentframe()
        print(frame.f_back.f_globals['__file__'])
    finally:
        # break reference cycles
        # https://docs.python.org/3.6/library/inspect.html#the-interpreter-stack
        del frame

goto()
Note that the call stack is actually Python-implementation dependent -- so for some Python interpreters, this might not actually work...
Of course, I've also shown you how to get the caller's globals (locals can be found via frame.f_back.f_locals for all of your execing needs).
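(For illustration, a rough sketch of how the caller's frame could be combined with the exec-based goto from the question -- this keeps the same approach and the same caveats, so treat it as a sketch rather than a recommendation:)

import inspect

def goto(loc):
    # look one frame up to find the calling module's file and globals
    caller = inspect.stack()[1]
    source = open(caller.filename).read()
    exec(source.split("# "+str(loc))[1], caller.frame.f_globals)
    quit()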
Also note that it looks like you can implement a jump command by writing to the frame's f_lineno -- which might be a better way to implement your goto. I've never done this though so I can't really advise you on how to actually code it up :-)
Please excuse what I know is an incredibly basic question that I have nevertheless been unable to resolve on my own.
I'm trying to switch over my data analysis from Matlab to Python, and I'm struggling with something very basic: in Matlab, I write a function in the editor, and to use that function I simply call it from the command line, or within other functions. The function that I compose in the matlab editor is given a name at the function definition line, and it's generally best for the function name to match the .m file name to avoid confusion.
I don't understand how functions differ in Python, because I have not been successful translating the same approach there.
For instance, if I write a function in the Python editor (I'm using Python 2.7 and Spyder), simply saving the .py file and calling it by its name from the Python terminal does not work. I get a "function not defined" error. However, if I execute the function within Spyder's editor (using the "run file" button), not only does the code execute properly, from that point on the function is also call-able directly from the terminal.
So...what am I doing wrong? I fully appreciate that using Python isn't going to be identical to Matlab in every way, but it seems that what I'm trying to do isn't unreasonable. I simply want to be able to write functions and call them from the python command line, without having to run each and every one through the editor first. I'm sure my mistake here must be very simple, yet doing quite a lot of reading online hasn't led me to an answer.
Thanks for any information!
If you want to use functions defined in a particular file in Python you need to "import" that file first. This is similar to running the code in that file. Matlab doesn't require you to do this because it searches for files with a matching name and automagically reads in the code for you.
For example,
myFunction.py is a file containing
def myAdd(a, b):
    return a + b
In order to access this function from the Python command line or another file I would type
from myFunction import myAdd
And then during this session I can type
myAdd(1, 2)
There are a couple of ways of using import, see here.
You need to add a check for __main__ to your Python script
def myFunction():
    pass

if __name__ == "__main__":
    myFunction()
then you can run your script from terminal like this
python myscript.py
Also if your function is in another file you need to import it
from myFunctions import myFunction
myFunction()
Python doesn't have MATLAB's "one function per file" limitation. You can have as many functions as you want in a given file, and all of them can be accessed from the command line or from other functions.
Python also doesn't follow MATLAB's practice of always automatically making every function it can find usable all the time, which tends to lead to function name collisions (two functions with the same name).
Instead, Python uses the concept of a "module". A module is just a file (your .py file). That file can have zero or more functions, zero or more variables, and zero or more classes. When you want to use something from that file, you just import it.
So say you have a file 'mystuff.py':
X = 1
Y = 2
def myfunc1(a, b):
    do_something

def myfunc2(c, d):
    do_something
When you want to use it, you can just type import mystuff. You can then access any of the variables or functions in mystuff. To call myfunc2, you can just do mystuff.myfunc2(z, w).
What basically happens is that when you type import mystuff, it just executes the code in the file, and makes all the variables that result available from mystuff.<varname>, where <varname> is the name of the variable. Unlike in MATLAB, Python functions are treated like any other variable, so they can be accessed just like any other variable. The same is true with classes.
There are other ways to import, too, such as from mystuff import myfunc.
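For example, importing a single name from the hypothetical mystuff module above brings it into the current namespace directly:

from mystuff import myfunc1

myfunc1(1, 2)  # no mystuff. prefix needed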
You run python programs by running them with
python program.py
I'm writing a game engine using pygame and box2d, and in the character builder, I want to be able to write the code that will be executed on keydown events.
My plan was to have a text editor in the character builder that let you write code similar to:
if key == K_a:
    ## Move left
    pass
elif key == K_d:
    ## Move right
    pass
I will retrieve the contents of the text editor as a string, and I want the code to be run in this method of Character:

def keydown(self, key):
    ## Run code from text editor
What's the best way to do that?
You can use the eval(string) method to do this.
Definition
eval(code, globals=None, locals=None)
The code is just standard Python code - this means that it still needs to be properly indented.
The globals can have a custom __builtins__ defined, which could be useful for security purposes.
Example
eval("print('Hello')")
Would print hello to the console. You can also specify local and global variables for the code to use:
eval("print('Hello, %s'%name)", {}, {'name':'person-b'})
Security Concerns
Be careful, though. Any user input will be executed. Consider:
eval("import os;os.system('sudo rm -rf /')")
There are a number of ways around that. The easiest is to do something like:
eval("import os;...", {'os':None})
Which will throw an exception rather than erasing your hard drive. Even though your program is a desktop application, this could be a problem if people redistribute scripts, which I imagine is intended.
Strange Example
Here's an example of using eval rather strangely:
def hello() : print('Hello')
def world() : print('world')
CURRENT_MOOD = 'happy'
hi_scope = {'hi': hello}
hi_scope.update(locals())  # dict.update returns None, so merge before the call
eval(get_code(), {'contrivedExample': __main__}, hi_scope)
What this does on the eval line is:
Gives the current module another name (it becomes contrivedExample to the script); the consumer can now call contrivedExample.hello().
It defines hi as pointing to hello
It combines that dictionary with the current globals of the executing module.
FAIL
It turns out (thanks commenters!) that you actually need to use the exec statement. Big oops. The revised examples are as follows:
exec Definition
(This looks familiar!)
Exec is a statement:
exec "code" [in scope]
Where scope is a dictionary of both local and global variables. If this is not specified, it executes in the current scope.
The code is just standard Python code - this means that it still needs to be properly indented.
exec Example
exec "print('hello')"
Would print hello to the console. You can also specify local and global variables for the code to use:
eval "print('hello, '+name)" in {'name':'person-b'}
exec Security Concerns
Be careful, though. Any user input will be executed. Consider:
exec "import os;os.system('sudo rm -rf /')"
Print Statement
As also noted by commenters, print is a statement in all versions of Python prior to 3.0. In 2.6, the behaviour can be changed by typing from __future__ import print_function. Otherwise, use:
print "hello"
Instead of :
print("hello")
As others have pointed out, you can load the text into a string and use exec "codestring". If the code is already contained in a file, using execfile will avoid having to load it yourself.
One performance note: you should avoid execing the code multiple times, as parsing and compiling the Python source is a slow process, i.e. don't have:
def keydown(self, key):
    exec user_code
You can improve this a little by compiling the source into a code object (with compile()) and execing that, or better, by constructing a function that you keep around and only build once. Either require the user to write "def my_handler(args...)", or prepend the def line yourself, and do something like:
user_source = "def user_func(args):\n" + '\n'.join(" "+line for line in user_source.splitlines())
d={}
exec user_source in d
user_func = d['user_func']
Then later:
if key == K_a:
    user_func(args)
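(A side note of my own: in Python 3, exec is a function rather than a statement, so the build-once idea above would look roughly like the following sketch, where editor_text and user_func are placeholder names:)

# Python 3 variant of the same idea: build the handler function once, reuse it per event.
user_source = "def user_func(key):\n" + "\n".join(
    "    " + line for line in editor_text.splitlines()
)
namespace = {}
exec(compile(user_source, "<character editor>", "exec"), namespace)
user_func = namespace["user_func"]

# later, inside keydown:
user_func(key)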
You can use eval()
eval or exec. You should definitely read the Python library reference before programming.