Hi, I was wondering how to implement this in Python. Let's say, for example, you have a function with two parameters and both print to the console:
def myFunc(varA, varB):
    print 'varA=', varA
    print 'varB=', varB
I have seen libraries (pymel being the one that comes to mind) that allow you to specify the parameter you are passing data to by name, in no specific order. For example:
myFunc(varB=12, varA='Tom')
I am not sure what I am missing, as this doesn't seem to work when I try to declare my own functions inside or outside the Maya environment.
Any clues would be wonderful, thank you in advance.
That's normal Python behavior. If you're seeing errors then you're goofing up something else (e.g. missing a required parameter, trying to pass positional arguments by name, etc.).
>>> def func(foo, bar):
...     print foo, bar
...
>>> func(bar='quux', foo=42)
42 quux
I am trying to introduce some automation to a script I'm writing, and I'm having some trouble with calling a function that has parameters from another module. Here's the scenario:
I have two modules: test.py and Strategies.py. I have code that generates a list of all the functions in Strategies.py. From that list, I am using getattr to execute each function.
What I'm having problems with is that some of my functions have parameters. I am getting the following error with a function that has an 'x' argument:
TypeError: buy_test_function() missing 1 required positional argument: 'x'
To make this as clear as possible, here's the relevant code:
call_method = strategy_names[0][y]
call_method = getattr(Strategies, call_method)()
I know the first line above is working fine. I also know that it's the empty parentheses at the end of the second line that's causing the problem. The magic I need is finding a way to dynamically read each function's required arguments and execute the function with the necessary arguments in the parentheses.
I've tried to use inspect.signature(), but it keeps telling me the object is not callable.
I have to believe Python has an elegant solution to this, but I've had little luck on Google. Any assistance is greatly appreciated.
Thank you!
Assuming the functions in Strategies are not class methods, and you've annotated the parameter types in their signatures, you can construct default instances of the annotated types and pass them in as arguments:
from inspect import signature

call_method = strategy_names[0][y]
call_method = getattr(Strategies, call_method)
sig = signature(call_method)
call_method(*[param.annotation() for param in sig.parameters.values()])
See for reference:
>>> def test(x:int):
...     return x*2
>>> sig = signature(test)
>>> sig.parameters['x'].annotation
<class 'int'>
>>> sig.parameters['x'].annotation()
0
>>> test(*[param.annotation() for param in sig.parameters.values()])
0
I will also note that if you can define ahead of time the values you want to use for a given method, and your function names in Strategies are unique, you can prebuild a dictionary that maps each function name to the args you want to use:
args = {'test':[1]}
test(*args[test.__name__])
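For instance, applied to the loop in the question, that might look something like the following rough sketch (the argument values are made up, and it assumes strategy_names[0] is the list of function names as built in the question):
import Strategies  # the module from the question

# Hypothetical: map each Strategies function name to the positional
# args it should be called with; functions not listed get no args.
strategy_args = {
    'buy_test_function': [10],
}

for name in strategy_names[0]:              # list of function names, as in the question
    func = getattr(Strategies, name)
    func(*strategy_args.get(name, []))      # unpack the prebuilt args for this function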
Do I have to formally define a function before I can use it as an element of a dictionary?
def my_func():
    print 'my_func'
d = {
    'function': my_func
}
I would rather define the function inline. I just tried to type out what I want to do, but the whitespace rules of Python syntax make it very hard to define an inline function within a dict. Is there any way to do this?
The answer seems to be that there is no way to declare a function inline in a dictionary definition in Python. Thanks to everyone who took the time to contribute.
Do you really need a dictionary, or just getitem access?
If the latter, then use a class:
>>> class Dispatch(object):
...     def funcA(self, *args):
...         print('funcA%r' % (args,))
...     def funcB(self, *args):
...         print('funcB%r' % (args,))
...     def __getitem__(self, name):
...         return getattr(self, name)
...
>>> d = Dispatch()
>>>
>>> d['funcA'](1, 2, 3)
funcA(1, 2, 3)
You could use a decorator:
func_dict = {}
def register(func):
    func_dict[func.__name__] = func
    return func

@register
def a_func():
    pass

@register
def b_func():
    pass
The func_dict will end up mapping each function's full name to the function itself:
>>> func_dict
{'a_func': <function a_func at 0x000001F6117BC950>, 'b_func': <function b_func at 0x000001F6117BC8C8>}
You can modify the key used by register as desired. The trick is that we use the __name__ attribute of the function to get the appropriate string.
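For example, here's one way to modify it (just a sketch) so you can pass an explicit key and fall back to __name__ when you don't:
func_dict = {}

def register(name=None):
    """Register the decorated function under `name`, defaulting to its __name__."""
    def decorator(func):
        func_dict[name or func.__name__] = func
        return func
    return decorator

@register('a')        # stored as func_dict['a']
def a_func():
    pass

@register()           # stored as func_dict['b_func']
def b_func():
    pass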
Consider using lambdas, but note that lambdas can only consist of one expression and cannot contain statements (see http://docs.python.org/reference/expressions.html#lambda).
e.g.
d = { 'func': lambda x: x + 1 }
# calling d['func'](2) will return 3
Also, note that in Python 2, print is not a function. So you have to do either:
from __future__ import print_function
d = {
    'function': print
}
or use sys.stdout.write instead
import sys

d = {
    'function': sys.stdout.write
}
Some functions can be easily 'inlined' anonymously with lambda expressions, e.g.:
>>> d={'function': lambda x : x**2}
>>> d['function'](5)
25
But for anything semi-complex (or using statements) you probably just should define them beforehand.
There is no good reason to want to write this using a dictionary in Python. It's strange and is not a common way to namespace functions.
The Python philosophies that apply here are:
There should be one-- and preferably only one --obvious way to do it.
Combined with
Readability counts.
Doing it this way also makes things hard to understand and read for the typical Python user.
The good thing the dictionary does in this case is map strings to functions and namespace them within a dictionary, but this functionality is already provided by both modules and classes, and it is much easier to understand by those familiar with Python.
Examples:
Module method:
# cool.py
def cool():
    print 'cool'
Now use the module like you would be using your dict:
import cool
#cool.__dict__['cool']()
#update - to the more correct idiom vars
vars(cool)['cool']()
Class method:
class Cool():
    def cool():
        print 'cool'
#Cool.__dict__['cool']()
#update - to the more correct idiom vars
vars(Cool)['cool']()
Edit after comment below:
argparse seems like a good fit for this problem, so you don't have to reinvent the wheel. If you do decide to implement it completely yourself, though, the argparse source should give you some good direction. Anyway, the sections below seem to apply to this use case:
15.4.4.5. Beyond sys.argv
Sometimes it may be useful to have an ArgumentParser parse arguments
other than those of sys.argv. This can be accomplished by passing a
list of strings to parse_args(). This is useful for testing at the
interactive prompt:
15.4.5.1. Sub-commands
ArgumentParser.add_subparsers()
Many programs split up their functionality into a number of sub-commands, for example, the svn program can invoke sub-commands
like svn checkout, svn update, and svn commit.
15.4.4.6. The Namespace object
It may also be useful to have an ArgumentParser assign attributes to
an already existing object, rather than a new Namespace object. This
can be achieved by specifying the namespace= keyword argument:
Update: here's an example using argparse
strategizer = argparse.ArgumentParser()
strat_subs = strategizer.add_subparsers()
math = strat_subs.add_parser('math')
math_subs = math.add_subparsers()
math_max = math_subs.add_parser('max')
math_sum = math_subs.add_parser('sum')
math_max.set_defaults(strategy=max)
math_sum.set_defaults(strategy=sum)
strategizer.parse_args('math max'.split())
Out[46]: Namespace(strategy=<built-in function max>)
strategizer.parse_args('math sum'.split())
Out[47]: Namespace(strategy=<built-in function sum>)
I would like to note the reasons I would recommend argparse:
Mainly, the requirement to use strings that represent options and sub-options to map to functions.
It's dead simple (after getting past the feature filled argparse module).
Uses a Python standard library module. This lets others familiar with Python grok what you're doing without getting into implementation details, and it is very well documented for those who aren't.
Many extra features could be taken advantage of out of the box (not the best reason!).
Using argparse and Strategy Pattern together
For the plain and simple implementation of the Strategy Pattern, this has already been answered very well.
How to write Strategy Pattern in Python differently than example in Wikipedia?
# continuing from the above example
class MathStudent():
    def do_math(self, numbers):
        return self.strategy(numbers)
maximus = strategizer.parse_args('math max'.split(),
namespace=MathStudent())
sumera = strategizer.parse_args('math sum'.split(),
namespace=MathStudent())
maximus.do_math([1, 2, 3])
Out[71]: 3
sumera.do_math([1, 2, 3])
Out[72]: 6
The point of inlining functions is to blur the distinction between dictionaries and class instances. In JavaScript, for example, this technique makes it very pleasant to write control classes that have little reusability. Also, very helpfully, the API then conforms to the well-known dictionary protocols, being self explanatory (pun intended).
You can do this in Python - it just doesn't look like a dictionary! In fact, you can use the class keyword in ANY scope (i.e. a class def in a function, or a class def inside a class def), and its children can be the dictionary you are looking for; just inspect the attributes of a definition as if it were a JavaScript dictionary.
Example as if it was real:
somedict = {
    "foo": 5,
    "one_function": your method here,
    "two_function": your method here,
}
Is actually accomplished as
class somedict:
    foo = 5

    @classmethod
    def one_method(self):
        print self.foo
        self.foo *= 2

    @classmethod
    def two_method(self):
        print self.foo
So that you can then say:
somedict.foo #(prints 5)
somedict.one_method() #(prints 5)
somedict.two_method() #(prints 10)
And in this way, you get the same logical groupings as you would with your "inlining".
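And if you also want the string-keyed lookups a real dictionary would give you, getattr works on the class just as well (a small sketch continuing the somedict example above):
# dictionary-style access by string key, JavaScript-style
name = "one_method"
getattr(somedict, name)()     # same as somedict.one_method()
getattr(somedict, "foo")      # same as somedict.foo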
This question is similar to others asked on here, but after reading the answers I'm not grasping it and would appreciate further guidance.
While sketching new code I find myself adding a lot of statements like:
print('var=')
pprint(var)
It became tedious always writing that, so I thought I could make it into a function. Since I want to print the variable name on the preceding line, I tried:
def dbp(var):
    eval('print(\'{0}=\')'.format(var))
    eval('pprint({0})'.format(var))
so then I can do things like:
foo = 'bar'
dbp('foo')
which prints
foo=
'bar'
This is all great, but when I go to use it in a function things get messed up. For example, doing
def f():
    a = ['123']
    dbp('a')
f()
raises a NameError (NameError: name 'a' is not defined).
My expectation was that dbp() would have read access to anything in f()'s scope, but clearly it doesn't. Can someone explain why?
Also, better ways of printing a variable's name followed by its formatted contents are also appreciated.
You really should look at other ways of doing this.
The logging module is a really good habit to get into, and you can turn off and on debug output.
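A rough sketch of what that habit looks like (the variable name here is just an example):
import logging

logging.basicConfig(level=logging.DEBUG)   # switch to logging.WARNING to silence debug output

def f():
    a = ['123']
    logging.debug('a=%r', a)   # prints: DEBUG:root:a=['123']

f()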
Python 3.6 has f-strings, so you could simplify this to:
pprint(f'var=\n{var}')
However, here's an example (not recommended) using locals():
In []:
from pprint import pprint

def dbp(var, l):
    print('{}='.format(var))
    pprint(l[var])

def f():
    a = 1
    dbp('a', locals())
f()
Out[]:
a=
1
First of all, I'd like to say that eval is a big security risk for whoever is going to be running that code.
However, if you absolutely must, you can do this.
from pprint import pprint

def dbp(var):
    env = {'var': var}
    # Add the global variables to the environment
    env.update(globals())
    eval("print('{0}=')".format(var))
    eval('pprint(var)', env)

def f():
    a = ['123']
    dbp('a')
you can then do
>>> f()
a=
'a'
I want to have a function in a different module that, when called, has access to all the variables its caller has access to, and functions just as if its body had been pasted into the caller rather than having its own context, basically like a C macro instead of a normal function. I know I can pass locals() into the function so it can access the local variables as a dict, but I want to be able to access them normally (e.g. x.y, not x["y"]), and I want all the names the caller has access to, not just the locals, as well as things that were imported into the caller's file but not into the module that contains the function.
Is this possible to pull off?
Edit 2: Here's the simplest possible example I can come up with of what I'm really trying to do:
def getObj(expression):
    ofs = expression.rfind(".")
    obj = eval(expression[:ofs])
    print "The part of the expression Left of the period is of type ", type(obj),
The problem is that 'expression' requires the imports and local variables of the caller in order to eval without error. In reality there's a lot more than just an eval, so I'm trying to avoid the solution of just passing locals() in and through to the eval(), since that won't fix my general-case problem.
And another, even uglier way to do it -- please don't do this, even if it's possible --
import sys

def insp():
    l = sys._getframe(1).f_locals
    expression = l["expression"]
    ofs = expression.rfind(".")
    expofs = expression[:ofs]
    obj = eval(expofs, globals(), l)
    print "The part of the expression %r Left of the period (%r) is of type %r" % (expression, expofs, type(obj)),

def foo():
    derp = 5
    expression = "derp.durr"
    insp()
foo()
outputs
The part of the expression 'derp.durr' Left of the period ('derp') is of type <type 'int'>
I don't presume this is the answer that you wanted to hear, but trying to access local variables from a caller module's scope is not a good idea. If you normally program in PHP or C, you might be used to this sort of thing?
If you still want to do this, you might consider creating a class and passing an instance of that class in place of locals():
# other_module.py
def some_func(lcls):
    print(lcls.x)
Then,
>>> import other_module
>>>
>>>
>>> x = 'Hello World'
>>>
>>> class MyLocals(object):
...     def __init__(self, lcls):
...         self.lcls = lcls
...     def __getattr__(self, name):
...         return self.lcls[name]
...
>>> # Call your function with an instance of this instead.
>>> other_module.some_func(MyLocals(locals()))
Hello World
Give it a whirl.
Is this possible to pull off?
Yes (sort of, in a very roundabout way), but I would strongly advise against it in general (more on that later).
Consider:
myfile.py
def func_in_caller():
    print "in caller"

import otherfile
globals()["imported_func"] = otherfile.remote_func
imported_func(123, globals())
otherfile.py
def remote_func(x1, extra):
    for k, v in extra.iteritems():
        globals()[k] = v
    print x1
    func_in_caller()
This yields (as expected):
123
in caller
What we're doing here is trickery: we just copy every item into another namespace in order to make this work. This can (and will) break very easily and/or lead to hard-to-find bugs.
There's almost certainly a better way of solving your problem / structuring your code (we need more information in general on what you're trying to achieve).
From The Zen of Python:
2) Explicit is better than implicit.
In other words, pass in the parameter and don't try to get really fancy just because you think it would be easier for you. Writing code is not just about you.
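Applied to the getObj example above, that just means handing the caller's namespace over as an argument (a sketch only; the eval caveats from the other answers still apply):
def getObj(expression, namespace):
    ofs = expression.rfind(".")
    obj = eval(expression[:ofs], namespace)   # evaluate against the namespace the caller passed in
    print "The part of the expression Left of the period is of type ", type(obj),

def foo():
    derp = 5
    getObj("derp.durr", locals())             # explicitly pass the caller's locals

foo()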
Let's say we have
def Foo(Bar=0, Song=0):
    print(Bar)
    print(Song)
And I want to assign one of the two parameters in the function using the variables Sing and SongVal:
Sing = Song
SongVal = 2
So that it can be run like:
Foo(Sing=SongVal)
where Sing would assign SongVal (which is 2) to the Song parameter.
The result should be printed like so:
0
2
So should I rewrite my function, or is it possible to do it the way I want to? (With the code above you get an error saying Foo has no parameter Sing, which I understand. Is there any way to overcome this without rewriting the function too much?)
Thanks in advance!
What you're looking for is the **kwargs way of passing arbitrary keyword arguments. As long as Sing holds the parameter name as a string (i.e. Sing = 'Song'), you can do:
kwargs = {Sing: SongVal}
Foo(**kwargs)
See section 4.7 of the tutorial at www.python.org for more examples.
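Putting it together with the question's function, a minimal complete sketch (note that Sing has to hold the parameter's name as a string):
def Foo(Bar=0, Song=0):
    print(Bar)
    print(Song)

Sing = 'Song'             # the keyword argument's name, as a string
SongVal = 2

Foo(**{Sing: SongVal})    # equivalent to Foo(Song=2); prints 0 then 2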