I have a Python 3 class method defined like below:
class class1:
    def funcOne(self, reqVar1, reqVar2, optVar1=default1, optVar2=default2,
                optVar3="server.domain", optVar4="defaultUser", optVar5="<default_Flags>"):
It gets called in the main program (ideally I want to call it like this):
argsIn = argparser.parse_args()
classInst = class1()
classInst.funcOne(5, 12, argsIn.inVal1, argsIn.inVal2, argsIn.inVal3, argsIn.inVal4, argsIn.inVal5)
argsIn.inVal1 through argsIn.inVal5 are optional on the command line. If they are not supplied, I want the class method to use its defaults; if they are supplied, the supplied values should be used.
Currently, if they are not supplied on the command line, inVal1 through inVal5 are passed as None, which overrides the actual default values.
The class is maintained separately, and its maintainers manage the defaults. Duplicating the defaults in my script (for example in the argparser options) is not appropriate.
Is there a way to easily work with this situation that doesn't resort to:
if args.inVal1 and not args.inVal2...
if not args.inVal1 and args.inVal2 and not args.inVal3...
as the number of combinations gets large.
It seems like it should be simple, but I am not connecting something here.
Thank you for the help.
If you create a dictionary keyed by the optional parameter names, you can filter out the None values and unpack it into the function call. (The argparser lines are commented out here only for testing.)
class class1:
    def funcOne(self, reqVar1, reqVar2, optVar1='default1', optVar2='default2',
                optVar3="server.domain", optVar4="defaultUser", optVar5="<default_Flags>"):
        print(optVar1)
        print(optVar2)
        print(optVar3)
        print(optVar4)
        print(optVar5)

# argsIn = argparser.parse_args()
optionalArgs = {'optVar1': 'TestingVar1',  # argsIn.inVal1
                'optVar2': None,           # argsIn.inVal2
                'optVar3': 'TestingVar3',  # argsIn.inVal3
                'optVar4': None,           # argsIn.inVal4
                'optVar5': None}           # argsIn.inVal5
optionalArgsClean = {k: v for k, v in optionalArgs.items() if v is not None}
classInst = class1()
classInst.funcOne(5, 12, **optionalArgsClean)
Running the above code produces:
TestingVar1
default2
TestingVar3
defaultUser
<default_Flags>
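A variant of the same idea builds the dictionary straight from the parsed namespace with vars(). This is only a sketch, and it assumes the argparse dest names are chosen to match funcOne's keyword parameter names:

# Sketch: assumes each argparse option's dest matches a funcOne keyword name.
import argparse

parser = argparse.ArgumentParser()
for name in ('optVar1', 'optVar2', 'optVar3', 'optVar4', 'optVar5'):
    parser.add_argument('--' + name)  # dest defaults to the option name; None if omitted

argsIn = parser.parse_args()

# vars() turns the Namespace into a dict; dropping the None entries lets
# funcOne's own defaults take effect for anything not supplied.
optionalArgsClean = {k: v for k, v in vars(argsIn).items() if v is not None}

classInst = class1()
classInst.funcOne(5, 12, **optionalArgsClean)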
Related
I have a Python 3 project structured like the following:
main_script.py
util_script.py
AccessClass.py
The main script calls a function in the util script with the following signature:
def migrate_entity(project, name, access=AccessClass.AccessClass()):
The call itself in the main script is:
migrate_entity(project_from_file, name_from_args, access=access_object)
All objects have values when the call is made.
However, as soon as the main script is executed, the AccessClass default in the function signature is initialized, even though it is never used. For example, this main block will create the default instance given in the function signature:
if __name__ == "__main__":
    argparser = argparse.ArgumentParser(description='Migrate support data')
    argparser.add_argument('--name', dest='p_name', type=str, help='The entity name to migrate')
    load_dotenv()
    fileConfig('logging.ini')
    # Just for the sake of it
    quit()
    # The rest of the code...
    # ...and then
    migrate_entity(project_from_file, name_from_args, access=access_object)
Even with the quit() added, the AccessClass is created. If I run the script with ./main_script.py -h, the AccessClass in the function signature is created as well. And even though the only call to the function really is made with an access object, I can see that AccessClass.__init__ is invoked.
If I replace the default with None, check the parameter inside the function instead, and create the instance there, everything works as expected, i.e. the AccessClass is not created if it is not needed.
Can someone please enlighten me why this is happening and how defaults are expected to work?
Are parameter defaults always created in advance in Python?
Basically, default values are evaluated the moment you define the function, not when you invoke it. That's why it's widely discouraged to use mutable types as defaults. You can use None, as you mentioned, and inside the body check whether the value is None and then initialize it properly.
def foo_bad(x=[]): pass  # This is bad

foo_bad()        # the list initialized at definition time is used
foo_bad([1, 2])  # the provided list is used
foo_bad()        # again, the list initialized at definition time is used

def foo_good(x=None):
    if x is None:
        x = []
    ...  # further logic
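A quick demonstration of the difference (a minimal sketch; the comments show the output):

def foo_bad(x=[]):
    x.append(1)
    return x

def foo_good(x=None):
    if x is None:
        x = []
    x.append(1)
    return x

print(foo_bad())   # [1]
print(foo_bad())   # [1, 1] -- the same list object is reused across calls
print(foo_good())  # [1]
print(foo_good())  # [1]    -- a fresh list is created on every call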
AccessClass is being created because you've set it as a default parameter value, so it is evaluated in the scope of the module itself and will be initialized when the file is first imported (or run). This is also why it's not recommended to use lists or dicts as default parameters.
This is a much safer way of defining a default value if nothing is provided:
def migrate_entity(project, name, access=None):
    if access is None:
        access = AccessClass.AccessClass()
You could also use type hinting (importing Optional from typing) to indicate what type access should be:
def migrate_entity(project, name, access: Optional[AccessClass.AccessClass] = None): ...
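You can see the definition-time evaluation directly by putting a side effect in the default expression; this sketch prints before any call is ever made:

def make_default():
    print("default evaluated")  # runs when 'def g' is executed, not when g is called
    return object()

def g(x=make_default()):  # "default evaluated" is printed right here
    return x

print("no call to g has happened yet")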
I recently had to write a function that conditionally dispatches tasks to other functions, with a lot of mandatory and optional named arguments (e.g. manipulating connection strings, Spark connector configs and so on), and it occurred to me that it would have been much "cleaner" (or more "Pythonic") to have a syntax allowing me to pass every argument from one function to another, similar to this:
def sisterFunction(**kwargs):  # Doing things with a bunch of mandatory and optional args
    <do various things/>

def motherFunction(a, b, **kwargs):
    <do various things/>
    sisterFunction(**allArgs)
where allArgs would be a dictionary containing keys a, b, and everything in kwargs. This sounds like something Python would be inclined to allow and ease, but I can't seem to find anything like a "super kwargs" implemented. Is there a straightforward way to do this? Is there an obvious good reason it's not a thing?
def sisterFunction(**kwargs):
    pass

def motherFunction(a, b, **kwargs):
    sisterFunction(a=a, b=b, **kwargs)
kwargs in sisterFunction will contain a and b keys with corresponding values.
UPDATE
If you don't want to pass a long list of function parameters via a=a, there is a workaround to get allArgs:
def motherFunction(a, b, **kwargs):
    allArgs = locals().copy()
    allArgs.update(allArgs.pop('kwargs', {}))
    sisterFunction(**allArgs)
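For example, assuming sisterFunction just echoes what it receives. Note that the locals().copy() has to happen before any other local variables are created in motherFunction, or they would leak into allArgs:

def sisterFunction(**kwargs):
    print(kwargs)

motherFunction(1, 2, c=3)  # prints {'a': 1, 'b': 2, 'c': 3}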
I would probably go with just using kwargs:
def sisterFunction(**kwargs):
    pass

def motherFunction(**kwargs):
    # use the values directly from 'kwargs'
    print(kwargs['a'])
    # or assign them to local variables for this function
    b = kwargs['b']
    sisterFunction(**kwargs)
This will probably be the option with the least code in your function signatures (the definitions of all the parameters to the function).
A KeyError will be raised if some parameters were not passed to the function and the function tries to use them.
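If a key may legitimately be absent, kwargs.get() with a fallback avoids the KeyError; a minimal sketch:

def motherFunction(**kwargs):
    a = kwargs.get('a', 'default_a')  # fall back instead of raising KeyError
    print(a)

motherFunction()       # prints default_a
motherFunction(a='x')  # prints x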
def accept(**kwargs):
    pass
If I define accept like this and expect it to be called with a parameter that is a dict, are the asterisks necessary for all dict parameters?
What if I do things like:
def accept(dict):
    pass

dict = {...}
accept(dict)
Specifically, I would like to implement an update method for a class that keeps a dict as a container. Just like the dict.update method, it takes a dict as a parameter and modifies the content of the container. In this specific case, should I use kwargs or not?
** in a function's parameter list collects all keyword arguments into a dictionary.
>>> def accept(**kwargs): # isinstance(kwargs, dict) == True
... pass
...
Call using keyword arguments:
>>> accept(a=1, b=2)
Call using ** operator:
>>> d = {'a': 1, 'b': 2}
>>> accept(**d)
>>> accept(d)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: accept() takes exactly 0 arguments (1 given)
See the Python tutorial sections on keyword arguments and unpacking argument lists.
BTW, don't use dict as a variable name. It shadows the builtin dict type.
See f below. The function f has two parameters: a positional one called name and a keyword argument message. They are local variables in the frame of the function call.
When you do f("John", **{"message": "Hello World"}), the ** operator unpacks the dictionary into keyword arguments by key/value pair, so inside the call you have the local variables name="John" and message="Hello World". A key that doesn't match any parameter name (such as "foo") raises a TypeError, unless the function also declares **kwargs to collect it.
The purpose of **kwargs, the double asterisks, is to accept unknown keyword arguments.
Contrast this:
def f(name, message=None):
    if message:
        return name + message
    return name
Here I am telling the user: if you ever want to call f, you can pass a keyword argument message. This is the only kwarg I will ever accept and expect to receive, if there is one. If you try f("John", foo="Hello world") you will get a TypeError about an unexpected keyword argument.
**kwargs is useful if you don't know ahead of time which keyword arguments you want to receive (very common when dispatching to lower-level functions/methods); in that case you use it.
def f(name, message=None, **kwargs):
    if message:
        return name + message
    return name
In the second example, you can do f("John", **{"foo": "Hello Foo"}) while omitting message. You can also do f("John", **{"foo": "Hello Foo", "message": "Hello Message"}).
Can I ignore it?
As you can see, yes, you can ignore it. Here, with f("John", **{"foo": "Hello Foo", "message": "Hello Message"}), I still only use message and ignore everything else.
Don't use **kwargs unless you really do need to accept arbitrary, unspecified inputs.
What if my input is a dictionary?
If your function simply takes the dictionary and modifies it as a whole, NOT using individual keys as parameters, then just pass the dictionary. There is no reason to turn the dictionary items into variables.
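For the update method described in the question, a plain dict parameter is enough. A hypothetical sketch that mirrors dict.update by also accepting keyword arguments:

class Container:  # hypothetical container class
    def __init__(self):
        self._data = {}

    def update(self, other=None, **kwargs):
        # Accept a plain dict (no ** needed at the call site) and/or kwargs,
        # the same way dict.update does.
        if other is not None:
            self._data.update(other)
        if kwargs:
            self._data.update(kwargs)

c = Container()
c.update({'a': 1})  # pass the dict directly
c.update(b=2)       # keyword style also works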
But here are two main usages of **kwargs.
Suppose I have a class and I want to create attributes on the fly. I can use setattr to set instance attributes from the input.
class Foo(object):
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)
If I do Foo(**{"a": 1, "b": 2}) the resulting instance will have attributes a and b.
This is particularly useful when you have to deal with legacy code. However, there is a big security concern. Imagine you own a MongoDB instance, this class is a container for writing into the database, and the dict is a request form object from a user. The user can shovel in ANYTHING and you simply save it to the database like that? That's a security hole. Make sure you validate (use a loop), as in the sketch below.
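A sketch of such a validation loop, using a hypothetical allowlist of permitted field names:

ALLOWED_FIELDS = {'name', 'email'}  # hypothetical allowlist

class Foo(object):
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if key not in ALLOWED_FIELDS:
                raise ValueError("unexpected field: %s" % key)
            setattr(self, key, value)

Foo(name='Ann', email='ann@example.com')  # accepted
# Foo(is_admin=True) would raise ValueError instead of being saved blindly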
The second common usage of **kwargs is when you don't know things ahead of time, which I have covered (it's really a variant of the first usage anyway).
If you want to pass a dictionary as input to a function, you can simply do it like this
def my_function1(input_dict):
    print(input_dict)

d = {"Month": 1, "Year": 2}
my_function1(d)  # {'Month': 1, 'Year': 2}
This is straightforward. Let's see the **kwargs method. kwargs stands for keyword arguments, so you need to pass the parameters as key-value pairs, like this:
def my_function2(**kwargs):
    print(kwargs)

my_function2(Month=1, Year=2)
But if you have a dictionary and want to pass it as a parameter to my_function2, it can be done with unpacking, like this:
my_function2(**d)
I have been doing a lot of searching, and I don't think I've really found what I have been looking for. I will try my best to explain what I am trying to do, and hopefully there is a simple solution, and I'll be glad to have learned something new.
This is ultimately what I am trying to accomplish: using nosetests, decorate some test cases with the attribute selector plugin, then execute the test cases that match a criterion via the -a switch on the command line. The attribute values for the executed tests are then stored in an external location. The command-line call I'm using is like below:
nosetests \testpath\ -a attribute='someValue'
I have also created a customized nosetests plugin, which stores the test cases' attributes and writes them to an external location. The idea is that I can select a batch of tests, and by storing their attributes, I can filter the results later for reporting purposes. I am accessing the method attributes in my plugin by overriding the wantMethod method with code similar to the following:
def set_attribs(self, method, attribute):
    if hasattr(method, attribute):
        if method.__name__ not in self.method_attributes:
            self.method_attributes[method.__name__] = {}
        self.method_attributes[method.__name__][attribute] = getattr(method, attribute)

def wantMethod(self, method):
    self.set_attribs(method, "attribute1")
    self.set_attribs(method, "attribute2")
I have this working for pretty much all the tests, except for one case where the test uses the yield keyword. The generated methods execute fine, but the method attributes are empty for each of the generated functions.
Below is an example of what I am trying to achieve. The test below retrieves a list of values and, for each of those values, yields the result of another function:
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield (lambda x: f(), key)

def _do_test(self, input1, input2):
    # Some code
From what I have read, and think I understand, when yield is called it creates a new callable which then gets executed. I have been trying to figure out how to retain the attribute values from my sample_test_generator method, but I have not been successful. I thought I could create a partial function and then add the attribute to it, but no luck. The tests execute without errors; it just seems that from my plugin's perspective the method attributes aren't present, so they don't get recorded.
I realize this a pretty involved question, but I wanted to make sure that the context for what I am trying to achieve is clear. I have been trying to find information that could help me for this particular case, but I feel like I've reached a stumbling block now, so I would really like to ask the experts for some advice.
Thanks.
** Update **
After reading through the feedback and playing around some more, it looks like modifying the lambda expression achieves what I am looking for. In fact, I didn't even need to create the partial function:
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        yield (lambda: self._do_test)
The only downside to this approach is that the test name does not change. Playing around more, it looks like when a test generator is used, nosetests actually changes the test name in the results based on the arguments it yields. The same thing happened when I used a lambda expression with a parameter.
For example:
Using a lambda expression with a parameter:
yield (lambda x: self._do_test, "value1")
In a nosetests plugin, when you access the test case name, it is displayed as "sample_test_generator(value1)".
Using a lambda expression without a parameter:
yield (lambda: self._do_test)
The test case name in this case would be "sample_test_generator". In my example above, if there are multiple values in the dictionary, the yield call occurs multiple times, but the test name always remains "sample_test_generator". This is still better than getting unique test names while not being able to store the attribute values at all. I will keep playing around, but thanks for the feedback so far!
EDIT
I forgot to come back and provide my final update on how I got this to work in the end. There was a little confusion on my part at first, but after looking through it some more, I figured out that it had to do with how the tests are recognized:
My original implementation assumed that every test picked up for execution goes through the wantMethod call from the plugin's base class. This is not true when yield is used to generate the test, because by that point the parent test method has already passed through wantMethod.
However, once a test case is generated through the yield call, it does go through the startTest call from the plugin base class, and that is where I was finally able to store the attributes successfully.
So, in a nutshell, my test execution order looked like this:
nose -> wantMethod(method_name) -> yield -> startTest(yielded_test_name)
In my override of the startTest method, I have the following:
def startTest(self, test):
    # If a test is spawned by using the 'yield' keyword, the test name is the
    # parent test name with the arguments appended after a '(' character.
    # Example: if the parent test is "smoke_test", the generated test from
    # yield would be "smoke_test('input')".
    test_name = str(test)  # assumption: derive the name from the test case itself
    parent_test_name = test_name.split('(')[0]
    if test_name in self.method_attributes:
        self._test_attrib = self.method_attributes[test_name]
    elif parent_test_name in self.method_attributes:
        self._test_attrib = self.method_attributes[parent_test_name]
    else:
        self._test_attrib = None
With this implementation, along with my override of wantMethod, each test spawned by the parent test case also inherits the attributes from the parent method, which is what I needed.
Again, thanks to all who sent replies. This was quite a learning experience.
Would this fix your name issue?
def _actual_test(x, y):
    assert x == y

def test_yield():
    _actual_test.description = "test_yield_%s_%s" % (5, 5)
    yield _actual_test, 5, 5
    _actual_test.description = "test_yield_%s_%s" % (4, 8)  # fail
    yield _actual_test, 4, 8
    _actual_test.description = "test_yield_%s_%s" % (2, 2)
    yield _actual_test, 2, 2
The rename survives @attr too.
Does this work?
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    def get_f(f, key):
        return lambda x: f(), key

    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield get_f(f, key)

def _do_test(self, input1, input2):
    # Some code
The problem is that the local variables change after you create the lambda; get_f binds the current values of f and key in its own scope.
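The late-binding behaviour is easy to reproduce on its own (a minimal sketch):

# Every lambda looks up i when it is *called*, so all three see the final value...
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# ...unless the current value is captured eagerly, e.g. via a default argument
# (which is what the get_f helper above achieves with its own scope).
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]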
I am using a block like this:
def served(fn):
    def wrapper(*args, **kwargs):
        p = xmlrpclib.ServerProxy(SERVER, allow_none=True)
        return p.__getattr__(fn.__name__)(*args, **kwargs)  # do the remote call
    return functools.update_wrapper(wrapper, fn)

@served
def remote_function(a, b):
    pass
to wrap a series of XML-RPC calls into a Python module. The served decorator gets applied to stub functions to expose operations on a remote server.
I'm creating stubs like this with the intention of being able to inspect them later for information about the function, specifically its arguments.
As listed, the code above does not transfer argument information from the original function to the wrapper. If I inspect with inspect.getargspec(remote_function), I get an essentially empty spec instead of the args=['a', 'b'] that I was expecting.
I'm guessing I need to give additional direction to the functools.update_wrapper() call via the optional assigned parameter, but I'm not sure exactly what to add to that tuple to get the effect I want.
The name and the docstring are correctly transferred to the new function object, but can someone advise me on how to transfer argument definitions?
Thanks.
Previous questions here and here suggest that the decorator module can do this.
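For what it's worth, on Python 3 functools.wraps sets __wrapped__ on the wrapper, and inspect.signature follows it, so the original parameter names are recoverable without the decorator module (note that inspect.getargspec is deprecated and removed in recent Pythons; inspect.signature is the modern replacement). A sketch with the XML-RPC proxy replaced by a stub:

import functools
import inspect

def served(fn):
    @functools.wraps(fn)  # copies metadata and sets wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        return (fn.__name__, args, kwargs)  # stub standing in for the XML-RPC call
    return wrapper

@served
def remote_function(a, b):
    pass

print(inspect.signature(remote_function))  # (a, b)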