Is it correct to pass None to a parameter? - python

I am trying to understand whether it is a good idea to pass a parameter the Python equivalent of null, which I believe is None.
Example: you have a function that accepts n parameters, and in one case I need just the first and second parameters. Instead of writing a long function definition with *args and **kwargs and manipulating them, I can just pass None for the parameters I don't need.
def myfunct(a, b, c[optional], d[optional], e, f, ..., n):
    # do something
    if d == "y":
        # do something but use only a and b
Execution:
myfunct(a, b, c, d, .....n) #OK!
myfunct(a, b, None, "y", None,....n) #OK?
Theoretically this should not raise an error, since None is a value (this is not C++), although I am not sure it is a correct way to do things. The function knows that there is a condition where one of the parameters has a specific value, and in that case it needs only the first two parameters, so the risk of using None should be practically zero.
Is this acceptable, or am I potentially causing issues down the road with this approach?

There's nothing wrong with using None to mean "I am not supplying this argument".
You can check for None in your code:
if c is None:
    # do something
if d is not None:
    # do something else
One recommendation I would make is to have None be the default argument for any optional arguments:
def myfunct(a, b, e, f, c=None, d=None):
    # do something

myfunct(A, B, E, F)
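Putting the two pieces together, a minimal sketch (with made-up parameter names and bodies, just to show the shape) of None defaults combined with is None checks:
def myfunct(a, b, e, f, c=None, d=None):
    # c and d are optional; None means "not supplied"
    if d == "y":
        return a + b              # only a and b are needed in this branch
    if c is not None:
        return a + b + c
    return a + b

myfunct(1, 2, 10, 20)             # c and d default to None
myfunct(1, 2, 10, 20, d="y")      # supply d by keyword and skip c entirely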


python interface with optional variable [duplicate]

I have a Python function which takes several arguments. Some of these arguments could be omitted in some scenarios.
def some_function(self, a, b, c, d=None, e=None, f=None, g=None, h=None):
    # code
The arguments d through h are strings which each have different meanings. It is important that I can choose which optional parameters to pass in any combination: for example (a, b, c, d, e), or (a, b, c, g, h), or (a, b, c, d, e, f), or all of them (these are my choices).
It would be great if I could overload the function - but I read that Python does not support overloading. I tried to insert some of the required int arguments in the list - and got an argument mismatch error.
Right now I am sending empty strings in place of the first few missing arguments as placeholders. I would like to be able to call a function just using actual values.
Is there any way to do this? Could I pass a list instead of the argument list?
Right now the prototype using ctypes looks something like:
_fdll.some_function.argtypes = [c_void_p, c_char_p, c_int, c_char_p, c_char_p, c_char_p, c_char_p, c_char_p]
Just use the *args parameter, which allows you to pass as many arguments as you want after a, b, c. You would have to add some logic to map args -> c, d, e, f, but it's a "way" of overloading.
def myfunc(a, b, *args, **kwargs):
    for ar in args:
        print(ar)

myfunc(a, b, c, d, e, f)
And it will print the values of c, d, e, f.
Similarly, you could use the **kwargs argument, and then you could name your parameters:
def myfunc(a, b, *args, **kwargs):
    c = kwargs.get('c', None)
    d = kwargs.get('d', None)
    # etc.

myfunc(a, b, c='nick', d='dog', ...)
kwargs will then hold a dictionary of all the keyword-valued parameters passed after a and b.
Try calling it like: obj.some_function( '1', 2, '3', g="foo", h="bar" ). After the required positional arguments, you can specify specific optional arguments by name.
It is very easy, just do this:
def foo(a=None):
    print(a)
Instead of None you can use any value that should apply when no argument is given. If you call the function without a value, like foo(), it will print None because no argument was given; if you give it an argument, like foo("hello world"), it will print hello world. One thing I should add: these optional parameters need to come after all the other parameters. To see why, take the previous function and add another parameter b:
def foo(a=None, b):
    print(a)
Now if you execute your Python file, it will raise an exception saying that non-default arguments follow default arguments:
SyntaxError: non-default argument follows default argument
So you have to put the optional (default) arguments after the arguments that are required, which means:
def foo(a, b=None): ...   # this one is right
def foo(b=None, a): ...   # and this isn't
Required parameters first, optional parameters after. Optional parameters always with a default such as =None.
Easy and fast example:
def example_function(param1, param2, param3=None, param4=None):
    pass

# Doesn't work, param2 missing
example_function("hello")

# Works
example_function("hello", "bye")

# Works. Both the same
example_function("hello", "bye", "hey")
example_function("hello", "bye", param3="hey")

# Works. Both the same
example_function("hello", "bye", "hey", "foo")
example_function("hello", "bye", param3="hey", param4="foo")
Check this:
from typing import Optional
def foo(a: str, b: Optional[str] = None) -> Optional[str]:
    pass
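On Python 3.10 and later, the same annotation can also be written with the union operator; a small equivalent sketch:
def foo(a: str, b: str | None = None) -> str | None:
    pass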
To get a better sense of what's possible when passing parameters, it's really helpful to review the different kinds: positional-or-keyword (arg or arg="default_value"), positional-only (before / in the parameter list), keyword-only (after * in the parameter list), var-positional (typically *args) and var-keyword (typically **kwargs). See the Python documentation for an excellent summary; the various other answers to this question make use of most of these variations.
Since you always have parameters a, b, c in your example and you appear to pass them positionally, you could make this more explicit by adding /:
def some_function(self, a, b, c, /, d=None, e=None, f=None, g=None, h=None):
    # code
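To illustrate what the / changes (wrapping the signature in a hypothetical class so self behaves as in the question), positional-only parameters can no longer be passed by keyword:
class C:
    def some_function(self, a, b, c, /, d=None, e=None, f=None, g=None, h=None):
        pass

obj = C()
obj.some_function('1', 2, '3', g="foo", h="bar")   # fine
obj.some_function(a='1', b=2, c='3')               # TypeError: a, b, c are positional-only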
To make Avión's answer work for vector argument inputs:
def test(M, v=None):
    try:
        # elementwise comparison: works when v is an array
        if (v == None).all() == False:
            print('argument passed')
            return M + v
    except:
        # v is None, so .all() raised AttributeError
        print('no argument passed')
        return M
where M is some matrix and v some vector. Both test(M) and test(M, v) produced errors when I attempted to use if statements without try/except.
As mentioned by cem, upgrading to Python 3.10 would allow the union (x | y) (or the Optional[...]) functionality, which might open some doors for alternative methods, but I'm using Anaconda Spyder so I think I have to wait for a new release to use Python 3.10.
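As a side note, the try/except can be avoided entirely by testing identity rather than equality; a sketch of that variant (is None never triggers NumPy's elementwise comparison, so it is safe whether or not v is an array):
import numpy as np

def test(M, v=None):
    if v is None:
        print('no argument passed')
        return M
    print('argument passed')
    return M + v

M = np.eye(2)
test(M)               # no argument passed
test(M, np.ones(2))   # argument passed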

Best way to undo previous steps in a series of steps

I'm trying to find a better way to execute the following functions. I have a series of steps that need to be completed, and if any fail, I need to undo the previous step like so:
try:
    A = createA()
except:
    return None
try:
    B = createB(A)
except:
    deleteA(A)
    return None
try:
    C = createC(B)
except:
    deleteB(B)
    deleteA(A)
    return None
try:
    D = createD(C)
except:
    deleteC(C)
    deleteB(B)
    deleteA(A)
    return None
return D
I would prefer not to repeat myself if possible. How can I improve this? Is there a known pattern to follow?
One thing I have considered would be adding deleteB() to deleteC(), and deleteA() to deleteB(). Is that the best possible way to do it?
If you want to look at design patterns, check the following:
Command pattern
It is probably what you are looking for. Commands can also have an "undo" action. Check the following question as well, since it contains a problem similar to yours: best design pattern for undo feature
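To make the idea concrete, here is a minimal sketch of the Command approach with undo; the CreateStep and run_steps names are made up, and the create/delete callables stand in for your createX/deleteX functions:
class CreateStep:
    def __init__(self, create, delete):
        self.create = create
        self.delete = delete
        self.result = None

    def execute(self, arg=None):
        # the first step takes no argument, later steps take the previous result
        self.result = self.create() if arg is None else self.create(arg)
        return self.result

    def undo(self):
        self.delete(self.result)

def run_steps(steps):
    done = []
    prev = None
    try:
        for step in steps:
            prev = step.execute(prev)
            done.append(step)
        return prev
    except Exception:
        # undo only the steps that completed, in reverse order
        for step in reversed(done):
            step.undo()
        return None

# result = run_steps([CreateStep(createA, deleteA), CreateStep(createB, deleteB),
#                     CreateStep(createC, deleteC), CreateStep(createD, deleteD)])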
As the comments point out, this is what the context manager protocol is for. However, if you don't want to dig into what is a fairly advanced feature yet, then you can define lambda anonymous functions as you go, to remember what to tidy ...
try:
    deleters = []
    A = createA()
    deleters.append(lambda: deleteA(A))
    B = createB(A)
    deleters.append(lambda: deleteB(B))
    C = createC(B)
    deleters.append(lambda: deleteC(C))
    D = createD(C)
    return D
except:
    for d in reversed(deleters):
        d()
    return None
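The standard library also has contextlib.ExitStack, which does essentially this bookkeeping for you; a sketch using the same hypothetical createX/deleteX functions (here a failure lets the exception propagate after cleanup, so wrap the call in try/except if you prefer returning None):
from contextlib import ExitStack

def build():
    with ExitStack() as stack:
        A = createA()
        stack.callback(deleteA, A)
        B = createB(A)
        stack.callback(deleteB, B)
        C = createC(B)
        stack.callback(deleteC, C)
        D = createD(C)
        stack.pop_all()   # success: cancel the registered cleanups
        return D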
It depends on what exactly "undo" means. E.g. if A, B, C, D etc are arbitrary user commands and there may be a large number of them in any order, then likely you need to write some sort of abstraction around do/undo.
If alternatively A, B, C, D etc are resources that need tidying up (database connections, files, etc), then context managers may be more appropriate.
Or maybe A, B, C, D are some other sort of thing altogether.
A brief bit of code using context managers:
class A:
    def __enter__(self):
        # set things up
        return thing
    def __exit__(self, type, value, traceback):
        # tear things down
        ...

# classes B, C and D are similar
def doSomethingThatNeedsToUseD():
    try:
        with A() as a:
            with B() as b:
                with C() as c:
                    with D() as d:
                        d.doSomething()
    except:
        print("error")
What you are looking for is called the memento pattern. It is one of the GoF design patterns:
Without violating encapsulation, capture and externalize an object's internal state so that the object can be restored to this state later.
One way it could be implemented in Python can be found here.

python: return an existing object rather than creating a new object conditionally

My specific situation is as follows: I have an object that takes some arguments, say a, b, c, and d. What I want to happen when I create a new instance of this object is that it checks in a dictionary for the tuple (a,b,c,d), and if this key exists then it returns an existing instance created with arguments a, b, c and d. Otherwise, it will create a new one with arguments a, b, c and d, add it to the dictionary with the key (a,b,c,d), and then return this object.
The code for this isn't complicated, but I don't know where to put it - clearly it can't go in the __init__ method, because assigning to self won't change it, and at this point the new instance has already been made. The problem is that I simply don't know enough about the creation of object instances, and how to do something other than create a new one.
The purpose is to prevent redundancy to save memory in my case; a lot of objects will be made, many of which should be identical because they have the same arguments. They will be immutable, so there would be no danger in changing one of them and affecting the rest. If anyone can give me a way of implementing this, or indeed has a better way than what I have asked that solves the problem, I would appreciate it.
The class is something like:
class X:
    dct = {}
    def __init__(self, a, b, c, d):
        self.a = a
        self.b = b
        self.c = c
        self.d = d
and somewhere I need the code:
if (a, b, c, d) in X.dct:
    return X.dct[(a, b, c, d)]
else:
    obj = X(a, b, c, d)
    X.dct[(a, b, c, d)] = obj
    return obj
and I want this code to run when I do something like:
x = X(a,b,c,d)
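One place this kind of logic can live is __new__, which runs before __init__ and is allowed to return an existing instance; a minimal sketch of that idea (not from the original post, and ignoring details such as thread safety):
class X:
    _cache = {}

    def __new__(cls, a, b, c, d):
        key = (a, b, c, d)
        if key in cls._cache:
            return cls._cache[key]
        obj = super().__new__(cls)
        cls._cache[key] = obj
        return obj

    def __init__(self, a, b, c, d):
        # note: __init__ still runs again for cached instances; harmless here
        # because it just re-sets the same values
        self.a, self.b, self.c, self.d = a, b, c, d

x1 = X(1, 2, 3, 4)
x2 = X(1, 2, 3, 4)
assert x1 is x2   # the same cached instance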

Generate python function with different arguments

Background
I have a function that takes a number of parameters and returns an error measure which I then want to minimize (using scipy.optimize.leastsq, but that is beside the point right now).
As a toy example, let's assume my function to optimize take the four parameters a,b,c,d:
def f(a, b, c, d):
    err = a*b - c*d
    return err
The optimizer then wants a function with the signature func(x, *args), where x is the parameter vector.
That is, my function is currently written like:
def f_opt(x, *args):
    a, b, c, d = x
    err = a*b - c*d
    return err
But, now I want to do a number of experiments where I fix some parameters while keeping some parameters free in the optimization step.
I could of course do something like:
def f_ad_free(x, b, c):
    a, d = x
    return f(a, b, c, d)
But this will be cumbersome since I have over 10 parameters which means the combinations of different numbers of free-vs-fixed parameters will potentially be quite large.
First approach using dicts
One solution I had was to write my inner function f with keyword args instead of positional args and then wrap the solution like this:
def generate(func, all_param, fixed_param):
    param_dict = {k: None for k in all_param}
    free_param = [param for param in all_param if param not in fixed_param]
    def wrapped(x, *args):
        param_dict.update({k: v for k, v in zip(fixed_param, args)})
        param_dict.update({k: v for k, v in zip(free_param, x)})
        return func(**param_dict)
    return wrapped
Creating a function that fixes 'b' and 'c' then turns into the following:
all_params = ['a', 'b', 'c', 'd']
f_bc_fixed = generate(f_inner, all_params, ['b', 'c'])
a = 1
b = 2
c = 3
d = 4
f_bc_fixed((a, d), b, c)
Question time!
My question is whether anyone can think of a neater way solve this. Since the final function is going to be run in an optimization step I can't accept too much overhead for each function call.
The time it takes to generate the optimization function is irrelevant.
I can think of several ways to avoid using a closure as you do above, though after doing some testing I'm not sure either of these will be faster. One approach might be to skip the wrapper and just write a function that accepts:
A vector
A list of free names
A dictionary mapping names to values.
Then do something very like what you do above, but in the function itself:
def f(free_vals, free_names, params):
    params.update(zip(free_names, free_vals))
    err = params['a'] * params['b'] - params['c'] * params['d']
    return err
For code that uses variable names multiple times, make vars local up front, e.g.
a = params['a']
b = params['b']
and so on. This might seem cumbersome, but it has the advantage of making everything explicit, avoiding the kinds of namespace searches that could make closures slow.
Then pass a list of free names and a dictionary of fixed params via the args parameter to optimize.leastsq. (Note that the params dictionary is mutable, which means that there could be side effects in theory; but in this case it shouldn't matter because only the free params are being overwritten by update, so I omitted the copy step for the sake of speed.)
The main downsides of this approach are that it shifts some complexity into the call to optimize.leastsq, and it makes your code less reusable. A second approach avoids those problems though it might not be quite as fast: using a callable class.
class OptWrapper(object):
    def __init__(self, func, free_names, **fixed_params):
        self.func = func
        self.free_names = free_names
        self.params = fixed_params

    def __call__(self, x, *args):
        self.params.update(zip(self.free_names, x))
        return self.func(**self.params)
You can see that I simplified the parameter structure for __init__; the fixed params are passed here as keyword arguments, and the user must ensure that free_names and fixed_params don't have overlapping names. I think the simplicity is worth the tradeoff but you can easily enforce the separation between the two just as you did in your wrapper code.
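For illustration, a hypothetical usage of OptWrapper with the toy f from the question, rewritten to accept keyword arguments:
def f(a=None, b=None, c=None, d=None):
    return a*b - c*d

f_bc_fixed = OptWrapper(f, ['a', 'd'], b=2, c=3)
f_bc_fixed((1, 4))   # equivalent to f(a=1, b=2, c=3, d=4) -> 1*2 - 3*4 = -10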
I like this second approach best; it has the flexibility of your closure-based approach, but I find it more readable. All the names are in (or can be accessed through) the local namespace, which I thought would speed things up -- but after some testing I think there's reason to believe that the closure approach will still be faster than this; accessing the __call__ method seems to add about 100 ns of overhead per call. I would strongly recommend testing if performance is a real issue.
Your generate function is basically the same as functools.partial, which is what I would use here.
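For reference, a short sketch of how functools.partial could express the same fixing of b and c with the keyword-argument version of the toy function; note that this fixes parameters but does not by itself unpack the x vector the optimizer passes:
from functools import partial

def f(a=None, b=None, c=None, d=None):
    return a*b - c*d

f_bc_fixed = partial(f, b=2, c=3)
f_bc_fixed(a=1, d=4)   # -> 1*2 - 3*4 = -10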

Deal with undefined arguments more elegantly

The accepted paradigm to deal with mutable default arguments is:
def func(self, a=None):
    if a is None:
        a = <some_initialisation>
    self.a = a
As I might have to do this for several arguments, I would need to write very similar 3 lines over and over again. I find this un-Pythonically a lot of text to read for a very standard thing to do when initialising class objects or functions.
Isn't there an elegant one-liner to replace those 3 lines dealing with the potentially undefined argument and the standard required copying to the class instance variables?
If a "falsy" value (0, empty string, list, dict, etc.) is not a valid value for a, then you can cut down the initialization to one line:
a = a or <initialize_object>
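A quick illustration of that caveat, using a hypothetical init_list helper:
def init_list(a=None):
    return a or [1, 2, 3]

init_list(None)   # [1, 2, 3], as intended
init_list([])     # also [1, 2, 3]: the empty list is falsy and gets replaced
init_list([0])    # [0] is kept, since a non-empty list is truthy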
Another way of doing the same thing is as follows:
def func(self, **kwargs):
    self.a = kwargs.get('a', <a_initialization>)
    ...
This has the added bonus that the value of a passed to the function could be None and the initialization won't overwrite it. The disadvantage is that a user using the builtin help function won't be able to tell what keywords your function is looking for unless you spell it out explicitly in the docstring.
EDIT
One other comment. The user could call the above function with keywords which are not pulled out of the kwargs dictionary. In some cases, this is good (if you want to pass the keywords to another function for instance). In other cases, this is not what you want. If you want to raise an error if the user provides an unknown keyword, you can do the following:
def func(self, **kwargs):
    self.a = kwargs.pop('a', "Default_a")
    self.b = kwargs.pop('b', "Default_b")
    if kwargs:
        raise ...  # some appropriate exception, possibly using kwargs.keys() to say which keywords were not appropriate for this function
You could do this
def func(self, a=None):
    self.a = <some_initialisation> if a is None else a
But why the obsession with one-liners? I would usually use the 3-line version even if it gets repeated all over the place, because it makes your code very easy for experienced Python programmers to read.
Just a little solution I came up with by using an extra function; it can be improved, of course:
defaultargs.py:
def doInit(var, default_value, condition):
    if condition:
        var = default_value
    return var

def func(a=None, b=None, c=None):
    a = doInit(a, 5, (a is None or not isinstance(a, int)))
    b = doInit(b, 10.0, (b is None or not isinstance(b, float)))
    c = doInit(c, "whatever", (c is None or not isinstance(c, str)))
    print(a)
    print(b)
    print(c)

if __name__ == "__main__":
    func(10)
    func(None, 12341.12)
    func("foo", None, "whowho")
output:
10
10.0
whatever
5
12341.12
whatever
5
10.0
whowho
I like your question. :)
Edit: If you don't care about the variable's type, please don't use isinstance().
