There's a function which takes optional arguments.
def alpha(p1="foo", p2="bar"):
    print('{0},{1}'.format(p1, p2))
Let me run through what happens when we use that function in different ways:
>>> alpha()
foo,bar
>>> alpha("FOO")
FOO,bar
>>> alpha(p2="BAR")
foo,BAR
>>> alpha(p1="FOO", p2=None)
FOO,None
Now consider the case where I want to call it like alpha("FOO", myp2) and myp2 will either contain a value to be passed, or be None. But even though the function handles p2=None, I want it to use its default value "bar" instead.
Maybe that's worded confusingly, so let me reword that:
If myp2 is None, call alpha("FOO"). Else, call alpha("FOO", myp2).
The distinction is relevant because alpha("FOO", None) has a different result than alpha("FOO").
How can I concisely (but readably) make this distinction?
One possibility would usually be to check for None within alpha, which would be encouraged because that would make the code safer. But assume that alpha is used in other places where it is actually supposed to handle None as it does.
I'd like to handle that on the caller-side.
One possibility is to do a case distinction:
if myp2 is None:
    alpha("FOO")
else:
    alpha("FOO", myp2)
But that quickly becomes a lot of code when there are multiple such arguments (the number of branches grows exponentially, 2^n).
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
I am using Python 3.4, but it would be best if your answers provide a good way that works in any Python version.
The question is technically finished here, but I will restate one requirement, since the first answer glossed over it:
I want the behaviour of alpha with its default values "foo", "bar" preserved in general, so it is (probably) not an option to change alpha itself.
To put it yet another way: assume that alpha is also called elsewhere as alpha("FOO", None), where the output FOO,None is the expected behaviour.
Pass the arguments as kwargs from a dictionary, from which you filter out the None values:
kwargs = dict(p1='FOO', p2=None)
alpha(**{k: v for k, v in kwargs.items() if v is not None})
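Because the None entry is filtered out, alpha falls back on its own default for p2, so the call above prints:
FOO,bar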
But assume that alpha is used in other places where it is actually supposed to handle None as it does.
To respond to this concern, I have been known to have a None-like value which isn't actually None for this exact purpose.
_novalue = object()
def alpha(p1=_novalue, p2=_novalue):
    if p1 is _novalue:
        p1 = "foo"
    if p2 is _novalue:
        p2 = "bar"
    print('{0},{1}'.format(p1, p2))
Now the arguments are still optional, so you can neglect to pass either of them. And the function handles None correctly. If you ever want to explicitly not pass an argument, you can pass _novalue.
>>> alpha(p1="FOO", p2=None)
FOO,None
>>> alpha(p1="FOO")
FOO,bar
>>> alpha(p1="FOO", p2=_novalue)
FOO,bar
And since _novalue is a special, made-up value created for this express purpose, anyone who passes _novalue is certainly intending the "default argument" behavior, as opposed to someone who passes None, who might intend the value to be interpreted as a literal None.
Although ** is definitely a language feature, it's surely not created for solving this particular problem. Your suggestion works, and so does mine; which one works better depends on the rest of the OP's code. However, there is still no way to write f(x or dont_pass_it_at_all)
- blue_note
Thanks to your great answers, I thought I'd try to do just that:
# gen.py
def callWithNonNoneArgs(f, *args, **kwargs):
    kwargsNotNone = {k: v for k, v in kwargs.items() if v is not None}
    return f(*args, **kwargsNotNone)
# python interpreter
>>> import gen
>>> def alpha(p1="foo", p2="bar"):
...     print('{0},{1}'.format(p1, p2))
...
>>> gen.callWithNonNoneArgs(alpha, p1="FOO", p2=None)
FOO,bar
>>> def beta(ree, p1="foo", p2="bar"):
...     print('{0},{1},{2}'.format(ree, p1, p2))
...
>>> beta('hello', p2="world")
hello,foo,world
>>> beta('hello', p2=None)
hello,foo,None
>>> gen.callWithNonNoneArgs(beta, 'hello', p2=None)
hello,foo,bar
This is probably not perfect, but it seems to work: it's a function that you can call with another function and its arguments, and it applies deceze's answer to filter out the arguments that are None.
You could inspect the default values via alpha.__defaults__ and then use them instead of None. That way you circumvent the hard-coding of default values:
>>> args = [None]
>>> alpha('FOO', *[x if x is not None else y for x, y in zip(args, alpha.__defaults__[1:])])
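With args = [None], the comprehension substitutes the declared default for the None entry, so the call effectively becomes alpha('FOO', 'bar') and prints:
FOO,bar
The [1:] slice skips the stored default for p1 ("foo"), since p1 is already supplied positionally; alpha.__defaults__ holds only the declared defaults, here ("foo", "bar").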
I had the same problem when calling some Swagger generated client code, which I couldn't modify, where None could end up in the query string if I didn't clean up the arguments before calling the generated methods. I ended up creating a simple helper function:
def defined_kwargs(**kwargs):
    return {k: v for k, v in kwargs.items() if v is not None}
>>> alpha(**defined_kwargs(p1="FOO", p2=None))
FOO,bar
It keeps things quite readable for more complex invocations:
def beta(a, b, p1="foo", p2="bar"):
    print('{0},{1},{2},{3}'.format(a, b, p1, p2))
p1_value = "FOO"
p2_value = None

>>> beta("hello",
...      "world",
...      **defined_kwargs(
...          p1=p1_value,
...          p2=p2_value))
hello,world,FOO,bar
I'm surprised nobody brought this up
def f(p1="foo", p2=None):
    p2 = "bar" if p2 is None else p2
    print(p1 + p2)
You assign None to p2 as the default (or don't, but this way you have the true default in one place in your code) and use an inline if. IMO the most Pythonic answer. Another thing that comes to mind is using a wrapper, but that would be way less readable.
EDIT:
What I'd probably do is use a dummy as the default value and check for that. So something like this:
class dummy():
    pass

def alpha(p1="foo", p2=dummy()):
    if isinstance(p2, dummy):
        p2 = "bar"
    print("{0},{1}".format(p1, p2))
alpha()
alpha("a","b")
alpha(p2=None)
produces:
foo,bar
a,b
foo,None
Unfortunately, there's no way to do what you want. Even widely adopted python libraries/frameworks use your first approach. It's an extra line of code, but it is quite readable.
Do not use the alpha("FOO", myp2 or "bar") approach, because, as you mention yourself, it creates a terrible kind of coupling, since it requires the caller to know details about the function.
Regarding work-arounds: you could make a decorator for your function (using the inspect module) which checks the arguments passed to it. If one of them is None, it replaces the value with its own default value.
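A minimal sketch of such a decorator, assuming inspect.signature is available (Python 3.3+); the names none_means_default and wrapper are made up for illustration:
import functools
import inspect

def none_means_default(func):
    # Replace any argument explicitly passed as None with the parameter's
    # declared default value, taken from the function's signature.
    sig = inspect.signature(func)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            default = sig.parameters[name].default
            if value is None and default is not inspect.Parameter.empty:
                bound.arguments[name] = default
        return func(*bound.args, **bound.kwargs)
    return wrapper
Applied as @none_means_default to alpha, a call like alpha("FOO", None) would then print FOO,bar; the trade-off is that the decorated alpha can no longer receive a literal None, which conflicts with the requirement stated in the question.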
Not a direct answer, but I think this is worth considering:
See if you can break your function into several functions, none of which has any default arguments. Factor any shared functionality out into a function you designate as internal.
def alpha():
    _omega('foo', 'bar')

def beta(p1):
    _omega(p1, 'bar')

def _omega(p1, p2):
    print('{0},{1}'.format(p1, p2))
This works well when the extra arguments trigger "extra" functionality, as it may allow you to give the functions more descriptive names.
Functions with boolean arguments that default to True and/or False frequently benefit from this type of approach.
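A hypothetical illustration (the names save, save_unvalidated and _save are made up, not from the question): instead of a single save(obj, validate=True), you expose two descriptively named entry points that share an internal worker:
def save(obj):
    _save(obj, validate=True)

def save_unvalidated(obj):
    _save(obj, validate=False)

def _save(obj, validate):
    if validate:
        pass  # validation logic would go here
    pass  # persistence logic would go here
Callers then never have to pass (or deliberately omit) the boolean at all.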
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
Just create a constant:
P2_DEFAULT = "bar"

def alpha(p1="foo", p2=P2_DEFAULT):
    print('{0},{1}'.format(p1, p2))
and call the function:
alpha("FOO", myp2 or P2_DEFAULT)
If the default values for alpha are ever changed, we only have to change one constant.
Be careful with logical or in some cases; see https://stackoverflow.com/a/4978745/3605259
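As a quick illustration of the pitfall described in that link (reusing the P2_DEFAULT constant from above): or falls back to the default for every falsy value, not just None, so an intentionally empty value is silently replaced:
myp2 = ""  # the caller genuinely wants an empty string
alpha("FOO", myp2 or P2_DEFAULT)                        # prints FOO,bar -- the empty string is lost
alpha("FOO", myp2 if myp2 is not None else P2_DEFAULT)  # prints FOO, -- the empty string is kept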
One more (better) use case
For example, we have some config (dictionary). But some values are not present:
config = {'name': 'Johnny', 'age': '33'}
work_type = config.get('work_type', P2_DEFAULT)
alpha("FOO", work_type)
So we use the dict method get(key, default_value), which returns default_value if the config (dict) does not contain the key.
As I cannot comment on answers yet, I'd like to add that the first solution (unpacking the kwargs) would fit nicely in a decorator as follows:
from functools import wraps

def remove_none_from_kwargs(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        return func(self, *args, **{k: v for k, v in kwargs.items() if v is not None})
    return wrapper
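Hypothetical usage (the class name Client and the method call_alpha are made up for illustration); note that the wrapper above expects self, so it is written for methods:
class Client:
    @remove_none_from_kwargs
    def call_alpha(self, p1="foo", p2="bar"):
        print('{0},{1}'.format(p1, p2))

Client().call_alpha(p1="FOO", p2=None)  # prints FOO,bar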
The default argument logic=_changelog_txt in the code below:
def writeChangelog(repo, milestone, overwrite=False, extension=u'.txt',
                   logic=_changelog_txt): # HERE
    """Write 'Changelog - <milestone>.txt'"""
    outFile = _outFile(dir_=CHANGELOGS_DIR,
                       name=u'Changelog - ' + milestone.title + extension)
    if os.path.isfile(outFile) and not overwrite: return outFile
    issues = getClosedIssues(repo, milestone, skip_labels=SKIP_LABELS)
    return logic(issues, milestone, outFile)
def writeChangelogBBcode(repo, milestone, overwrite=False):
    """Write 'Changelog - <milestone>.bbcode.txt'"""
    return writeChangelog(repo, milestone, overwrite, extension=u'.bbcode.txt',
                          logic=_changelog_bbcode) # no errors here
def _changelog_txt(issues, milestone, outFile):
    with open(outFile, 'w') as out:
        out.write(h2(_title(milestone)))
        out.write('\n'.join(ul(issues, closedIssue)))
        out.write('\n')
    return outFile
gives me Unresolved reference '_changelog_txt'. What is the most Pythonic way to do what I want? See also: What is the best way to pass a method (with parameters) to another method in Python.
This is a matter of order. As _changelog_txt is not yet defined when you define the function writeChangelog, it throws an error.
This works:
def b(s):
    print s

def a(f=b):
    f("hello")
a()
This does not:
def a(f=b):
    f("hello")

def b(s):
    print s
a()
It should be noted that this has nothing to do with the keyword argument's default value being a function. It could be any other name that is undefined at the point where the function is defined. There is no such thing as _changelog_txt when the interpreter encounters the def writeChangelog.
Reordering the code is a good alternative in this case.
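A minimal sketch of that reordering, using the names from the question (bodies elided):
def _changelog_txt(issues, milestone, outFile):
    pass  # write the .txt changelog, as in the question

def writeChangelog(repo, milestone, overwrite=False, extension=u'.txt',
                   logic=_changelog_txt):  # _changelog_txt is now defined
    pass  # build outFile and dispatch to logic, as in the question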
The situation at runtime is different: by the time anything is called, the interpreter has already executed all the defs, which is why one seldom bumps into this kind of problem in Python.
As an addition to DrV's answer:
In Python, a function's signature (including its default values) is evaluated when the interpreter sees the def for the first time, not at call time. So, in terms of scope, your code is equivalent to the following:
b = a
a = 1
Output:
b = a
NameError: name 'a' is not defined
I hope it is clear now why your code doesn't work.
On a side note: While this behavior makes the scope in which default parameter expressions are evaluated much more obvious, it is also an easy source of bugs, e.g.:
def foo(bar=[]):
    bar.append(1)
    return bar
print(foo())
print(foo())
Output:
[1]
[1, 1]
Here, the default value is always the same list – across all calls of foo – because the function signature is evaluated only once. The solution is to use None as default value and do an explicit check:
def foo(bar=None):
    if bar is None:
        bar = []
    bar.append(1)
    return bar
print(foo())
print(foo())
Output:
[1]
[1]
I know how to override the string class with:
class UserString:
    def __str__(self):
        return 'Overridden String'

if __name__ == '__main__':
    print UserString()
But how can I use this class in place of the built-in str class without explicitly using UserString? To be clear, I want this:
>>> print "Boo boo!"
Overridden String
It is not possible; you have not overridden the string class. You cannot override classes, only methods. What you have done is define a new class and override only its __str__() method.
But you can do something like this...
def overriden_print(x):
    print "Overriden in the past!"
from __future__ import print_function # import after defining overriden_print (this ordering only works at the interactive prompt; in a file, the __future__ import must come first)
print = overriden_print
print("Hello")
Output:
Overriden in the past!
It's impossible to do what you want without hacking the Python interpreter itself... after all, str is a built-in type, and the interpreter, when it encounters string literals, will always create built-in strings.
However... it is possible, using delegation, to do something like this. This is slightly modified from another stackoverflow recipe (which sadly, I did not include a link to in my code...), so if this is your code, please feel free to claim it :)
def returnthisclassfrom(specials):
    specialnames = ['__%s__' % s for s in specials.split()]
    def wrapit(cls, method):
        return lambda *a: cls(method(*a))
    def dowrap(cls):
        for n in specialnames:
            method = getattr(cls, n)
            setattr(cls, n, wrapit(cls, method))
        return cls
    return dowrap
Then you use it like this:
@returnthisclassfrom('add mul mod')
class UserString(str):
    pass
In [11]: first = UserString('first')
In [12]: print first
first
In [13]: type(first)
Out[13]: __main__.UserString
In [14]: second = first + 'second'
In [15]: print second
firstsecond
In [16]: type(second)
Out[16]: __main__.UserString
One downside of this is that str has no __radd__ support, so 'string1' + UserString('string2') will give a plain str, whereas UserString('string1') + 'string2' gives a UserString. Not sure if there is a way around that.
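One thing that might work (an untested sketch, relying on the fact that Python prefers the reflected method when the right operand's type is a subclass of the left operand's type) is to define __radd__ explicitly on the subclass:
@returnthisclassfrom('add mul mod')
class UserString(str):
    def __radd__(self, other):
        # 'plain' + UserString('x') now also comes back as a UserString
        return UserString(other + str(self))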
Maybe not helpful, but hopefully it puts you on the right track.
I'm calling them keyed arrays because if I knew what they were called, I could find the answer myself =-)
OK, for example:
parser = OptionParser(conflict_handler="resolve")
parser.add_option("-x", dest="var_x", help="")
parser.add_option("-y", dest="var_y", help="")
(options, args) = parser.parse_args()
generates an option object that can be used like this:
foobar = options.var_x
What are these called, and where would I find some documentation on how to create and use them?
One class that does something very similar is namedtuple:
In [1]: from collections import namedtuple
In [2]: Point = namedtuple('Point', ['x', 'y'])
In [4]: p = Point(1, 2)
In [5]: p.x
Out[5]: 1
In [6]: p.y
Out[6]: 2
One possibility is to wrap a dictionary in an object; see below for the class definition:
class Struct:
    def __init__(self, **entries):
        self.__dict__.update(entries)
Then just pass a dictionary to the constructor, like so:
adictionary = {'dest':'ination', 'bla':2}
options = Struct(**adictionary)
options.dest
options.bla
options.dest will return 'ination', and options.bla will return 2.
If you do help(options) at the interactive terminal, you'll see this is an optparse.Values instance. It's not intended for making your own things, really.
Using attribute access for key–value pairs is usually silly. Much of the time people who insist on it should really just be using a dict.
The main built-in way to do something along these lines is collections.namedtuple.
I have an unknown number of functions in my Python script (well, it is known, but not constant) that start with site_...
I was wondering if there's a way to go through all of these functions in some main function that calls for them.
something like:
foreach function_that_has_site_ as coolfunc
    if coolfunc(blabla, yada) == true:
        return coolfunc(blabla, yada)
so it would go through them all until it gets something that's true.
thanks!
The inspect module, already mentioned in other answers, is especially handy because you get to easily filter the names and values of objects you care about. inspect.getmembers takes two arguments: the object whose members you're exploring, and a predicate (a function returning bool) which will accept (return True for) only the objects you care about.
To get "the object that is this module" you need the following well-known idiom:
import sys
this_module = sys.modules[__name__]
In your predicate, you want to select only objects which are functions and have names that start with site_:
import inspect
def function_that_has_site(f):
    return inspect.isfunction(f) and f.__name__.startswith('site_')
With these two items in hand, your loop becomes:
for n, coolfunc in inspect.getmembers(this_module, function_that_has_site):
    result = coolfunc(blabla, yada)
    if result: return result
I have also split the loop body so that each function is called only once (which both saves time and is a safer approach, avoiding possible side effects)... as well as rewording it in Python;-)
Have you tried using the inspect module?
http://docs.python.org/library/inspect.html
The following will return the methods:
inspect.getmembers
Then you could invoke with:
methodobjToInvoke = getattr(classObj, methodName)
methodobjToInvoke("arguments")
This method goes through all properties of the current module and executes all functions it finds with a name starting with site_:
import sys
import types
for elm in dir():
    f = getattr(sys.modules[__name__], elm)
    if isinstance(f, types.FunctionType) and f.__name__[:5] == "site_":
        f()
The function-type check is unnecessary if only functions have names starting with site_.
def run():
    for f_name, f in globals().iteritems():
        if not f_name.startswith('site_'):
            continue
        x = f()
        if x:
            return x
It's best to use a decorator to enumerate the functions you care about:
_funcs = []

def enumfunc(func):
    _funcs.append(func)
    return func

@enumfunc
def a():
    print 'foo'

@enumfunc
def b():
    print 'bar'

@enumfunc
def c():
    print 'baz'

if __name__ == '__main__':
    for f in _funcs:
        f()
Try dir(), globals() or locals(). Or the inspect module (as mentioned above).
def site_foo():
    pass

def site_bar():
    pass

for name, f in globals().items():
    if name.startswith("site_"):
        print name, f()