I've sometimes seen code with kwarg=kwarg in one of the functions as shown below:
def func1(foo, kwarg):
    return foo + kwarg

def func2(bar, kwarg):
    return func1(bar * 2, kwarg=kwarg)

print(func2(4, 5))
I've normally tried to avoid this notation (e.g. by using kwarg1=kwarg2) in order to avoid any possible bugs, but is this actually necessary?
There's nothing wrong with it - in this case kwarg is just a variable name - it's not reserved. It may cause a bit of confusion, though, since def func(**kwargs): is the common syntax for collecting all the "keyword arguments" passed into a function into a dictionary. Since you're not doing that here, using such a similar name is unnecessarily confusing. Although it's not clear you're talking about using that exact name, so maybe this is just an issue with the example.
But broadly speaking, passing something=something is fairly common practice. You'll see it in lots of places: for example, if you're iterating through a color palette in Matplotlib, you might pass color=color into plot, or if you're building a list of headers in Pandas you might pass columns=columns into DataFrame.
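For instance, a minimal sketch of the Matplotlib case (the data and palette here are made up for illustration):

import matplotlib.pyplot as plt

lines = [[1, 2, 3], [2, 4, 6]]
palette = ['red', 'blue']

for line, color in zip(lines, palette):
    plt.plot(line, color=color)  # local variable 'color' passed to the keyword of the same name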
Bottom line is it should be clear. If it is, it's good. If it's not, it isn't.
In other languages, like Java, you can do something like this:
String path;
if (exists(path = "/some/path"))
    my_path = path;
the point being that path is being set as part of specifying a parameter to a method call. I know that this doesn't work in Python. It is something that I've always wished Python had.
Is there any way to accomplish this in Python? What I mean here by "accomplish" is to be able to write both the call to exists and the assignment to path, as a single statement with no prior supporting code being necessary.
I'll be OK with it if a way of doing this requires the use of an additional call to a function or method, including anything I might write myself. I spent a little time trying to come up with such a module, but failed to come up with anything that was less ugly than just doing the assignment before calling the function.
UPDATE: @BrokenBenchmark's answer is perfect if one can assume Python 3.8 or better. Unfortunately, I can't yet do that, so I'm still searching for a solution to this problem that will work with Python 3.7 and earlier.
Yes, you can use the walrus operator if you're using Python 3.8 or above:
import os

if os.path.isdir(path := "/some/path"):
    my_path = path
I've come up with something that has some issues, but does technically get me where I was looking to be. Maybe someone else will have ideas for improving this to make it fully cool. Here's what I have:
# In a utility module somewhere
def v(varname, arg=None):
    if arg is not None:
        if not hasattr(v, 'vals'):
            v.vals = {}
        v.vals[varname] = arg
    return v.vals[varname]

# At point of use
if os.path.exists(v('path1', os.path.expanduser('~/.harmony/mnt/fetch_devqa'))):
    fetch_devqa_path = v('path1')
As you can see, this fits my requirement of no extra lines of code. The "variable" involved, path1 in this example, is stored on the function that implements all of this, on a per-variable-name basis.
One can question whether this is concise and readable enough to be worth the bother. For me, the jury is still out. If not for the need to call the v() function a second time, I think I'd be good with it structurally.
The only functional problem I see with this is that it isn't thread-safe. Two copies of the code could run concurrently and run into a race condition between the two calls to v(). The same problem is greatly magnified if one fails to choose unique variable names every time this is used. That's probably the deal killer here.
Can anyone see how to use this to get to a similar solution without the drawbacks?
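One direction I considered (a sketch only, untested): keeping the values in threading.local() so that concurrent threads each get their own mapping and can't race between the two calls to v(). Name collisions within a single thread would still be possible, though:

import threading

# Sketch: same v() idea, but backed by thread-local storage so two
# threads can't race between the two calls to v().
_tls = threading.local()

def v(varname, arg=None):
    if not hasattr(_tls, 'vals'):
        _tls.vals = {}
    if arg is not None:
        _tls.vals[varname] = arg
    return _tls.vals[varname]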
I am trying to figure out the solution for the following problem:
#ExampleA.py
class a:
    def my_great_method_A(self):
        pass

#ExampleB.py
def functionX(inst_a):  # Argument 'inst_a' will always be ExampleA.py's class a.
    inst_a.my_great_method_A()  # <---
I use LiClipse as a Python editor. When I am typing the last line, "inst_a.my_gr...", I want the editor's autocompletion feature to kick in and suggest "my_great_method_A()". However, it does not suggest anything.
I understand why: the editor has no way of knowing that 'inst_a' is an instance of class 'a'. To deal with this issue, I could do the following to make the autocompletion work:
#ExampleA.py
class a:
    def my_great_method_A(self):
        pass

#ExampleB.py
import ExampleA

def functionX(inst_a):  # Argument 'inst_a' will always be ExampleA.py's class a.
    ExampleA.a.my_great_method_A(inst_a)  # <--- then autocompletion works
However, for the code's readability, I would rather use the instance.method() format, and I believe everyone else does the same. But I do not know how everyone deals with this. Many times I have to go into the imported file and copy & paste the method name, which is tedious. Obviously I am missing something that everyone else is aware of. By the way, this is my first time posting on Stack Overflow. I hope this is a valid thing to ask here.
LiClipse/PyDev can recognize type hints in docstrings (as explained in http://www.pydev.org/manual_adv_type_hints.html) or using the new PEP 484 type hints (https://www.python.org/dev/peps/pep-0484/)... So, if you use one of those, it should work.
Note: I personally like docstrings better, but it's probably a matter of taste and both should be recognizable by LiClipse/PyDev.
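For instance, with PEP 484 annotations the function from the question could look like this (a sketch; the docstring variant from the PyDev manual works analogously):

#ExampleB.py
import ExampleA

def functionX(inst_a: ExampleA.a) -> None:
    inst_a.my_great_method_A()  # the editor can now infer the type of inst_a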
I don't know of a way to make the editor guess the type you're expecting. And since Python is dynamically typed, it will always only be guessing.
However, note that your workaround of calling the method explicitly through the class is not good practice. It will not let you pass extensions (subclasses) of ExampleA's class in the future as your code evolves.
So it's about more than just readability.
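For example (a sketch with a hypothetical subclass b):

import ExampleA

class b(ExampleA.a):
    def my_great_method_A(self):
        print("subclass behaviour")

inst = b()
ExampleA.a.my_great_method_A(inst)  # always runs the base implementation
inst.my_great_method_A()            # dispatches to the subclass override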
I'm asking about situations where if a wrong type of argument is passed to the function, it could:
Blow up the whole thing.
Return unexpected results
Return nothing
For instance, the function below expects the argument name to be a string. It would throw an exception for any other type that doesn't have a startswith method.
def fruits(name):
    if name.startswith('O'):
        print('Is it Orange?')
There are other cases where a function could halt or cause damage to the system if execution proceeds without type checking. Whenever there are a lot of functions, or functions with a lot of arguments, type checking is tedious and makes the code unreadable. So, is there a standard for doing this? As to 'how to type check', there are plenty of examples here on Stack Exchange, but I couldn't find any about where it would be appropriate to do so.
Another example would be:
def fruits(names):
    with open('important_file.txt', 'r+') as fil:
        for name in names:
            if name in fil:
                pass  # Edit the file
Here, if names is a string, each character in it will influence the editing of the file. If it is any other iterable, each element provided by it will influence the editing. Both of these could produce different results.
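To make the difference concrete (illustrative calls, assuming important_file.txt exists):

fruits('Orange')             # iterates over the characters 'O', 'r', 'a', ...
fruits(['Orange', 'Apple'])  # iterates over the two names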
So, when should we type-check an argument and should we not?
The answer off the top of my head would be: it depends where the input comes from.
If the functions are class methods that get invoked internally, or things like that, you can assume the inputs are valid, because you wrote it!
For example:

def add(x, y):
    return x + y

def multiply(a, b):
    product = 0
    for i in range(a):
        product = add(product, b)
    return product
In my add function, I could check that there is a + operator for the parameters x and y. But since I wrote the multiply function, and that is the only function that uses add, it is safe to assume the inputs will be int because that's how I wrote it. Now that argument stands on shaky ground for large code bases where you (hopefully) have shared code, so you can't be sure people don't misuse your functions. But that's why you comment them well to describe the correct use of said function.
If it has to read from a file, get user input, etc, then you may want to do some validation first.
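For instance, a function fed from external input might validate its argument up front (a sketch reusing the fruits example from the question):

def fruits(name):
    if not isinstance(name, str):
        raise TypeError(f"expected str, got {type(name).__name__}")
    if name.startswith('O'):
        print('Is it Orange?')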
I almost never do type checking in Python. In accordance with the Pythonic philosophy, I assume that I and other programmers are adults, capable of reading the code (or at least the documentation) and using it properly. I assume that we test our code before we let it destroy something important. After all, in most cases when you do something wrong, you'll just see an error, and Python's error messages are quite informative most of the time.
The only occasion when I sometimes check types is when I want my function to behave differently depending on the argument's type. But although I sometimes feel compelled to do this, I don't consider it a good practice.
Most often it happens when my function iterates over a list of strings and I fear (or want) that a single string could be passed into it by accident - this won't throw an error right away because, unfortunately, a string is iterable too.
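A sketch of that one check (the function name here is made up for illustration):

def print_names(names):
    if isinstance(names, str):
        raise TypeError("expected an iterable of strings, not a single string")
    for name in names:
        print(name.upper())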
I have some functions in my code that accept either an object or an iterable of objects as input. I was taught to use meaningful names for everything, but I am not sure how to comply here. What should I call a parameter that can be either a single object or an iterable of objects? I have come up with two ideas, but I don't like either of them:
FooOrManyFoos - This expresses what goes on, but I could imagine that someone not used to it could have trouble understanding what it means right away
param - Some generic name. This makes clear that it can be several things, but explains nothing about what the parameter is used for.
Normally I call iterables of objects just the plural of what I would call a single object. I know this might seem a little bit compulsive, but Python is supposed to be (among others) about readability.
I have some functions in my code that accept either an object or an iterable of objects as input.
This is a very exceptional and often very bad thing to do. It's trivially avoidable.
i.e., pass [foo] instead of foo when calling this function.
The only time you can justify doing this is when (1) you have an installed base of software that expects one form (iterable or singleton) and (2) you have to expand it to support the other use case. So. You only do this when expanding an existing function that has an existing code base.
If this is new development, Do Not Do This.
I have come up with two ideas, but I don't like either of them:
[Only two?]
FooOrManyFoos - This expresses what goes on, but I could imagine that someone not used to it could have trouble understanding what it means right away
What? Are you saying you provide NO other documentation, and no other training? No support? No advice? Who is the "someone not used to it"? Talk to them. Don't assume or imagine things about them.
Also, don't use Leading Upper Case Names.
param - Some generic name. This makes clear that it can be several things, but explains nothing about what the parameter is used for.
Terrible. Never. Do. This.
I looked in the Python library for examples. Most of the functions that do this have simple descriptions.
http://docs.python.org/library/functions.html#isinstance
isinstance(object, classinfo)
They call it "classinfo" and it can be a class or a tuple of classes.
You could do that, too.
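For example:

isinstance(3, int)             # True
isinstance(3, (int, float))    # True - classinfo given as a tuple of classes
isinstance('3', (int, float))  # False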
You must consider the common use case and the exceptions. Follow the 80/20 rule.
80% of the time, you can replace this with an iterable and not have this problem.
In the remaining 20% of the cases, you have an installed base of software built around an assumption (either iterable or single item) and you need to add the other case. Don't change the name, just change the documentation. If it used to say "foo" it still says "foo" but you make it accept an iterable of "foo's" without making any change to the parameters. If it used to say "foo_list" or "foo_iter", then it still says "foo_list" or "foo_iter" but it will quietly tolerate a singleton without breaking.
80% of the code is the legacy ("foo" or "foo_list")
20% of the code is the new feature ("foo" can be an iterable or "foo_list" can be a single object.)
I guess I'm a little late to the party, but I'm surprised that nobody suggested a decorator.
def withmany(f):
    def many(many_foos):
        for foo in many_foos:
            yield f(foo)
    f.many = many
    return f

@withmany
def process_foo(foo):
    return foo + 1

# assuming foo is a single item and foos is an iterable of items
processed_foo = process_foo(foo)

for processed_foo in process_foo.many(foos):
    print(processed_foo)
I saw a similar pattern in one of Alex Martelli's posts, but I don't remember the link offhand.
It sounds like you're agonizing over the ugliness of code like:
def ProcessWidget(widget_thing):
    # Infer whether we have a singleton instance and wrap it in a
    # length-1 list for consistency
    if isinstance(widget_thing, WidgetType):
        widget_thing = [widget_thing]
    for widget in widget_thing:
        ...
My suggestion is to avoid overloading your interface to handle two distinct cases. I tend to write code that favors re-use and clear naming of methods over clever dynamic use of parameters:
def ProcessOneWidget(widget):
    ...

def ProcessManyWidgets(widgets):
    for widget in widgets:
        ProcessOneWidget(widget)
Often, I start with this simple pattern, but then have the opportunity to optimize the "Many" case when there are efficiencies to gain that offset the additional code complexity and partial duplication of functionality. If this convention seems overly verbose, one can opt for names like "ProcessWidget" and "ProcessWidgets", though the difference between the two is a single easily missed character.
You can use *args magic (varargs) to make your params always be iterable.
Pass a single item or multiple known items as normal function args, like func(arg1, arg2, ...), and unpack an iterable argument with an asterisk in front, like func(*args).
Example:
# magic *args function
def foo(*args):
    print(args)

# many ways to call it
foo(1)
foo(1, 2, 3)

args1 = (1, 2, 3)
args2 = [1, 2, 3]
args3 = iter((1, 2, 3))

foo(*args1)
foo(*args2)
foo(*args3)
Can you name your parameter in a very high-level way? People who read the code are more interested in knowing what the parameter represents ("clients") than what its type is ("list_of_tuples"); the type can be described in the function's documentation string, which is a good thing since it might change in the future (the type is sometimes an implementation detail).
I would do one thing:
def myFunc(manyFoos):
    if type(manyFoos) not in (list, tuple):
        manyFoos = [manyFoos]
    # do stuff here
so then you don't need to worry about its name anymore.
Within a function you should try to have a single action, accept the same parameter type, and return the same type.
Instead of filling the function with ifs, you could have two functions.
Since you don't care exactly what kind of iterable you get, you could try to get an iterator for the parameter using iter(). If iter() raises a TypeError exception, the parameter is not iterable, so you then create a list or tuple of the one item, which is iterable and Bob's your uncle.
def doIt(foos):
    try:
        iter(foos)
    except TypeError:
        foos = [foos]
    for foo in foos:
        pass  # do something here
The only problem with this approach is if foo is a string. A string is iterable, so passing in a single string rather than a list of strings will result in iterating over the characters in a string. If this is a concern, you could add an if test for it. At this point it's getting wordy for boilerplate code, so I'd break it out into its own function.
def iterfy(iterable):
    if isinstance(iterable, str):
        iterable = [iterable]
    try:
        iter(iterable)
    except TypeError:
        iterable = [iterable]
    return iterable

def doIt(foos):
    for foo in iterfy(foos):
        pass  # do something
Unlike some of those answering, I like doing this, since it eliminates one thing the caller could get wrong when using your API. "Be conservative in what you generate but liberal in what you accept."
To answer your original question, i.e. what you should name the parameter, I would still go with "foos" even though you will accept a single item, since your intent is to accept a list. If it's not iterable, that is technically a mistake, albeit one you will correct for the caller since processing just the one item is probably what they want. Also, if the caller thinks they must pass in an iterable even of one item, well, that will of course work fine and requires very little syntax, so why worry about correcting their misapprehension?
I would go with a name explaining that the parameter can be an instance or a list of instances. Say one_or_more_Foo_objects. I find it better than the bland param.
I'm working on a fairly big project now and we're passing maps around and just calling our parameter map. The map contents vary depending on the function that's being called. This probably isn't the best situation, but we reuse a lot of the same code on the maps, so copying and pasting is easier.
I would say that instead of naming it what it is, you should name it what it's used for. Also, just be careful that you can't use the in operator on something that isn't iterable.
Suppose I need to create my own small DSL that would use Python to describe a certain data structure. E.g. I'd like to be able to write something like
f(x) = some_stuff(a,b,c)
and have Python, instead of complaining about undeclared identifiers or attempting to invoke the function some_stuff, convert it to a literal expression for my further convenience.
It is possible to get a reasonable approximation to this by creating a class with properly redefined __getattr__ and __setattr__ methods and use it as follows:
e = Expression()
e.f[e.x] = e.some_stuff(e.a, e.b, e.c)
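A minimal sketch of the idea (illustrative only, far from complete):

class Symbol:
    def __init__(self, name, bindings):
        self.name = name
        self.bindings = bindings

    def __call__(self, *args):
        return ('call', self.name, args)  # record the call symbolically

    def __setitem__(self, key, value):
        self.bindings[(self.name, key)] = value  # record e.f[e.x] = ...

class Expression:
    def __init__(self):
        self.bindings = {}

    def __getattr__(self, name):
        return Symbol(name, self.bindings)  # any unknown attribute becomes a symbol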
It would be cool though, if it were possible to get rid of the annoying "e." prefixes and maybe even avoid the use of []. So I was wondering, is it possible to somehow temporarily "redefine" global name lookups and assignments? On a related note, maybe there are good packages for easily achieving such "quoting" functionality for Python expressions?
I'm not sure it's a good idea, but I thought I'd give it a try. To summarize:
class PermissiveDict(dict):
    default = None

    def __getitem__(self, item):
        try:
            return dict.__getitem__(self, item)
        except KeyError:
            return self.default

def exec_with_default(code, default=None):
    ns = PermissiveDict()
    ns.default = default
    exec(code, ns)
    return ns
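A quick usage sketch (assuming CPython 3, where name lookups in code exec'd with a dict subclass as its namespace go through our __getitem__):

ns = exec_with_default("y = undefined_name + 1", default=0)
print(ns['y'])  # prints 1: 'undefined_name' fell back to the default of 0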
You might want to take a look at the ast or parser modules included with Python to parse, access and transform the abstract syntax tree (or parse tree, respectively) of the input code. As far as I know, the Sage mathematical system, written in Python, has a similar sort of precompiler.
In response to Wai's comment, here's one fun solution that I've found. First of all, to explain once more what it does, suppose that you have the following code:
definitions = Structure()
definitions.add_definition('f[x]', 'x*2')
definitions.add_definition('f[z]', 'some_function(z)')
definitions.add_definition('g.i', 'some_object[i].method(param=value)')
where adding definitions implies parsing the left-hand sides and the right-hand sides and doing other ugly stuff. Now, one (not necessarily good, but certainly fun) approach here would allow you to write the above code as follows:
@my_dsl
def definitions():
    f[x] = x*2
    f[z] = some_function(z)
    g.i = some_object[i].method(param=value)
and have Python do most of the parsing under the hood.
The idea is based on the simple exec(code, environment) call, mentioned by Ian, with one hackish addition. Namely, the bytecode of the function must be slightly tweaked, with all local variable access operations (LOAD_FAST) switched to variable access from the environment (LOAD_NAME).
It is easier shown than explained: http://fouryears.eu/wp-content/uploads/pydsl/
There are various tricks you may want to apply to make it practical. For example, in the code presented at the link above you can't use builtin functions and language constructs like for loops and if statements within a @my_dsl function. You can make those work, however, by adding more behaviour to the Env class.
Update. Here is a slightly more verbose explanation of the same thing.