I am new to PyTorch and while going through the examples, I noticed that sometimes functions have a different convention when accepting arguments. For example transforms.Compose receives a list as its argument:
transform=transforms.Compose([  # Here we pass a list of elements
    transforms.ToTensor(),
    transforms.Normalize(
        (0.4915, 0.4823, 0.4468),
        (0.2470, 0.2435, 0.2616)
    )
])
At the same time, other functions receive the arguments individually (i.e. not in a list). For example torch.nn.Sequential:
torch.nn.Sequential(  # Here we pass individual elements
    torch.nn.Linear(1, 4),
    torch.nn.Tanh(),
    torch.nn.Linear(4, 1)
)
This has been a common typing mistake for me while learning.
I wonder if we are implying something when:

- the arguments are passed as a list
- the arguments are passed as individual items
Or is it simply the preference of the contributing author and should be memorized as is?
Update 1: Note that I do not claim that either format is better. I am merely complaining about the lack of consistency. Of course (as Ivan stated in his answer) it makes perfect sense to follow one format if there is a good reason for it (e.g. transforms.Normalize). But if there is not, then I would vote for consistency.
This is not a convention, it is a design decision.
Yes, torch.nn.Sequential (source) receives individual items, whereas torchvision.transforms.Compose (source) receives a single list of items. These are arbitrary design choices. I believe PyTorch and Torchvision are maintained by different groups of people, which might explain the difference. One could argue it is more coherent to pass the inputs as a list, since they have a variable length; this is the approach used in more conventional programming languages such as C++ and Java. On the other hand, you could argue it is more readable to pass them as a sequence of separate arguments, which is what languages such as Python encourage.
In this particular case we would have
>>> fn1([element_a, element_b, element_c]) # single list
vs
>>> fn2(element_a, element_b, element_c) # separate args
Which would have an implementation that resembles:
def fn1(elements):
    pass
vs using the star argument:
def fn2(*elements):
    pass
However, it is not always just a design decision; sometimes one implementation is clearly preferable. For instance, the list approach is much preferred when the function has other arguments (whether positional or keyword arguments). In that case it makes more sense to implement it as fn1 rather than fn2. Here is a second example with keyword arguments; look at the difference in interface for the first set of arguments in both scenarios:
>>> fn1([element_a, element_b], option_1=True, option_2=True) # list
vs
>>> fn2(element_a, element_b, option_1=True, option_2=True) # separate
This would have a function header that looks something like:
def fn1(elements, option_1=False, option_2=False):
    pass
While the other would be using a star argument under the hood:
def fn2(*elements, option_1=False, option_2=False):
    pass
If an argument is positioned after the star argument it essentially forces the user to use it as a keyword argument...
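For example, here is a quick sketch reusing the hypothetical fn2 from above:

def fn2(*elements, option_1=False):
    return elements, option_1

fn2(1, 2, option_1=True)  # works: elements=(1, 2), option_1=True
fn2(1, 2, True)           # True is swallowed into *elements; option_1 stays False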
With this in mind, you can check out the source code for both Compose and Sequential: you will notice that both only expect a sequence of elements and no additional arguments afterwards. So in this scenario it might have been preferable to go with Sequential's approach of using the star argument... but this is just personal preference!
Related
Let's say you're writing a child class whose constructor passes its unused kwargs up to the parent constructor, but your class has an argument x that it needs to store and that shouldn't be passed to the parent.
I have seen two different approaches to this:
def __init__(self, **kwargs):
    self.x = kwargs.pop('x', 'default')
    super().__init__(**kwargs)
and
def __init__(self, x='default', **kwargs):
    self.x = x
    super().__init__(**kwargs)
Is there ever any functional difference between these two constructors? Is there any reason to use one over the other?
The only difference I can see is that the second form, which defines x in the signature, allows the user to better see it as a possible argument, or an IDE to offer it as an autocomplete option. Or in Python 3.5+, you could add a type annotation to x. Does that make the first form objectively worse?
As already mentioned by Giacomo Alzetta in a comment, the second version allows you to pass x as a positional argument, while the first only allows named arguments. In other words, with the second form you can use both Child(x=2) AND Child(2), while the first only supports Child(x=2).
Also, when using introspection to check the method's signature, the second form will clearly mention the existence of the x param, while the first won't (see the sketch below).
And finally, the second version will yield a slightly clearer exception if x is not passed.
And that's for the functional differences.
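To make the introspection difference concrete, here is a small sketch (the class names are illustrative, not from the question):

import inspect

class ChildPop:
    def __init__(self, **kwargs):
        self.x = kwargs.pop('x', 'default')

class ChildSig:
    def __init__(self, x='default', **kwargs):
        self.x = x

print(inspect.signature(ChildPop.__init__))  # (self, **kwargs) -- x is invisible
print(inspect.signature(ChildSig.__init__))  # (self, x='default', **kwargs)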
Is there any reason to use one over the other?
Well... As a general rule, it's cleaner (best practice) to use explicit arguments whenever possible, even if only for readability, and from experience it does usually make maintenance easier indeed. So from this point of view, the second form can be seen as "objectively better" than the first.
This being said, when the parent method has dozens of mostly optional and rarely used arguments (django.forms.Form, I'm looking at you) AND you also want to preserve positional arguments order, it can be convenient to just use the generic *args, **kwargs signature for the child and force the additional param(s) to be passed as kwargs. Assuming you clearly document this in the docstring, it's still explicit enough (as far as I'm concerned, YMMV), and also avoids a lot of clutter (you can have a look at django.forms.Form for a concrete example of what I mean here).
So as always with "best practices" and other golden rules, you have to understand and weigh the pros and cons with respect to the concrete case at hand.
PS: just to make things clear, django's Form class signature makes perfect sense so I'm not ranting here - it's just one of those cases where there's no "beautiful" solution to the problem, period.
Aside from obvious differences in code clarity, there might be a small difference in the speed of calling the function, in this case the __init__() method.
If you can, define all necessary arguments with default values in both methods, pass them explicitly, and exclude the ones you do not wish to forward.
This way the code stays clear and the call speed stays the same.
If you need some micro-optimization, then use timeit to check what works faster.
I expect that the version with x as an explicit argument will be the winner, because getting its value directly from the local variables is faster and the kwargs dict() is smaller.
When you use "normal" arguments, they are automatically inserted into the function's local variables.
When you use *args and/or **kwargs, an additional tuple() and/or dict() is created as a new local variable, built from the arguments you passed into the call. When you pass them on to the next function, they are unpacked again to match that function's signature. In both operations you lose a tiny bit of speed.
If you additionally remove something from the kwargs dictionary (x = kwargs.pop("x")), you lose some more.
Looking at both versions, their call speed should be roughly equal, but you should check. If you can spare an extra 0.000001 seconds when initializing your instances, both options are fine; just choose what you like most.
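If you do want to measure it, a minimal timeit sketch could look like this (Parent, ChildPop and ChildSig are hypothetical stand-ins for the two constructor styles):

import timeit

class Parent:
    def __init__(self, **kwargs):
        pass

class ChildPop(Parent):              # pops x out of kwargs
    def __init__(self, **kwargs):
        self.x = kwargs.pop('x', 'default')
        super().__init__(**kwargs)

class ChildSig(Parent):              # declares x in the signature
    def __init__(self, x='default', **kwargs):
        self.x = x
        super().__init__(**kwargs)

# time a million constructions of each
print(timeit.timeit(lambda: ChildPop(x=1)))
print(timeit.timeit(lambda: ChildSig(x=1)))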
But again, if you are free to do it, and if it will not greatly impair the code's maintenance, define all arguments and their default values and pass them on one-by-one.
While implementing a neural network with TensorFlow, I came across a method whose parameters caught my attention. I'm talking about tf.nn.sigmoid_cross_entropy_with_logits (documentation here).
The first parameter it receives is _sentinel=None, which, according to the documentation:
_sentinel: Used to prevent positional parameters. Internal, do not use.
I understand that having this parameter forces the following ones to be named instead of positional, as long as this one is not used. But my question is: in which cases does preventing positional parameters have a benefit? What is the main goal of using it? Because I could also run
tf.nn.sigmoid_cross_entropy_with_logits(None, my_labels, my_logits)
passing all arguments positionally. Anyway, I want to clarify that my question is not focused on TensorFlow; it's just the example that I have found.
Positional parameters couple the caller and the receiver on the order of the parameters. This makes refactoring the order of the receiver's parameters more difficult.
For example, if I have
def foo(a, b, c):
    do_stuff(a, b, c)
and I decide, for reasons, perhaps I want to make a partial function or whatever, that it would be better to have
def foo(b, a, c):
    do_stuff(a, b, c)
But now I have callers in the wild and it would be very rude to change my contract, so I'm stuck.
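For contrast, a caller that had used keyword arguments from the start would survive the reorder unchanged:

foo(a=1, b=2, c=3)  # same result with def foo(a, b, c) and def foo(b, a, c)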
Sandi Metz in Practical Object-Oriented Design in Ruby also addresses this. (I know this is python, but oop is oop)
When the code [is changed to use keyword arguments], it lost its dependency on argument order but it gained a dependency on the names of the keys in the [keyword arguments]. This change is healthy. The new dependency is more stable than the old, and thus this code faces less risk of being forced to change. Additionally, and perhaps unexpectedly, the [keywords] provides one new, secondary benefit: The key names in the hash furnish explicit documentation about the arguments. This is a byproduct of using a hash but the fact that it is unintentional makes it no less useful. Future maintainers of this code will be grateful for the information.
Keyword arguments are also nice if you have a lot of parameters. Order is easy to get wrong. It may also make a nicer API in the opinion of the authors.
PEP-3102 also addresses this, but I find the rationale unsatisfying from the perspective of "why would I choose to design something like this"
The current Python function-calling paradigm allows arguments to be specified either by position or by keyword. An argument can be filled in either explicitly by name, or implicitly by position.

There are often cases where it is desirable for a function to take a variable number of arguments. The Python language supports this using the 'varargs' syntax (*name), which specifies that any 'left over' arguments be passed into the varargs parameter as a tuple.

One limitation on this is that currently, all of the regular argument slots must be filled before the vararg slot can be.

This is not always desirable. One can easily envision a function which takes a variable number of arguments, but also takes one or more 'options' in the form of keyword arguments. Currently, the only way to do this is to define both a varargs argument, and a 'keywords' argument (**kwargs), and then manually extract the desired keywords from the dictionary.
What is the use of keyword-only parameters?

- For some functions, it is impossible to do otherwise (e.g. print(a, b, end='')).
- It prevents you from making silly mistakes. Consider the following example:
# if it wasn't made with kw-only parameters, this would return 3
>>> sorted(3, 1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sorted expected 1 arguments, got 2
>>> sorted((1,2), reverse=True)
[2, 1]
- It allows you to change things later:

# if
def sorted(iterable, reverse=False): ...
# becomes
def sorted(iterable, key=None, reverse=False): ...
# you can guarantee backwards compatibility
First, a caveat that I can't know the intention of the person who wrote that. However, I can offer a reason why "prevent positional parameters" might be desirable.
It's often important that a parameter be keyword-only, that is, it must be used only by name. The parameter is not conceptually an input to the function's purpose; it's more a modifier (change the behaviour in this way), or an external resource (here is the log file to emit your messages to), etc.
For that reason, Python 3 now allows you to define, in the signature of the function, specific parameters as keyword-only parameters. The change is documented in PEP 3102, Keyword-only arguments, along with the rationale.
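As a small sketch of that syntax: everything after a bare * in the signature must be passed by name (the function and parameter names here are illustrative):

def emit(message, *, logfile=None):
    # logfile modifies behaviour; it is not a positional input
    if logfile is not None:
        logfile.write(message + '\n')
    else:
        print(message)

emit('hello')                 # fine
emit('hello', logfile=None)   # fine, passed by name
emit('hello', None)           # TypeError: takes 1 positional argument but 2 were given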
I have started to learn python, and I would like to ask you about something which I considered a little magic in this language.
I would like to note that before learning python I worked with PHP and there I haven't noticed that.
Here's what I mean: I have noticed that some constructor and method calls in Python take this form:
object.call(variable1 = value1, variable2 = value2)
For example, in Flask:
app.run(debug=True, threaded=True)
Is there any reason for this convention? Or is there some semantic reason arising from the language fundamentals? I haven't seen anything like this in PHP as often as in Python, which is why I'm really surprised. I'm really curious whether there is some magic here or whether it's only a convention to make code easier to read.
These are called keyword arguments, and they're usually used to make the call more readable.
They can also be used to pass the arguments in a different order from the declared parameters, or to skip over some default parameters but pass arguments to others, or because the function requires keyword arguments… but readability is the core reason for their existence.
Consider this:
app.run(True, False)
Do you have any idea what those two arguments mean? Even if you can guess that the only two reasonable arguments are threading and debugging flags, how can you guess which one comes first? The only way you can do it is to figure out what type app is, and check the app.run method's docstring or definition.
But here:
app.run(debug=True, threaded=False)
It's obvious what it means.
It's worth reading the FAQ What is the difference between arguments and parameters?, and the other tutorial sections near the one linked above. Then you can read the reference on Function definitions for full details on parameters and Calls for full details on arguments, and finally the inspect module documentation on kinds of parameters.
This blog post attempts to summarize everything in those references so you don't have to read your way through the whole mess. The examples at the end should also serve to show why mixing up arguments and parameters in general, keyword arguments and default parameters, argument unpacking and variable parameters, etc. will lead you astray.
Specifying arguments by keyword often creates less risk of error than specifying arguments solely by position. Consider this function to compute loan payments:
def pmt(principal, interest, term):
    return something  # payment calculation omitted
When one tries to compute the amortization of their house purchase, it might be invoked thus:
payment = pmt(100000, 4.2, 360)
But it is difficult to see which of those values should be associated with which parameter. Without checking the documentation, we might think it should have been:
payment = pmt(360, 4.2, 100000)
Using keyword parameters, the call becomes self-documenting:
payment = pmt(principal=100000, interest=4.2, term=360)
Additionally, keyword parameters allow you to change the order of the parameters at the call site, and everything still works correctly:
# Equivalent to previous example
payment = pmt(term=360, interest=4.2, principal=100000)
See http://docs.python.org/2/tutorial/controlflow.html#keyword-arguments for more information.
They are arguments passed by keyword. There is no semantic difference between keyword arguments and positional arguments.
They are often used like "options", and provide a much more readable syntax for this circumstance. Think of this:
>>> sorted([2,-1,3], key=lambda x: x**2, reverse=True)
[3, 2, -1]
Versus (Python 2):
>>> sorted([2,-1,3], None, lambda x: x**2, True)
[3, 2, -1]
In this second example, can you tell what None or True mean?
Note that keyword-only arguments, i.e. arguments that you can only specify using this syntax, were introduced in Python 3. In Python 2, any argument can be specified by position (except when using **kwargs, but that's another issue).
There is no "magic".
A function can take:

- positional arguments (args)
- keyword arguments (kwargs)

Always in this order.
Try this:
def foo(*args, **kwargs):
    print args
    print kwargs

foo(1, 2, 3, 4, a=8, b=12)
Output:
(1, 2, 3, 4)
{'a': 8, 'b': 12}
Python stores the positional arguments in a tuple (which is immutable) and the keyword ones in a dictionary.
The main utility of the convention is that it allows for setting certain inputs when there may be some defaults in between. It's particularly useful when a function has many parameters, most of which work fine with their defaults, but a few need to be set to other values for the function to work as desired.
example:
def foo(i1, i2=1, i3=3, i4=5):
    pass  # does something
foo(1, 2, 3, 4)
foo(1, 2, i4=3)
foo(1, i2=3)
foo(0, i3=1, i2=3, i4=5)
I'm working on this project which deals with vectors in python. But I'm new to python and don't really know how to crack it. Here's the instruction:
"Add a constructor to the Vector class. The constructor should take a single argument. If this argument is either an int or a long or an instance of a class derived from one of these, then consider this argument to be the length of the Vector instance. In this case, construct a Vector of the specified length with each element is initialized to 0.0. If the length is negative, raise a ValueError with an appropriate message. If the argument is not considered to be the length, then if the argument is a sequence (such as a list), then initialize with vector with the length and values of the given sequence. If the argument is not used as the length of the vector and if it is not a sequence, then raise a TypeError with an appropriate message.
Next implement the __repr__ method to return a string of python code which could be used to initialize the Vector. This string of code should consist of the name of the class followed by an open parenthesis followed by the contents of the vector represented as a list followed by a close parenthesis."
I'm not sure how to do the class type checking, as well as how to initialize the vector based on the given object. Could someone please help me with this? Thanks!
Your instructor seems not to "speak Python as a native language". ;) The entire concept for the class is pretty silly; real Python programmers just use the built-in sequence types directly. But then, this sort of thing is normal for academic exercises, sadly...
Add a constructor to the Vector class.
In Python, the common "this is how you create a new object and say what it's an instance of" stuff is handled internally by default, and then the baby object is passed to the class' initialization method to make it into a "proper" instance, by setting the attributes that new instances of the class should have. We call that method __init__.
The constructor should take a single argument. If this argument is either an int or a long or an instance of a class derived from one of these
This is tested by using the builtin function isinstance. You can look it up for yourself in the documentation (or try help(isinstance) at the REPL).
In this case, construct a Vector of the specified length with each element initialized to 0.0.
In our __init__, we generally just assign the starting values for attributes. The first parameter to __init__ is the new object we're initializing, which we usually call "self" so that people understand what we're doing. The rest of the arguments are whatever was passed when the caller requested an instance. In our case, we're always expecting exactly one argument. It might have different types and different meanings, so we should give it a generic name.
When we detect that the generic argument is an integer type with isinstance, we "construct" the vector by setting the appropriate data. We just assign to some attribute of self (call it whatever makes sense), and the value will be... well, what are you going to use to represent the vector's data internally? Hopefully you've already thought about this :)
If the length is negative, raise a ValueError with an appropriate message.
Oh, good point... we should check that before we try to construct our storage. Some of the obvious ways to do it would basically treat a negative number the same as zero. Other ways might raise an exception that we don't get to control.
If the argument is not considered to be the length, then if the argument is a sequence (such as a list), then initialize the vector with the length and values of the given sequence.
"Sequence" is a much fuzzier concept; lists and tuples and what-not don't have a "sequence" base class, so we can't easily check this with isinstance. (After all, someone could easily invent a new kind of sequence that we didn't think of). The easiest way to check if something is a sequence is to try to create an iterator for it, with the built-in iter function. This will already raise a fairly meaningful TypeError if the thing isn't iterable (try it!), so that makes the error handling easy - we just let it do its thing.
Assuming we got an iterator, we can easily create our storage: most sequence types (and I assume you have one of them in mind already, and that one is certainly included) will accept an iterator for their __init__ method and do the obvious thing of copying the sequence data.
Next implement the __repr__ method to return a string of python code which could be used to initialize the Vector. This string of code should consist of the name of the class followed by an open parenthesis followed by the contents of the vector represented as a list followed by a close parenthesis."
Hopefully this is self-explanatory. Hint: you should be able to simplify this by making use of the storage attribute's own __repr__. Also consider using string formatting to put the string together.
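Putting the pieces above together, a minimal sketch might look like this (assuming a plain list for storage; on Python 3 there is no separate long type, so checking int suffices):

class Vector:
    def __init__(self, arg):
        if isinstance(arg, int):      # the argument is the length
            if arg < 0:
                raise ValueError('Vector length cannot be negative')
            self._data = [0.0] * arg
        else:
            # iter() raises a meaningful TypeError if arg isn't iterable
            self._data = list(iter(arg))

    def __repr__(self):
        # reuse the storage list's own repr for the contents
        return 'Vector({!r})'.format(self._data)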
Everything you need to get started is here:
http://docs.python.org/library/functions.html
There are many examples of how to check types in Python on StackOverflow (see my comment for the top-rated one).
To initialize a class, use the __init__ method:
class Vector(object):
    def __init__(self, sequence):
        self._internal_list = list(sequence)
Now you can call:
my_vector = Vector([1, 2, 3])
And inside other functions in Vector, you can refer to self._internal_list. I put _ before the variable name to indicate that it shouldn't be changed from outside the class.
The documentation for the list function may be useful for you.
You can do the type checking with isinstance.
The initialization of a class with done with an __init__ method.
Good luck with your assignment :-)
This may or may not be appropriate depending on the homework, but in Python programming it's not very usual to explicitly check the type of an argument and change the behaviour based on that. It's more normal to just try to use the features you expect it to have (possibly catching exceptions if necessary to fall back to other options).
In this particular example, a normal Python programmer implementing a Vector that needed to work this way would try using the argument as if it were an integer/long (hint: what happens if you multiply a list by an integer?) to initialize the Vector and if that throws an exception try using it as if it were a sequence, and if that failed as well then you can throw a TypeError.
The reason for doing this is that it leaves your class open to working with other objects types people come up with later that aren't integers or sequences but work like them. In particular it's very difficult to comprehensively check whether something is a "sequence", because user-defined classes that can be used as sequences don't have to be instances of any common type you can check. The Vector class itself is quite a good candidate for using to initialize a Vector, for example!
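To make that concrete, a duck-typed version of the constructor could look like the sketch below (not necessarily what your teacher expects; note that, unlike the assignment's spec, it silently treats a negative length as zero):

class Vector:
    def __init__(self, arg):
        try:
            self._data = [0.0] * arg      # works if arg behaves like an integer
        except TypeError:
            try:
                self._data = list(arg)    # fall back: treat it as a sequence
            except TypeError:
                raise TypeError('expected an integer length or a sequence')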
But I'm not sure if this is the answer your teacher is expecting. If you haven't learned about exception handling yet, then you're almost certainly not meant to use this approach so please ignore my post. Good luck with your learning!
I have some functions in my code that accept either an object or an iterable of objects as input. I was taught to use meaningful names for everything, but I am not sure how to comply here. What should I call a parameter that can be a single object or an iterable of objects? I have come up with two ideas, but I don't like either of them:
- FooOrManyFoos - This expresses what goes on, but I could imagine that someone not used to it could have trouble understanding what it means right away
- param - Some generic name. This makes clear that it can be several things, but explains nothing about what the parameter is used for.
Normally I call iterables of objects just the plural of what I would call a single object. I know this might seem a little bit compulsive, but Python is supposed to be (among others) about readability.
I have some functions in my code that accept either an object or an iterable of objects as input.
This is a very exceptional and often very bad thing to do. It's trivially avoidable.
i.e., pass [foo] instead of foo when calling this function.
The only time you can justify doing this is when (1) you have an installed base of software that expects one form (iterable or singleton) and (2) you have to expand it to support the other use case. So. You only do this when expanding an existing function that has an existing code base.
If this is new development, Do Not Do This.
I have come up with two ideas, but I don't like either of them:
[Only two?]
FooOrManyFoos - This expresses what goes on, but I could imagine that someone not used to it could have trouble understanding what it means right away
What? Are you saying you provide NO other documentation, and no other training? No support? No advice? Who is the "someone not used to it"? Talk to them. Don't assume or imagine things about them.
Also, don't use Leading Upper Case Names.
param - Some generic name. This makes clear that it can be several things, but explains nothing about what the parameter is used for.
Terrible. Never. Do. This.
I looked in the Python library for examples. Most of the functions that do this have simple descriptions.
http://docs.python.org/library/functions.html#isinstance
isinstance(object, classinfo)
They call it "classinfo" and it can be a class or a tuple of classes.
You could do that, too.
You must consider the common use case and the exceptions. Follow the 80/20 rule.
80% of the time, you can replace this with an iterable and not have this problem.
In the remaining 20% of the cases, you have an installed base of software built around an assumption (either iterable or single item) and you need to add the other case. Don't change the name, just change the documentation. If it used to say "foo" it still says "foo" but you make it accept an iterable of "foo's" without making any change to the parameters. If it used to say "foo_list" or "foo_iter", then it still says "foo_list" or "foo_iter" but it will quietly tolerate a singleton without breaking.
80% of the code is the legacy ("foo" or "foo_list")
20% of the code is the new feature ("foo" can be an iterable or "foo_list" can be a single object.)
I guess I'm a little late to the party, but I'm surprised that nobody suggested a decorator.
def withmany(f):
    def many(many_foos):
        for foo in many_foos:
            yield f(foo)
    f.many = many
    return f

@withmany
def process_foo(foo):
    return foo + 1

processed_foo = process_foo(foo)

for processed_foo in process_foo.many(foos):
    print processed_foo
I saw a similar pattern in one of Alex Martelli's posts but I don't remember the link off hand.
It sounds like you're agonizing over the ugliness of code like:
def ProcessWidget(widget_thing):
    # Infer if we have a singleton instance and make it a
    # length 1 list for consistency
    if isinstance(widget_thing, WidgetType):
        widget_thing = [widget_thing]
    for widget in widget_thing:
        pass  # ...
My suggestion is to avoid overloading your interface to handle two distinct cases. I tend to write code that favors re-use and clear naming of methods over clever dynamic use of parameters:
def ProcessOneWidget(widget):
    pass  # ...

def ProcessManyWidgets(widgets):
    for widget in widgets:
        ProcessOneWidget(widget)
Often, I start with this simple pattern, but then have the opportunity to optimize the "Many" case when there are efficiencies to gain that offset the additional code complexity and partial duplication of functionality. If this convention seems overly verbose, one can opt for names like "ProcessWidget" and "ProcessWidgets", though the difference between the two is a single easily missed character.
You can use *args magic (varargs) to make your params always be iterable.
Pass a single item or multiple known items as normal function arguments, like func(arg1, arg2, ...), and pass an iterable of arguments with an asterisk in front, like func(*args).
Example:
# magic *args function
def foo(*args):
    print args

# many ways to call it
foo(1)
foo(1, 2, 3)

args1 = (1, 2, 3)
args2 = [1, 2, 3]
args3 = iter((1, 2, 3))

foo(*args1)
foo(*args2)
foo(*args3)
Can you name your parameter in a very high-level way? People who read the code are more interested in knowing what the parameter represents ("clients") than in what its type is ("list_of_tuples"); the type can be described in the function's documentation string, which is a good thing since it might change in the future (the type is sometimes an implementation detail).
I would do one thing:
def myFunc(manyFoos):
    if type(manyFoos) not in (list, tuple):
        manyFoos = [manyFoos]
    # do stuff here
so then you don't need to worry anymore about its name.
Within a function you should aim to perform one action, accept the same parameter type, and return the same type.
Instead of filling the functions with ifs you could have 2 functions.
Since you don't care exactly what kind of iterable you get, you could try to get an iterator for the parameter using iter(). If iter() raises a TypeError exception, the parameter is not iterable, so you then create a list or tuple of the one item, which is iterable and Bob's your uncle.
def doIt(foos):
    try:
        iter(foos)
    except TypeError:
        foos = [foos]
    for foo in foos:
        pass  # do something here
The only problem with this approach is if foo is a string. A string is iterable, so passing in a single string rather than a list of strings will result in iterating over the characters in a string. If this is a concern, you could add an if test for it. At this point it's getting wordy for boilerplate code, so I'd break it out into its own function.
def iterfy(iterable):
    if isinstance(iterable, basestring):
        iterable = [iterable]
    try:
        iter(iterable)
    except TypeError:
        iterable = [iterable]
    return iterable

def doIt(foos):
    for foo in iterfy(foos):
        pass  # do something
Unlike some of those answering, I like doing this, since it eliminates one thing the caller could get wrong when using your API. "Be conservative in what you generate but liberal in what you accept."
To answer your original question, i.e. what you should name the parameter, I would still go with "foos" even though you will accept a single item, since your intent is to accept a list. If it's not iterable, that is technically a mistake, albeit one you will correct for the caller since processing just the one item is probably what they want. Also, if the caller thinks they must pass in an iterable even of one item, well, that will of course work fine and requires very little syntax, so why worry about correcting their misapprehension?
I would go with a name explaining that the parameter can be an instance or a list of instances. Say one_or_more_Foo_objects. I find it better than the bland param.
I'm working on a fairly big project now and we're passing maps around and just calling our parameter map. The map contents vary depending on the function that's being called. This probably isn't the best situation, but we reuse a lot of the same code on the maps, so copying and pasting is easier.
I would say that instead of naming it what it is, you should name it after what it's used for. Also, just be careful that you can't iterate over something that is not iterable.