Why does tf.variable_scope have a default_name argument?

The first two arguments of tf.variable_scope's __init__ method are:
name_or_scope: string or VariableScope: the scope to open.
default_name: the default name to use if the name_or_scope argument is None; this name will be uniquified. If name_or_scope is provided it won't be used, and therefore it is not required and can be None.
If I understand correctly, this argument is equivalent to (and therefore could be easily replaced with)
if name_or_scope is None:
    name_or_scope = default_name
with tf.variable_scope(name_or_scope, ...):
    ...
Now, I am not sure I understand why it was deemed necessary to have this special treatment for the scope name — after all, many parameters could use a parameterizable default argument.
So what is the rationale behind the introduction of this argument?

You are right. It is just a convenience.
Take the case of the TensorFlow models defined here. If you look specifically at InceptionV4.py, you will see that it has a scope argument in its definition, and just below you will see that 'InceptionV4' is passed as the default scope. Strictly speaking, it was therefore not even required to have a scope argument in the definition at all, but it makes sense for the case where somebody passes scope=None.
Think about it: model definitions can get very complex very quickly. A default_name argument therefore lets the model definition's author build some deliberate structure into the variable names, even if the end user is completely naive about scoping.
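To illustrate, here is a minimal sketch of the pattern, assuming the TF 1.x API (the layer name my_layer and the variable w are made up for the example):

import tensorflow as tf  # assumes TF 1.x, where tf.variable_scope exists

def my_layer(inputs, scope=None):
    # If the caller passes scope=None, the default name "my_layer" is used
    # and uniquified on repeated calls: "my_layer", "my_layer_1", ...
    with tf.variable_scope(scope, default_name="my_layer"):
        w = tf.get_variable("w", shape=[])
        return inputs * w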

Reassignment of 'self' in a method: Python 3.8

In a method of a class:
def weight(self, grid):
    ...
    if self.is_vertical:
        self = self.T
    ...
I'd like to reassign self to its transposed value if the condition is true, and then, depending on the if-statement, use self later in the method in either its original or transposed state.
As I understand, in a method, self is just a parameter name, but not the real reference or a pointer to an instance of a class, as in C++ or similar languages, so I can freely reassign it for use inside the scope of a method.
My question is why PyCharm's Inspection Info warns me that
...first parameter, such as 'self' or 'cls', is reassigned in a method. In most cases imaginable, there's no point in such reassignment, and it indicates an error.
while it works fine? Why does it indicate an error?
Why does it indicate an error?
Because there's no good reason to do it. Nothing uses the self variable automatically, so there's no need to reassign it. If you need a variable that sometimes refers to the object that the method was called on but could also hold some other value, use a different variable for this.
def mymethod(self):
    cur = self
    ...
    cur = cur.T
    ...
Note that self is just a local variable within this method. Reassigning self doesn't have any effect on the object itself, or on the variable that the method was called on. It's practically useless to do this, so it almost always indicates that the programmer was confused. That's why PyCharm warns about it.
Since everyone expects self to refer to the object that the method was called on, reassigning it will also be confusing to other programmers. When working on code later in the method, they may not realize that self might no longer refer to that object. Imagine trying to have a conversation with someone who says "From now on, whenever I say 'me' or 'I', I actually mean that guy over there."
This is just the flip side of why we have the self and cls naming convention in the first place. As far as Python is concerned, you can use any name for the first parameter of a method. But we recommend everyone use these names so that when we read each other's code, we won't have to remember which variable refers to the current object in each method.
Python itself doesn't care; it won't raise an error there.
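A tiny demonstration of that point (the Grid class here is hypothetical):

class Grid:
    def weight(self):
        self = None  # rebinds only the local name inside this call
        return self

g = Grid()
g.weight()
print(g)  # still the original Grid instance; the object itself is unaffected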

**kwargs.pop(x) versus defining x as a function parameter

Let's say you're writing a child class whose constructor passes its unused kwargs up to the parent constructor, but your class has an argument x that it needs to store and that shouldn't be passed to the parent.
I have seen two different approaches to this:
def __init__(self, **kwargs):
    self.x = kwargs.pop('x', 'default')
    super().__init__(**kwargs)
and
def __init__(self, x='default', **kwargs):
    self.x = x
    super().__init__(**kwargs)
Is there ever any functional difference between these two constructors? Is there any reason to use one over the other?
The only difference I can see is that the second form, which defines x in the signature, allows the user to better see it as a possible argument, or an IDE to offer it as an autocomplete option. Or in Python 3.5+, you could add a type annotation to x. Does that make the first form objectively worse?
As already mentioned by Giacomo Alzetta in a comment, the second version allows you to pass x as a positional argument, while the first only allows named arguments. In other words, with the second form you can use both Child(x=2) AND Child(2), while the first only supports Child(x=2).
Also, when using introspection to check the method's signature, the second form will clearly mention the existence of the x parameter, while the first won't.
And finally, the second version will yield a slightly clearer exception if x is not passed.
So much for the functional differences.
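A quick sketch of those differences (the Parent, ChildPop, and ChildSig names are made up for the example):

import inspect

class Parent:
    def __init__(self, y=0):
        self.y = y

class ChildPop(Parent):  # the kwargs.pop form
    def __init__(self, **kwargs):
        self.x = kwargs.pop('x', 'default')
        super().__init__(**kwargs)

class ChildSig(Parent):  # the explicit-signature form
    def __init__(self, x='default', **kwargs):
        self.x = x
        super().__init__(**kwargs)

print(inspect.signature(ChildPop.__init__))  # (self, **kwargs) -- x is invisible
print(inspect.signature(ChildSig.__init__))  # (self, x='default', **kwargs)

ChildSig(2)    # positional x works
# ChildPop(2)  # would raise TypeError: __init__() takes 1 positional argument but 2 were given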
Is there any reason to use one over the other?
Well... as a general rule, it's cleaner (best practice) to use explicit arguments whenever possible, even if only for readability, and from experience it usually does make maintenance easier. So from this point of view, the second form can be seen as "objectively better" than the first.
This being said, when the parent method has dozens of mostly optional and rarely used arguments (django.forms.Form, I'm looking at you) AND you also want to preserve positional argument order, it can be convenient to just use the generic *args, **kwargs signature for the child and force the additional param(s) to be passed as kwargs. Assuming you clearly document this in the docstring, it's still explicit enough (as far as I'm concerned, YMMV), and it also avoids a lot of clutter (you can have a look at django.forms.Form for a concrete example of what I mean here).
So as always with "best practices" and other golden rules, you have to understand and weigh the pros and cons with respect to the concrete case at hand.
PS: just to make things clear, django's Form class signature makes perfect sense so I'm not ranting here - it's just one of those cases where there's no "beautiful" solution to the problem, period.
Aside from the obvious differences in code clarity, there might be a small difference in the speed of calling the function, in this case __init__().
If you can, define all necessary arguments explicitly (with default values where you have them), pass them on normally, and leave out the ones you do not want forwarded. That way the code stays clear and the call speed stays the same.
If you need some micro-optimization, use timeit to check what works faster.
I expect that the version with x as an explicit argument will be slightly faster, because its value is read directly from the local variables and the kwargs dict stays smaller.
When you use "normal" arguments, they are automatically inserted into the function's local variables.
When you use *args and/or **kwargs, an additional tuple and/or dict is created as a new local variable, built from the arguments you passed into the call. When you pass them on to the next function, they are unpacked again to match that function's signature. Both operations cost a tiny bit of speed.
If you additionally remove something from the kwargs dictionary (x = kwargs.pop("x")), you lose a little more.
Looking at the two versions, their call speeds seem like they should be roughly equal, but you should measure. If you do not need to save an extra 0.000001 seconds when initializing your instances, then both options are fine; just choose what you like most.
But again, if you are free to do it, and if it will not greatly impair the code's maintainability, define all arguments with their default values and pass them on one by one.
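If you do want to measure, here is a minimal timeit sketch, reusing the hypothetical ChildPop/ChildSig classes from the earlier example:

import timeit

setup = """
class Parent:
    def __init__(self, **kwargs):
        pass

class ChildPop(Parent):
    def __init__(self, **kwargs):
        self.x = kwargs.pop('x', 'default')
        super().__init__(**kwargs)

class ChildSig(Parent):
    def __init__(self, x='default', **kwargs):
        self.x = x
        super().__init__(**kwargs)
"""

# Compare the per-call cost of the two constructor styles.
print(timeit.timeit("ChildPop(x=1)", setup=setup))
print(timeit.timeit("ChildSig(x=1)", setup=setup))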

Why would an API author prevent positional parameters in Python?

While implementing a neural network with TensorFlow, I came across a method whose parameters caught my attention. I'm talking about tf.nn.sigmoid_cross_entropy_with_logits (documentation here).
Its first parameter is _sentinel=None, which, according to the documentation:
_sentinel: Used to prevent positional parameters. Internal, do not use.
I understand that this parameter forces the following ones to be passed by name rather than by position, since _sentinel itself is not supposed to be used. But my question is: in which cases does preventing positional parameters have some benefit? What is the main goal of using this? Because I could also run
tf.nn.sigmoid_cross_entropy_with_logits(None, my_labels, my_logits)
passing all arguments positionally. Anyway, I want to clarify that my question is not focused on TensorFlow; it's just the example where I found this.
Positional parameters couple the caller and the receiver on the order of the parameters. They make refactoring the order of the receiver's parameters more difficult.
For example, if I have
def foo(a, b, c):
    do_stuff(a, b, c)
and I decide, for whatever reason (perhaps I want to make a partial function), that it would be better to have
def foo(b, a, c):
    do_stuff(a, b, c)
But now I have callers in the wild and it would be very rude to change my contract, so I'm stuck.
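To make that coupling concrete, a small sketch (do_stuff here just stands in for any body):

def do_stuff(a, b, c):
    print(a, b, c)

def foo(a, b, c):
    do_stuff(a, b, c)

foo(1, 2, 3)        # positional: silently wrong if foo's parameter order changes
foo(a=1, b=2, c=3)  # keyword: survives a reorder to def foo(b, a, c)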
Sandi Metz in Practical Object-Oriented Design in Ruby also addresses this. (I know this is Python, but OOP is OOP.)
When the code [is changed to use keyword arguments], it lost its dependency on argument order but it gained a dependency on the names of the keys in the [keyword arguments]. This change is healthy. The new dependency is more stable than the old, and thus this code faces less risk of being forced to change. Additionally, and perhaps unexpectedly, the [keywords] provide one new, secondary benefit: the key names in the hash furnish explicit documentation about the arguments. This is a byproduct of using a hash, but the fact that it is unintentional makes it no less useful. Future maintainers of this code will be grateful for the information.
Keyword arguments are also nice if you have a lot of parameters. Order is easy to get wrong. It may also make a nicer API in the opinion of the authors.
PEP-3102 also addresses this, but I find the rationale unsatisfying from the perspective of "why would I choose to design something like this"
The current Python function-calling paradigm allows arguments to be specified either by position or by keyword. An argument can be filled in either explicitly by name, or implicitly by position.
There are often cases where it is desirable for a function to take a variable number of arguments. The Python language supports this using the 'varargs' syntax (*name), which specifies that any 'left over' arguments be passed into the varargs parameter as a tuple.
One limitation on this is that currently, all of the regular argument slots must be filled before the vararg slot can be.
This is not always desirable. One can easily envision a function which takes a variable number of arguments, but also takes one or more 'options' in the form of keyword arguments. Currently, the only way to do this is to define both a varargs argument, and a 'keywords' argument (**kwargs), and then manually extract the desired keywords from the dictionary.
What is the use of keyword-only parameters:
For some functions, it is impossible to do otherwise (e.g. print(a, b, end='')).
It prevents you from making silly mistakes; consider the following example:
# if it wasn't made with kw-only parameters, this would return 3
>>> sorted(3, 1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sorted expected 1 argument, got 2
>>> sorted((1,2), reverse=True)
[2, 1]
It allows you to change things later:
# if
def sorted(iterable, reverse=False)
# becomes
def sorted(iterable, key=None, reverse=False)
# you can guarantee backwards compatibility
First, a caveat that I can't know the intention of the person who wrote that. However, I can offer reason why “prevent positional parameters” might be desirable.
It's often important that a parameter be keyword-only, that is, it must be used only by name. The parameter is not conceptually an input to the function's purpose; it's more a modifier (change the behaviour in this way), or an external resource (here is the log file to emit your messages to), etc.
For that reason, Python 3 now allows you to define, in the signature of the function, specific parameters as keyword-only parameters. The change is documented in PEP 3102 Keyword-only arguments along with rationale.
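For reference, here is what that Python 3 syntax looks like; the cross_entropy function below is a made-up stand-in, not the TensorFlow API. Every parameter after a bare * must be passed by name:

def cross_entropy(*, labels=None, logits=None):
    # Everything after the bare * is keyword-only.
    if labels is None or logits is None:
        raise ValueError("both labels and logits must be passed by name")
    return list(zip(labels, logits))  # placeholder computation

print(cross_entropy(labels=[1], logits=[0.5]))  # OK
# cross_entropy([1], [0.5])  # TypeError: takes 0 positional arguments but 2 were given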

Why does a method accept the class name and the name 'object' as an argument?

Consider the following code. I expected it to raise an error, but it worked. mydef1(self) should only be invoked with an instance of MyClass1 as its argument, but it accepts MyClass1 itself, as well as the entirely unrelated object, as the instance.
Can someone explain why mydef1 accepts the class name (MyClass1) and object as arguments?
class MyClass1:
    def mydef1(self):
        return "Hello"

print(MyClass1.mydef1(MyClass1))
print(MyClass1.mydef1(object))
Output
Hello
Hello
There are several parts to the answer to your question because your question signals confusion about a few different aspects of Python.
First, type names are not special in Python. They're just another variable. You can even do something like object = 5 and cause all kinds of confusion.
Secondly, the self parameter is just that, a parameter. When you write MyClass1.mydef1, you're asking for the value of the name mydef1 inside MyClass1 (which could equally be a module, a class, or anything else that supports attribute lookup). What you get back is a plain function that takes one argument.
If you had done this:
aVar = MyClass1()
aVar.mydef1(object)
it would have failed. When Python gets a method from an instance of a class, the attribute lookup machinery (the descriptor protocol) has special magic to bind the first argument to the object the method was retrieved from. It then returns the bound method, which now takes one fewer argument.
I would recommend fiddling around in the interpreter: type in your MyClass1 definition, then MyClass1.mydef1, then aVar = MyClass1(); aVar.mydef1, and observe the difference in the results.
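Roughly what such a session would show (the exact addresses and message wording will differ by Python version):

>>> class MyClass1:
...     def mydef1(self):
...         return "Hello"
...
>>> MyClass1.mydef1            # plain function; self is not bound
<function MyClass1.mydef1 at 0x7f...>
>>> aVar = MyClass1()
>>> aVar.mydef1                # bound method; self is already filled in
<bound method MyClass1.mydef1 of <__main__.MyClass1 object at 0x7f...>>
>>> aVar.mydef1(object)        # a bound method plus an explicit argument: one too many
Traceback (most recent call last):
  ...
TypeError: mydef1() takes 1 positional argument but 2 were given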
If you come from a language like C++ or Java, this can all seem very confusing. But, it's actually a very regular and logical structure. Everything works the same way.
Also, as people have pointed out, names have no type associated with them. The type is associated with the object the name references. So any name can reference any kind of thing. This is also referred to as 'dynamic typing'. Python is dynamically typed in another way as well. You can actually mess around with the internal structure of something and change the type of an object as well. This is fairly deep magic, and I wouldn't suggest doing it until you know what you're doing. And even then you shouldn't do it as it will just confuse everybody else.
Python is dynamically typed, so it doesn't care what gets passed. It only cares that the single required parameter gets an argument as a value. Once inside the function, you never use self, so it doesn't matter what the argument was; you can't misuse what you don't use in the first place.
This question only arises because you are taking the uncommon action of running an instance method as an unbound method with an explicit argument, rather than invoking it on an instance of the class and letting the Python runtime system take care of passing that instance as the first argument to mydef1: MyClass1().mydef1() == MyClass1.mydef1(MyClass1()).
Python is not a statically-typed language, so you can pass objects of any type to any function, as long as you pass the right number of arguments; the self parameter of a method is no different from the parameters of any other function.
There is no problem with that whatsoever - self is an object like any other and may be used in any context where object of its type/behavior would be welcome.

What is the point of _=None in a Python method signature?

What is the purpose of _=None in a method signature?
Example:
def method(_=None):
    pass
_ is a conventional name for an unused placeholder variable in Python (and shell, and some other languages).
When not in the context of a class (that is, when defining a function rather than a method), this would define a function with a single optional (keyword) argument (defaulting to None), with a name indicating that that argument is deliberately unused (and exempting it from warnings about unused variables in some static-checking tools).
When defining a method where the first and only argument accepted is defined in this way, this becomes effectively a poor man's static method. That is to say, it's indicating that self is unused and optional, and allowing the method to be called independently of whether any object instance is associated (that is, whether a self is actually available at runtime).
Note that this is not a common idiom, and using the @staticmethod decorator is going to make much more sense to readers of your code.
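A short sketch of the contrast (the Widget class is hypothetical):

class Widget:
    def ping(_=None):   # "poor man's static method": _ is unused and optional
        return "pong"

    @staticmethod
    def ping2():        # the idiomatic way to say the same thing
        return "pong"

w = Widget()
print(w.ping())       # works: the instance is bound to _ and ignored
print(Widget.ping())  # also works: _ falls back to its default of None
print(w.ping2())      # staticmethod: no instance is ever passed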
