I would like to get some tips on the Pythonic way to validate arguments when creating an instance of a class. I have a hard time understanding the proper usage of the __new__ method, and maybe this is one of its use cases? Say, for example, that I have a class Test that takes two arguments, a and b. If I want to ensure that both are integers and that b is greater than a, I could do the following:
class Test:
    def __init__(self, a, b):
        if not (isinstance(a, int) and isinstance(b, int)):
            raise Exception("bla bla error 1")
        if not b > a:
            raise Exception("bla bla error 2")
        self.a = a
        self.b = b
        #.....
Or, I could do the following:
def validate_test_input(a, b):
    if not (isinstance(a, int) and isinstance(b, int)):
        raise Exception("bla bla error 1")
    if not b > a:
        raise Exception("bla bla error 2")

class Test:
    def __init__(self, a, b):
        validate_test_input(a, b)
        self.a = a
        self.b = b
        #.....
What would you do? Is there any convention on data validation? Should the dunder __new__ method be used here? If so, please show an example.
The first snippet is almost perfectly fine. Unless this logic is reused in several places in your code base, I would avoid the second snippet, because it decouples the validation logic from the class.
I would just make some small semantic changes:
Raise proper exception types, i.e. TypeError and ValueError.
Rephrase the conditions to be more readable (you may disagree, as this is quite subjective).
Of course, provide useful text instead of "bla bla", but I trust you with that one ;)
class Test:
    def __init__(self, a, b):
        if not isinstance(a, int) or not isinstance(b, int):
            raise TypeError("bla bla error 1")
        if b <= a:
            raise ValueError("bla bla error 2")
        self.a = a
        self.b = b
        #.....
Some may find the original if not (isinstance(a, int) and isinstance(b, int)) more readable than what I suggested, and I will not disagree. The same goes for if b <= a:. It depends on whether you prefer to stress the condition you want to be true or the condition you want to be false.
In this case, since we are raising an exception, I prefer to stress the condition we want to be false.
If this code is under development, I would maybe do the following, which is not very different from your code:
class Test:
    def __init__(self, a, b):
        assert isinstance(a, int) and isinstance(b, int), "bla bla error 1"
        assert b > a, "bla bla error 2"
        self.a = a
        self.b = b
        #.....
And if I need these checks to remain when I release the code (for example, if it is a library), I would convert the asserts to raises, raising a TypeError and a ValueError:
class Test:
    def __init__(self, a, b):
        if not (isinstance(a, int) and isinstance(b, int)):
            raise TypeError("bla bla error 1")
        if not b > a:
            raise ValueError("bla bla error 2")
        self.a = a
        self.b = b
        #.....
So your code is the right way to go.
As for the __new__ magic method, today I found a good example in the built-in turtle library, in the definition of the Vec2D class:
class Vec2D(tuple):
    """A 2 dimensional vector class, used as a helper class
    for implementing turtle graphics.
    May be useful for turtle graphics programs also.
    Derived from tuple, so a vector is a tuple!

    Provides (for a, b vectors, k number):
       a+b vector addition
       a-b vector subtraction
       a*b inner product
       k*a and a*k multiplication with scalar
       |a| absolute value of a
       a.rotate(angle) rotation
    """
    def __new__(cls, x, y):
        return tuple.__new__(cls, (x, y))
    ...
As you know, tuple takes a single iterable argument. The developers of this module probably wanted to change that, so they defined __new__ with the signature (cls, x, y) and then called tuple.__new__ as (cls, (x, y)). The cls here is the class being instantiated. For more information, look here: Calling __new__ when making a subclass of tuple
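Since the question explicitly asks about __new__: you could perform the checks there, because __new__ runs before __init__, but for plain attribute validation __init__ remains the conventional place. A minimal sketch, only to show the mechanics:

class Test:
    def __new__(cls, a, b):
        # validation in __new__ runs before the instance is initialized
        if not (isinstance(a, int) and isinstance(b, int)):
            raise TypeError("a and b must be integers")
        if b <= a:
            raise ValueError("b must be greater than a")
        return super().__new__(cls)

    def __init__(self, a, b):
        self.a = a
        self.b = b

This buys you nothing over the __init__ version for a regular class; __new__ really earns its keep when instance creation itself must be customized, as in the Vec2D example above.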
The way you do it in the first code snippet is okay, but as Python is quite versatile in what you can pass to a function or a class, there is much more to check if you go that way than mere argument types.
Duck typing makes checking argument types less reliable: a provided object could comply with what a function or a constructor needs without deriving from some known class.
You could also want to check argument names or such things.
The most common style is rather not to test inputs and consider the caller safe (in internal code), and to use some dedicated module like zope.interface for interactions with the external world.
To make things lighter from a syntactic point of view, interface checking is also typically done using decorators.
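As an illustration of that last point, here is one possible shape for such a decorator; expects is a made-up helper for this sketch, not a standard-library or zope.interface API:

from functools import wraps

def expects(*types):
    # hypothetical decorator: checks positional argument types
    def decorator(func):
        @wraps(func)
        def wrapper(self, *args):
            for arg, expected in zip(args, types):
                if not isinstance(arg, expected):
                    raise TypeError("expected %s, got %s"
                                    % (expected.__name__, type(arg).__name__))
            return func(self, *args)
        return wrapper
    return decorator

class Test:
    @expects(int, int)
    def __init__(self, a, b):
        self.a = a
        self.b = b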
PS: the __new__ method is about object allocation and construction. It is definitely unrelated to interface checks.
Related
I am trying to modify an already defined class by changing an attribute's value. Importantly, I want this change to propagate internally.
For example, consider this class:
class Base:
    x = 1
    y = 2 * x
    # Other attributes and methods might follow

assert Base.x == 1
assert Base.y == 2
I would like to change x to 2, making it equivalent to this:
class Base:
    x = 2
    y = 2 * x

assert Base.x == 2
assert Base.y == 4
But I would like to make it in the following way:
Base = injector(Base, x=2)
Is there a way to achieve this WITHOUT recompiling the original class source code?
The effect you want to achieve belongs to the realm of "reactive programming" - a programming paradigm (from where the now ubiquitous JavaScript library got its name as an inspiration).
While Python has a lot of mechanisms to allow that, one needs to write their code to actually make use of these mechanisms.
By default, plain Python code like the one in your example uses the imperative paradigm, which is eager: whenever an expression is encountered, it is executed, and the result of that expression is used (in this case, the result is stored in the class attribute).
Python's flexibility also means that once you write a codebase that allows some reactive code to take place, users of that codebase don't have to be aware of it, and things work more or less "magically".
But, as stated above, that is not free. For the case of being able to redefine y when x changes in
class Base:
    x = 1
    y = 2 * x
there are a couple of paths that can be followed. The most important point is that, at the time the "*" operator is executed (which happens when Python is parsing the class body), at least one side of the operation must no longer be a plain number, but a special object implementing a custom __mul__ (or __rmul__) method. Then, instead of storing a resulting number in y, the expression itself is stored somewhere, and when y is retrieved as a class attribute, other mechanisms force the expression to resolve.
If you want this at instance level, rather than at class level, it would be easier to implement. But keep in mind that you'd have to define each operator on your special "source" class for primitive values.
Also, both this and the easier instance-descriptor approach using property are "lazily evaluated": that is, the value for y is calculated when it is to be used (it can be cached if it will be used more than once). If you want to evaluate it whenever x is assigned (and not when y is consumed), that will require other mechanisms, although caching the lazy approach can reduce the need for eager evaluation to the point where it should not be needed.
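For instance, at instance level, a hand-rolled cache that is invalidated whenever x is assigned could look like this (a sketch, assuming the simple y = 2 * x dependency from the question):

class Base:
    def __init__(self, x=1):
        self._x = x
        self._y_cache = None

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value
        self._y_cache = None  # invalidate the cached y whenever x changes

    @property
    def y(self):
        if self._y_cache is None:
            self._y_cache = 2 * self._x  # computed lazily, then cached
        return self._y_cache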
1 - Before digging there
Python's easiest way to do code like this is simply to write the expressions to be calculated as functions, and use the property built-in as a descriptor to retrieve their values. The drawback is small: you just have to wrap your expressions in a function (and then wrap that function in something that adds the descriptor properties to it, such as property). The gain is huge: you are free to use any Python code inside your expression, including function calls, object instantiation, I/O, and the like. (Note that the other approach requires wiring up each desired operator just to get started.)
The plain "101" approach to have what you want working for instances of Base is:
class Base:
    x = 1

    @property
    def y(self):
        return self.x * 2

b = Base()
b.y
-> 2
Base.x = 3
b.y
-> 6
The work of property can be rewritten so that retrieving y from the class, instead of from an instance, achieves the same effect (this is still easier than the other approach).
If this will work for you, I'd recommend doing it. If you need to cache y's value until x actually changes, that can be done with normal coding.
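A sketch of that rewrite, using a hypothetical ClassProperty descriptor whose __get__ ignores the instance and works on the owner class:

class ClassProperty:
    # like property, but computes from the class, so lookup on the
    # class itself (not just on instances) triggers the getter
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, instance, owner):
        return self.fget(owner)

class Base:
    x = 1

    @ClassProperty
    def y(cls):
        return 2 * cls.x

assert Base.y == 2
Base.x = 3
assert Base.y == 6

Note this is a non-data descriptor, so assigning Base.y = something would silently replace it.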
2 - Exactly what you asked for, with a metaclass
As stated above, Python would need to know about the special status of your y attribute when calculating its expression 2 * x. At assignment time, it would already be too late.
Fortunately, Python 3 allows class bodies to run in a custom namespace for the attribute assignment, by implementing the __prepare__ method in a metaclass, recording everything that takes place, and replacing primitive attributes of interest with specially crafted objects implementing __mul__ and other special methods.
Going this way could even allow values to be eagerly calculated, so that they work as plain Python objects, while registering enough information that a special injector function could recreate the class, redoing all the attributes that depend on expressions. It could also implement lazy evaluation, somewhat as described above.
from collections import UserDict
import operator

class Reactive:
    def __init__(self, value):
        self._initial_value = value
        self.values = {}

    def __set_name__(self, owner, name):
        self.name = name
        self.values[owner] = self._initial_value

    def __get__(self, instance, owner):
        return self.values[owner]

    def __set__(self, instance, value):
        raise AttributeError("value can't be set directly - call 'injector' to change this value")

    def value(self, cls=None):
        return self.values.get(cls, self._initial_value)

    op1 = value

    @property
    def result(self):
        return self.value

    # dynamically populate magic methods for operator overloading:
    for name in "mul add sub truediv pow contains".split():
        op = getattr(operator, name)
        locals()[f"__{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(self, other, operator)))(op)
        locals()[f"__r{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(other, self, operator)))(op)

class ReactiveExpr(Reactive):
    def __init__(self, value, op2, operator):
        self.op2 = op2
        self.operator = operator
        super().__init__(value)

    def result(self, cls):
        op1, op2 = self.op1(cls), self.op2
        if isinstance(op1, Reactive):
            op1 = op1.result(cls)
        if isinstance(op2, Reactive):
            op2 = op2.result(cls)
        return self.operator(op1, op2)

    def __get__(self, instance, owner):
        return self.result(owner)

class AuxDict(UserDict):
    def __init__(self, *args, _parent, **kwargs):
        self.parent = _parent
        super().__init__(*args, **kwargs)

    def __setitem__(self, item, value):
        if isinstance(value, self.parent.reacttypes) and not item.startswith("_"):
            value = Reactive(value)
        super().__setitem__(item, value)

class MetaReact(type):
    reacttypes = (int, float, str, bytes, list, tuple, dict)

    def __prepare__(*args, **kwargs):
        return AuxDict(_parent=__class__)

    def __new__(mcls, name, bases, ns, **kwargs):
        # build the real class from the namespace recorded by AuxDict
        return super().__new__(mcls, name, bases, ns.data, **kwargs)

def injector(cls, inplace=False, **kwargs):
    original = cls
    if not inplace:
        cls = type(cls.__name__, cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        if isinstance(attr, Reactive):
            if isinstance(attr, ReactiveExpr) and name in kwargs:
                raise AttributeError("Expression attributes can't be modified by injector")
            attr.values[cls] = kwargs.get(name, attr.values[original])
    return cls

class Base(metaclass=MetaReact):
    x = 1
    y = 2 * x
And, after pasting the snippet above in a REPL, here is the result of using injector:
In [97]: Base2 = injector(Base, x=5)
In [98]: Base2.y
Out[98]: 10
The idea is complicated by the fact that the Base class is declared with dependent, dynamically evaluated attributes. While we can inspect a class's static attributes, I think there's no way of getting the dynamic expression except by parsing the class's source code, finding and replacing the "injected" attribute name with its value, and exec/eval-ing the definition again. But that's the way you wanted to avoid (moreover, you expected injector to be unified for all classes).
If you want to rely on dynamically evaluated attributes, define the dependent attribute as a lambda function:
class Base:
    x = 1
    y = lambda: 2 * Base.x

Base.x = 2
print(Base.y())  # 4
I am trying to implement method overloading in Python:
class A:
    def stackoverflow(self):
        print 'first method'
    def stackoverflow(self, i):
        print 'second method', i

ob = A()
ob.stackoverflow(2)
but the output is second method 2; similarly:
class A:
    def stackoverflow(self):
        print 'first method'
    def stackoverflow(self, i):
        print 'second method', i

ob = A()
ob.stackoverflow()
gives
Traceback (most recent call last):
File "my.py", line 9, in <module>
ob.stackoverflow()
TypeError: stackoverflow() takes exactly 2 arguments (1 given)
How do I make this work?
It's method overloading, not method overriding. And in Python, you historically do it all in one function:
class A:
    def stackoverflow(self, i='some_default_value'):
        print('only method')

ob = A()
ob.stackoverflow(2)
ob.stackoverflow()
See the Default Argument Values section of the Python tutorial. See "Least Astonishment" and the Mutable Default Argument for a common mistake to avoid.
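In short, the mutable-default pitfall mentioned above comes from default values being evaluated once, at function definition time. A minimal illustration:

def append_to(item, bucket=[]):  # the default list is created only once
    bucket.append(item)
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] - the same list again, often a surprise

def append_to_fixed(item, bucket=None):
    if bucket is None:  # create a fresh list on every call instead
        bucket = []
    bucket.append(item)
    return bucket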
See PEP 443 for information about the single dispatch generic functions added in Python 3.4:
>>> from functools import singledispatch
>>> @singledispatch
... def fun(arg, verbose=False):
...     if verbose:
...         print("Let me just say,", end=" ")
...     print(arg)
...
>>> @fun.register(int)
... def _(arg, verbose=False):
...     if verbose:
...         print("Strength in numbers, eh?", end=" ")
...     print(arg)
...
>>> @fun.register(list)
... def _(arg, verbose=False):
...     if verbose:
...         print("Enumerate this:")
...     for i, elem in enumerate(arg):
...         print(i, elem)
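Calling fun then dispatches on the type of the first argument; for example:

>>> fun("Hello, world.")
Hello, world.
>>> fun(42, verbose=True)
Strength in numbers, eh? 42
>>> fun(['spam', 'spam', 'eggs'], verbose=True)
Enumerate this:
0 spam
1 spam
2 eggs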
You can also use pythonlangutil:
from pythonlangutil.overload import Overload, signature

class A:
    @Overload
    @signature()
    def stackoverflow(self):
        print('first method')

    @stackoverflow.overload
    @signature("int")
    def stackoverflow(self, i):
        print('second method', i)
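Assuming pythonlangutil dispatches on the declared signatures as advertised, usage would then look like this (untested sketch):

ob = A()
ob.stackoverflow()   # first method
ob.stackoverflow(2)  # second method 2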
While agf was right with that answer in the past, pre-3.4, we now have syntactic sugar: the @overload decorator from the typing module.
See the typing documentation for details on the @overload decorator, but note that this is really just syntactic sugar, and IMHO this is all people have been arguing about ever since.
Personally, I agree that having multiple functions with different signatures is more readable than having a single function with 20+ arguments all set to a default value (None most of the time) and then having to fiddle around using endless if, elif, else chains to find out what the caller actually wants our function to do with the provided set of arguments. This was long overdue following the Python Zen:
Beautiful is better than ugly.
and arguably also
Simple is better than complex.
Straight from the official Python documentation linked above:
from typing import Tuple, overload

@overload
def process(response: None) -> None:
    ...
@overload
def process(response: int) -> Tuple[int, str]:
    ...
@overload
def process(response: bytes) -> str:
    ...
def process(response):
    <actual implementation>
EDIT: for anyone wondering why this example does not work as you'd expect coming from other languages, I'd suggest taking a look at this discussion. The @overload-decorated functions are not supposed to have any actual implementation. This is not obvious from the example in the Python documentation.
In Python, you don't do things that way. When people do that in languages like Java, they generally want a default value (if they don't, they generally want a method with a different name). So, in Python, you can have default values.
class A(object):  # Remember the ``object`` bit when working in Python 2.x
    def stackoverflow(self, i=None):
        if i is None:
            print 'first form'
        else:
            print 'second form'
As you can see, you can use this to trigger separate behaviour rather than merely having a default value.
>>> ob = A()
>>> ob.stackoverflow()
first form
>>> ob.stackoverflow(2)
second form
You can't, never need to and don't really want to.
In Python, everything is an object. Classes are things, so they are objects. So are methods.
There is an object called A which is a class. It has an attribute called stackoverflow. It can only have one such attribute.
When you write def stackoverflow(...): ..., what happens is that you create an object which is the method, and assign it to the stackoverflow attribute of A. If you write two definitions, the second one replaces the first, the same way that assignment always behaves.
Furthermore, you do not want to write code that does the wilder sorts of things that overloading is sometimes used for. That's not how the language works.
Instead of trying to define a separate function for each type of thing you could be given (which makes little sense since you don't specify types for function parameters anyway), stop worrying about what things are and start thinking about what they can do.
You not only can't write a separate one to handle a tuple vs. a list, but also don't want or need to.
All you do is take advantage of the fact that they are both, for example, iterable (i.e. you can write for element in container:). (The fact that they aren't directly related by inheritance is irrelevant.)
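For example, one function that relies only on iteration covers every container type at once; no per-type overloads are needed:

def show_all(container):
    # works for lists, tuples, sets, generators, files...
    for element in container:
        print(element)

show_all([1, 2, 3])  # a list...
show_all((1, 2, 3))  # ...and a tuple, handled by the same code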
I write my answer in Python 3.2.1.
def overload(*functions):
    return lambda *args, **kwargs: functions[len(args)](*args, **kwargs)
How it works:
overload takes any number of callables and stores them in the tuple functions, then returns a lambda.
The lambda takes any number of arguments and returns the result of calling the function stored at functions[number_of_unnamed_args_passed] with the arguments passed to the lambda.
Usage:
class A:
    stackoverflow = overload(
        None,  # there is always a self argument, so this should never get called
        lambda self: print('First method'),
        lambda self, i: print('Second method', i),
    )
I think the word you're looking for is "overloading". There isn't any method overloading in Python. You can however use default arguments, as follows.
def stackoverflow(self, i=None):
    if i is not None:
        print 'second method', i
    else:
        print 'first method'
When you pass it an argument, the first condition holds and the first print statement runs (printing 'second method'). When you pass no arguments, the else branch runs and the second print statement executes (printing 'first method').
I write my answer in Python 2.7:
In Python, method overloading is not possible; if you really want to access the same function with different features, I suggest you go for method overriding.
class Base():  # Base class
    '''def add(self, a, b):
        s = a + b
        print s'''
    def add(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        sum = a + b + c
        print sum

class Derived(Base):  # Derived class
    def add(self, a, b):  # overriding method
        sum = a + b
        print sum

add_fun_1 = Base()      # instance creation for Base class
add_fun_2 = Derived()   # instance creation for Derived class
add_fun_1.add(4, 2, 5)  # function with 3 arguments
add_fun_2.add(4, 2)     # function with 2 arguments
In Python, overloading is not an applied concept. However, if you are trying to create a case where, for instance, you want one initializer to be performed if passed an argument of type foo, and another initializer for an argument of type bar, then, since everything in Python is handled as an object, you can check the name of the passed object's class type and write conditional handling based on that:
class A:
    def __init__(self, arg):
        # Get the argument's class type as a string
        argClass = arg.__class__.__name__

        if argClass == 'foo':
            print 'Arg is of type "foo"'
            ...
        elif argClass == 'bar':
            print 'Arg is of type "bar"'
            ...
        else:
            print 'Arg is of a different type'
            ...
This concept can be applied to multiple different scenarios through different methods as needed.
In Python, you'd do this with a default argument.
class A:
    def stackoverflow(self, i=None):
        if i is None:
            print 'first method'
        else:
            print 'second method', i
Python does not support method overloading like Java or C++. We may define multiple methods with the same name, but only the most recently defined one is usable.
# First sum method.
# Takes two arguments and prints their sum
def sum(a, b):
    s = a + b
    print(s)

# Second sum method
# Takes three arguments and prints their sum
def sum(a, b, c):
    s = a + b + c
    print(s)

# Uncommenting the below line shows an error
# sum(4, 5)

# This line will call the second sum method
sum(4, 5, 5)
We need to use optional arguments or *args to support a different number of arguments at call time, as sketched below.
Courtesy Python | Method Overloading
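A *args version of the same idea, so one definition handles any number of arguments:

def sum_all(*args):
    # args is a tuple of however many arguments were passed
    print(sum(args))

sum_all(4, 5)     # 9
sum_all(4, 5, 5)  # 14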
I just came across overloading.py (function overloading for Python 3) for anybody who may be interested.
From the linked repository's README file:
overloading is a module that provides function dispatching based on the types and number of runtime arguments.
When an overloaded function is invoked, the dispatcher compares the supplied arguments to available function signatures and calls the implementation that provides the most accurate match.
Features:
Function validation upon registration and detailed resolution rules guarantee a unique, well-defined outcome at runtime.
Implements function resolution caching for great performance.
Supports optional parameters (default values) in function signatures.
Evaluates both positional and keyword arguments when resolving the best match.
Supports fallback functions and execution of shared code.
Supports argument polymorphism.
Supports classes and inheritance, including classmethods and staticmethods.
Python 3.x includes the standard typing library, which allows for method overloading with the use of the @overload decorator. Unfortunately, this is only to make the code more readable, as the @overload-decorated methods must be followed by a non-decorated method that handles the different arguments.
More can be found here, but for your example:
from typing import overload
from typing import Any, Optional

class A(object):
    @overload
    def stackoverflow(self) -> None:
        print('first method')
    @overload
    def stackoverflow(self, i: Any) -> None:
        print('second method', i)
    def stackoverflow(self, i: Optional[Any] = None) -> None:
        if i is None:
            print('first method')
        else:
            print('second method', i)

ob = A()
ob.stackoverflow(2)
PEP 3124 proposed an @overload decorator to provide syntactic sugar for overloading via type inspection - instead of just working with overwriting. The PEP was deferred and never landed in the standard library, so the example below relies on the third-party overloading module.
Code example on overloading via @overload from PEP 3124:
from overloading import overload
from collections import Iterable

def flatten(ob):
    """Flatten an object to its component iterables"""
    yield ob

@overload
def flatten(ob: Iterable):
    for o in ob:
        for ob in flatten(o):
            yield ob

@overload
def flatten(ob: basestring):
    yield ob
is transformed by the @overload decorator to:
def flatten(ob):
    if isinstance(ob, basestring) or not isinstance(ob, Iterable):
        yield ob
    else:
        for o in ob:
            for ob in flatten(o):
                yield ob
In the MathMethod.py file:
from multipledispatch import dispatch

@dispatch(int, int)
def Add(a, b):
    return a + b

@dispatch(int, int, int)
def Add(a, b, c):
    return a + b + c

@dispatch(int, int, int, int)
def Add(a, b, c, d):
    return a + b + c + d
In the Main.py file:
import MathMethod as MM
print(MM.Add(200, 1000, 1000, 200))
We can overload the method by using multipledispatch.
There are some libraries that make this easy:
functools - if you only need to dispatch on the first argument, use @singledispatch (see the sketch after this list).
plum-dispatch - feature-rich method/function overloading.
multipledispatch - an alternative to plum with fewer features, but lightweight.
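A sketch of the functools option applied to the question's class, using functools.singledispatchmethod (available since Python 3.8), which dispatches on the type of the first argument after self:

from functools import singledispatchmethod

class A:
    @singledispatchmethod
    def stackoverflow(self, arg):
        print('fallback method', arg)  # used when no registered type matches

    @stackoverflow.register
    def _(self, arg: int):
        print('int method', arg)  # chosen when arg is an int

ob = A()
ob.stackoverflow(2)       # int method 2
ob.stackoverflow('text')  # fallback method text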
Python 3.5 added the typing module, which includes an overload decorator.
This decorator's intended purpose is to help type checkers; functionally it's just duck typing.
from typing import Optional, overload

@overload
def foo(index: int) -> str:
    ...
@overload
def foo(name: str) -> str:
    ...
@overload
def foo(name: str, index: int) -> str:
    ...
def foo(name: Optional[str] = None, index: Optional[int] = None) -> str:
    return f"name: {name}, index: {index}"

foo(1)
foo("bar", 1)
foo("bar", None)
This leads to the following type information in VS Code:
And while this can help, note that it adds lots of "weird" new syntax. Its purpose - purely type hints - is not immediately obvious.
Going with a Union of types is usually a better option.
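For example, a single Union-annotated signature often conveys the same information with no extra machinery:

from typing import Union

def foo(value: Union[int, str]) -> str:
    # one signature documents both accepted types
    return f"value: {value}"

foo(1)
foo("bar")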
Hello, I've only been coding for about 3 weeks now, and I stumbled across this code that doesn't do anything when I put it in.
class calculation(object):
def multiply(self, a=5, b=6):
    self.a = a
    self.b = b
I know it's simple, but I am still new to programming; if anybody could give a brief explanation of why this doesn't work, I would really appreciate it. Thanks
It doesn't work because of an indentation error. Indentation is how Python knows that a def is a method on a class rather than a plain function at the top level, how it knows which lines are part of a loop and where the loop ends, etc. You have to get it right. But you've got this:
class calculation(object):
def multiply(self, a=5, b=6):
    self.a = a
    self.b = b
Because the def is dedented back to the same level as the class, there's nothing inside the class. That's not legal; every compound statement (a statement that ends with a :, like a class definition) has to be followed by something indented.
On top of that, the fact that multiply takes a self parameter means it's almost certainly intended to be a method of some class.
So, to fix it:
class calculation(object):
    def multiply(self, a=5, b=6):
        self.a = a
        self.b = b
And now, it works. But it doesn't seem to do anything, does it?
Of course, it's always possible that you didn't copy the whole thing, or that wherever you copied it from, the code was buggy and missing a line. But let's assume this really is a useful function from someone's working code (except for the indentation error).
First, all you're doing is defining a class. If you never create an instance of that class, much less call any methods on it, the class doesn't do anything. But let's assume you knew that, and you know how to create an instance and call methods. It still doesn't seem to do anything.
Most likely (again, assuming this is what you're actually asking, and that you copied the code right, and…) what this is doing is storing the operands to use later.
A realistic example of why you'd want to do that would be in some kind of expression-tree library, that calls multiply whenever it parses a *, and gradually builds up complex expressions out of simple ones, maybe so you can compile the expression to C code or do algebraic transformations on it.
But that probably sounded like gobbledegook to you, so here's a simple but silly example:
class calculation(object):
    def multiply(self, a=5, b=6):
        self.a = a
        self.b = b

table = []
for a in range(1, 5):
    row = []
    for b in range(1, 5):
        col = calculation()
        col.multiply(a, b)
        row.append(col)
    table.append(row)

print('Times table')
for row in table:
    for col in row:
        print('{} x {} = {}'.format(col.a, col.b, col.a * col.b))
Your problem is that the function inside the class is misindented. Try indenting it. Also, for your multiply function, you probably want to return the two values multiplied:
class calculation(object):
    def multiply(self, a=5, b=6):
        self.a = a
        self.b = b
        return self.a * self.b
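With the return in place, a quick usage check (the second result follows from the defaults a=5, b=6):

calc = calculation()
print(calc.multiply(3, 4))  # 12
print(calc.multiply())      # 30, from the defaults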
Using Python, I am trying to implement a set of types including a "don't care" type, for fuzzy matching. I have implemented it like so:
class Matchable(object):
    def __init__(self, match_type='DEFAULT'):
        self.match_type = match_type

    def __eq__(self, other):
        return (self.match_type == 'DONTCARE' or other.match_type == 'DONTCARE'
                or self.match_type == other.match_type)
Coming from an OO background, this solution seems inelegant; using the Matchable class results in ugly code. I'd prefer to eliminate match_type and instead make each type its own class inheriting from a superclass, then use type checking to do the comparisons. However, type checking appears to be generally frowned upon:
http://www.canonical.org/~kragen/isinstance/
Is there a better (more Pythonic) way to implement this functionality?
Note: I'm aware of the large number of questions and answers about Python "enums", and it may be that one of those answers is appropriate. The requirement for the overridden __eq__ function complicates matters, and I haven't seen a way to use the proposed enum implementations for this case.
The best OO way I can come up with of doing this is:
class Match(object):
    def __eq__(self, other):
        return isinstance(self, DontCare) or isinstance(other, DontCare) or type(self) == type(other)

class DontCare(Match):
    pass

class A(Match):
    pass

class B(Match):
    pass

d = DontCare()
a = A()
b = B()
print d == a
True
print a == d
True
print d == b
True
print a == b
False
print d == 1
True
print a == 1
False
The article you linked says that isinstance isn't always evil, and I think in your case it is appropriate. The main complaint in the article is that using isinstance to check whether an object supports a particular interface reduces opportunities to use implied interfaces, and it's a fair point. In your case, however, you would essentially be using a DontCare class to provide an annotation for how an object should be treated in comparisons, and isinstance would be checking such an annotation, which should be perfectly fine.
I guess you just need to check if any of the operands is a fuzzy type, no?
class Fuzzy(object):
    def __eq__(*args):
        def isFuzzy(obj):
            return isinstance(obj, Fuzzy)
        return any(map(isFuzzy, args))
Now you can do:
>>> class DefaultClass(object):
...     pass
...
>>> class DontCareClass(Fuzzy):
...     pass
...
>>> DefaultClass() == DontCareClass()
True
Since we're using isinstance, this will work just fine with polymorphism. This is a perfectly legitimate use of isinstance in my opinion. What you want to avoid is type checking when you can just rely on duck typing, but this is not one of those cases.
EDIT: Actually, for practical purposes, this would be perfectly fine too:
class Fuzzy(object):
    def __eq__(*args):
        return True
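Note the trade-off with this shorter version: since it never inspects its operands, a Fuzzy instance now compares equal to absolutely anything, not just other Match-style objects:

>>> Fuzzy() == 12345
True
>>> "anything at all" == Fuzzy()
True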
I came upon this while reading the Python documentation on the super function:
If the second argument is omitted, the super object returned is unbound. If the second argument is an object, isinstance(obj, type) must be true. If the second argument is a type, issubclass(type2, type) must be true (this is useful for classmethods).
Can someone please give me an example of the distinction between passing a type as the second argument versus passing an object?
Is the documentation talking about an instance of a class?
Thank you.
Python's super function does different things depending on what its arguments are. Here's a demonstration of different ways of using it:
class Base(object):
    def __init__(self, val):
        self.val = val

    @classmethod
    def make_obj(cls, val):
        return cls(val+1)

class Derived(Base):
    def __init__(self, val):
        # In this super call, the second argument "self" is an object.
        # The result acts like an object of the Base class.
        super(Derived, self).__init__(val+2)

    @classmethod
    def make_obj(cls, val):
        # In this super call, the second argument "cls" is a type.
        # The result acts like the Base class itself.
        return super(Derived, cls).make_obj(val)
Test output:
>>> b1 = Base(0)
>>> b1.val
0
>>> b2 = Base.make_obj(0)
>>> b2.val
1
>>> d1 = Derived(0)
>>> d1.val
2
>>> d2 = Derived.make_obj(0)
>>> d2.val
3
The 3 result is the combination of the previous modifiers: 1 (from Base.make_obj) plus 2 (from Derived.__init__).
Note that while it is possible to call super with just one argument to get an "unbound" super object, it is apparently not useful for much. There's not really any reason to do this unless you want to mess around with Python internals and you really know what you're doing.
In Python 3, you can also call super with no arguments (which is equivalent to the calls in the functions above, but more magical):
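For comparison, here is the Derived class from above rewritten with zero-argument super, as it would be done in Python 3:

class Derived(Base):
    def __init__(self, val):
        # equivalent to super(Derived, self) in the code above
        super().__init__(val + 2)

    @classmethod
    def make_obj(cls, val):
        # equivalent to super(Derived, cls)
        return super().make_obj(val)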
An object can be an instance of any Python class, user-defined or not. But when the documentation talks about a type, it means a class itself - that includes built-ins such as list, tuple, dict, int, and str, as well as user-defined classes.
Here is a simple exploration of the two functions. I found it illuminating going through this exercise. I often will create a simple program exploring the ins and outs of simple functions and save them for reference:
#
# Testing isinstance and issubclass
#
class C1(object):
    def __init__(self):
        object.__init__(self)

class B1(object):
    def __init__(self):
        object.__init__(self)

class B2(B1):
    def __init__(self):
        B1.__init__(self)

class CB1(C1, B1):
    def __init__(self):
        # not sure about this for multiple inheritance
        C1.__init__(self)
        B1.__init__(self)

c1 = C1()
b1 = B1()
cb1 = CB1()

def checkInstanceType(c, t):
    if isinstance(c, t):
        print c, "is of type", t
    else:
        print c, "is NOT of type", t

def checkSubclassType(c, t):
    if issubclass(c, t):
        print c, "is a subclass of type", t
    else:
        print c, "is NOT a subclass of type", t

print "comparing isinstance and issubclass"
print ""

# checking isinstance
print "checking isinstance"
# can check instance against type
checkInstanceType(c1, C1)
checkInstanceType(c1, B1)
checkInstanceType(c1, object)
# can check type against type
checkInstanceType(C1, object)
checkInstanceType(B1, object)
# cannot check instance against instance
try:
    checkInstanceType(c1, b1)
except Exception, e:
    print "failed to check instance against instance", e

print ""

# checking issubclass
print "checking issubclass"
# cannot check instance against type
try:
    checkSubclassType(c1, C1)
except Exception, e:
    print "failed to check instance against type", e
# can check type against type
checkSubclassType(C1, C1)
checkSubclassType(B1, C1)
checkSubclassType(CB1, C1)
checkSubclassType(CB1, B1)
# cannot check type against instance
try:
    checkSubclassType(C1, c1)
except Exception, e:
    print "failed to check type against instance", e
Edit:
Also consider the following, as isinstance can break API implementations. An example would be an object that acts like a dictionary but is not derived from dict; code using isinstance to check for a dictionary would reject that object, even though it supports dictionary-style access:
isinstance considered harmful
Edit2:
Can someone please give me an example of a distinction between passing a Type as a second argument versus passing an Object?
After testing the above code, it tells me the second parameter must be a type. So in the following case:
checkInstanceType(c1, b1)
The call will fail. It could be written:
checkInstanceType(c1, type(b1))
So if you want to check the type of one instance against another instance, you have to use the type() built-in call.