I'm new to Python so apologies in advance if this is a stupid question.
For an assignment I need to overload the augmented arithmetic assignments (+=, -=, /=, *=, **=, %=) for a class myInt. I checked the Python documentation and this is what I came up with:
def __iadd__(self, other):
    if isinstance(other, myInt):
        self.a += other.a
    elif type(other) == int:
        self.a += other
    else:
        raise Exception("invalid argument")
self.a and other.a refer to the int stored in each class instance. I tried testing this out as follows, but each time I get 'None' instead of the expected value 5:
c = myInt(2)
b = myInt(3)
c += b
print c
Can anyone tell me why this is happening? Thanks in advance.
You need to add return self to your method. Explanation:
The semantics of a += b, when type(a) has a special method __iadd__, are defined to be:
a = a.__iadd__(b)
so if __iadd__ returns something other than self, that's what will be bound to the name a after the operation. Because it has no return statement, the method you posted is equivalent to one that ends with return None.
Augmented assignment methods in Python have to return the final value to be bound to the name they were invoked on, usually (and in your case) self. As with any Python function, falling off the end without a return statement means returning None.
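A minimal illustration of that rebinding (Demo here is a throwaway class invented just for this example):

class Demo(object):
    def __iadd__(self, other):
        self.value = other  # mutate, but forget to return self
        # falling off the end returns None

d = Demo()
d += 1
print(d)  # prints None: d was rebound to __iadd__'s return value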
Also,
Never ever ever raise Exception: it's impossible to catch sanely, because the code to do so would have to say except Exception, which catches all exceptions. In this case you want ValueError or TypeError.
Don't typecheck with type(foo) == SomeType. In this (and virtually every) case, isinstance works better, or at least as well.
Whenever you make your own type, like myInt, name it with a leading capital letter (MyInt) so people can recognize it as a class name.
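Putting those points together, a sketch of the same method in that style (the class is renamed MyInt here purely to illustrate the naming convention):

class MyInt(object):
    def __init__(self, a):
        self.a = a

    def __iadd__(self, other):
        # Accept another MyInt or a plain int; reject everything else.
        if isinstance(other, MyInt):
            self.a += other.a
        elif isinstance(other, int):
            self.a += other
        else:
            raise TypeError("cannot add %r to MyInt" % (other,))
        return self  # += rebinds the name to whatever this returns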
Yes, you need return self; it will look like this:
def __iadd__(self, other):
    if isinstance(other, myInt):
        self.a += other.a
        return self
    elif type(other) == int:
        self.a += other
        return self
    else:
        raise Exception("invalid argument")
I have a class that looks more or less like this:
class Something():
    def __init__(self, a=None, b=None):
        self.a = a
        self.b = b
I want to be able to sort it in a list; normally I'd just implement a method like this:
def __lt__(self, other):
    return (self.a, self.b) < (other.a, other.b)
But this will raise an error in the following case:
sorted([Something(1, None), Something(1, 1)])
Whereas what I want is for None values to be treated as greater, i.e. the following output:
[Something(1, 1), Something(1, None)]
The first thing that comes to my mind is to change __lt__ to:
def __lt__(self, other):
    if self.a and other.a:
        if self.a != other.a:
            return self.a < other.a
    elif self.a is None:
        return True
    elif other.a is None:
        return False
    if self.b and other.b:
        if self.b != other.b:
            return self.b < other.b
    elif self.b is None:
        return True
    return False
This would give me the correct results, but it's just ugly, and Python usually has a simpler way; I also don't really want to repeat it for every attribute that my full class (omitted here to keep the problem clear) uses for sorting.
So what is the pythonic way of solving this?
Note
I also tried the following, but I'm assuming something even better is possible:
def __lt__(self, other):
    sorting_attributes = ['a', 'b']
    for attribute in sorting_attributes:
        self_value = getattr(self, attribute)
        other_value = getattr(other, attribute)
        if self_value and other_value:
            if self_value != other_value:
                return self_value < other_value
        elif self_value is None:
            return True
        elif other_value is None:
            return False
I'm really trying to internalize the Zen of Python, and I know that my code is ugly, so how do I fix it?
A completely different design I thought of later (posted separately because it's so different it should really be evaluated independently):
Map all your attributes to tuples, where the first element of every tuple is a bool based on the None-ness of the attribute, and the second is the attribute value itself. None/non-None mismatches short-circuit on the bool representing None-ness, preventing the TypeError; everything else falls through to comparing the actual values:
def __lt__(self, other):
    def _key(attr):
        # Use attr is not None to make None less than everything, is None for greater
        return (attr is None, attr)
    return (_key(self.a), _key(self.b)) < (_key(other.a), _key(other.b))
Probably slightly slower than my other solution in the case where no None/non-None pair occurs, but much simpler code. It also has the advantage of continuing to raise TypeErrors when mismatched types other than None/non-None arise, rather than potentially misbehaving. I'd definitely call this one my Pythonic solution, even if it is slightly slower in the common case.
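For instance, dropping that __lt__ into the question's Something class gives the requested ordering (a __repr__ is added here only so the sorted output is readable):

class Something(object):
    def __init__(self, a=None, b=None):
        self.a = a
        self.b = b

    def __repr__(self):
        return 'Something(%r, %r)' % (self.a, self.b)

    def __lt__(self, other):
        def _key(attr):
            # None maps to (True, None), everything else to (False, value)
            return (attr is None, attr)
        return (_key(self.a), _key(self.b)) < (_key(other.a), _key(other.b))

print(sorted([Something(1, None), Something(1, 1)]))
# [Something(1, 1), Something(1, None)]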
An easy way to do this is to convert None to infinity, i.e. float('inf'):
def __lt__(self, other):
    def convert(i):
        return float('inf') if i is None else i
    return [convert(i) for i in (self.a, self.b)] < [convert(i) for i in (other.a, other.b)]
A solution for the general case (where there may not be a convenient "bigger than any value" sentinel, and you don't want the code to grow more complex as the number of attributes increases) that still operates as fast as possible in the presumed common case of no None values. It does assume a TypeError means None was involved, so if you're likely to have mismatched types besides None, this gets more complicated; frankly, though, a class design like that is painful to contemplate. This works for any scenario with two or more keys (so attrgetter returns a tuple) and only requires changing the names used to construct the attrgetter to add or remove fields to compare.
import operator

def __lt__(self, other, _key=operator.attrgetter('a', 'b')):
    # Get the keys once for both inputs efficiently (avoids repeated lookup)
    sattrs = _key(self)
    oattrs = _key(other)
    try:
        return sattrs < oattrs  # Fast path for no Nones or only paired Nones
    except TypeError:
        for sattr, oattr in zip(sattrs, oattrs):
            # Only care if exactly one is None, because until then the pairs must be equal,
            # or the TypeError wouldn't occur as we would have short-circuited
            if (sattr is None) ^ (oattr is None):
                # Exactly one is None, so if it's the right side, self is lesser
                return oattr is None
        # TypeError implied we should see a mismatch, so assert this to be sure
        # we didn't have a non-None related type mismatch
        assert False, "TypeError raised, but no None/non-None pair seen"
A useful feature of this design is that under no circumstances are rich comparisons invoked more than once for any given attribute; the failed attempt at the fast path proves that there must (assuming the invariant that types are either compatible or None holds) be a run of zero or more attribute pairs with equal values, followed by a None/non-None mismatch. Since everything we care about is known to be either equal or a None/non-None mismatch, we don't need to invoke potentially expensive rich comparisons again; we just do cheap identity testing to find the None/non-None mismatch and then return a result based on which side was None.
Note: while the accepted answer achieves the result I wanted, and @ecatmur's answer provides a more comprehensive option, I feel it's very important to emphasize that my use case is a bad idea in the first place. This is explained very well in @Jason Orendorff's answer below.
Note: this question is not a duplicate of the question about sys.maxint. It has nothing to do with sys.maxint; even in Python 2, where sys.maxint is available, it does NOT represent the largest integer (see the accepted answer).
I need to create an integer that's larger than any other integer, meaning an int object which returns True when compared to any other int object using >. Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer.
In python 2, I can use sys.maxint (edit: I was wrong). In python 3, math.inf is the closest equivalent, but I can't convert it to int.
Since python integers are unbounded, you have to do this with a custom class:
import functools

@functools.total_ordering
class NeverSmaller(object):
    def __le__(self, other):
        return False

class ReallyMaxInt(NeverSmaller, int):
    def __repr__(self):
        return 'ReallyMaxInt()'
Here I've used a mix-in class NeverSmaller rather than direct decoration of ReallyMaxInt, because on Python 3 the action of functools.total_ordering would have been prevented by existing ordering methods inherited from int.
Usage demo:
>>> N = ReallyMaxInt()
>>> N > sys.maxsize
True
>>> isinstance(N, int)
True
>>> sorted([1, N, 0, 9999, sys.maxsize])
[0, 1, 9999, 9223372036854775807, ReallyMaxInt()]
Note that in python2, sys.maxint + 1 is bigger than sys.maxint, so you can't rely on that.
Disclaimer: This is an integer in the OO sense, it is not an integer in the mathematical sense. Consequently, arithmetic operations inherited from the parent class int may not behave sensibly. If this causes any issues for your intended use case, then they can be disabled by implementing __add__ and friends to just error out.
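For example, one possible variant of ReallyMaxInt with the arithmetic "erred out", reusing the NeverSmaller mix-in from above (the list of aliased operators is illustrative, not exhaustive):

class ReallyMaxInt(NeverSmaller, int):
    def __repr__(self):
        return 'ReallyMaxInt()'
    def __add__(self, other):
        # Arithmetic on a largest-int sentinel is almost certainly a bug, so fail loudly
        raise TypeError('arithmetic is not supported on ReallyMaxInt')
    # Alias the same guard onto whichever other operators matter for your use case
    __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__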
Konsta Vesterinen's infinity.Infinity would work (pypi), except that it doesn't inherit from int, but you can subclass it:
from infinity import Infinity

class IntInfinity(Infinity, int):
    pass

assert isinstance(IntInfinity(), int)
assert IntInfinity() > 1e100
Another package that implements "infinity" values is Extremes, which was salvaged from the rejected PEP 326; again, you'd need to subclass from extremes.Max and int.
Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer.
This sounds like a flaw in the library that should be fixed in its interface. Then all its users would benefit. What library is it?
Creating a magical int subclass with overridden comparison operators might work for you. It's brittle, though; you never know what the library is going to do with that object. Suppose it converts it to a string. What should happen? And data is naturally used in different ways as a library evolves; you may update the library one day to find that your trick doesn't work anymore.
It seems to me that this would be fundamentally impossible. Let's say you write a function that returns this RBI ("really big int"). If the computer is capable of storing it, then someone else could write a function that returns the same value. Is your RBI greater than itself?
Perhaps you can achieve the desired result with something like @wim's answer: create an object that overrides the comparison operators so that "<" always returns false and ">" always returns true. (I haven't written a lot of Python. In most object-oriented languages this would only work if the comparison puts your value first, as in RBI > x; if someone writes the comparison the other way, x > RBI, it will fail because the compiler doesn't know how to compare integers to a user-defined class.)
In Python 3.5, you can do:
import math
test = math.inf
And then:
test > 1
test > 10000
test > x
Will always be true. Unless of course, as pointed out, x is also infinity or "nan" ("not a number").
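For completeness, the comparisons behave as described, but converting math.inf back to an int (the sticking point mentioned in the question) still fails:

>>> import math
>>> math.inf > 10 ** 100
True
>>> int(math.inf)
Traceback (most recent call last):
  ...
OverflowError: cannot convert float infinity to integer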
How can I represent an infinite number in Python?
Answered by @WilHall
You should not be inheriting from int unless you want both its interface and its implementation. (Its implementation is an automatically-widening set of bits representing a finite number; you clearly don't want that.) Since you only want the interface, inherit from the ABC Integral instead. Thanks to @ecatmur's answer, we can use the infinity package to deal with the nitty-gritty of infinity (including negation). Here is how we could combine infinity with the ABC Integral:
import pytest
from math import ceil
from infinity import Infinity
from numbers import Integral
class IntegerInfinity(Infinity, Integral):
    def __and__(self, other):
        raise NotImplementedError
    def __ceil__(self):
        raise NotImplementedError
    def __floor__(self):
        raise NotImplementedError
    def __int__(self):
        raise NotImplementedError
    def __invert__(self):
        raise NotImplementedError
    def __lshift__(self, other):
        raise NotImplementedError
    def __mod__(self, other):
        raise NotImplementedError
    def __or__(self, other):
        raise NotImplementedError
    def __rand__(self, other):
        raise NotImplementedError
    def __rlshift__(self, other):
        raise NotImplementedError
    def __rmod__(self, other):
        raise NotImplementedError
    def __ror__(self, other):
        raise NotImplementedError
    def __round__(self):
        raise NotImplementedError
    def __rrshift__(self, other):
        raise NotImplementedError
    def __rshift__(self, other):
        raise NotImplementedError
    def __rxor__(self, other):
        raise NotImplementedError
    def __trunc__(self):
        raise NotImplementedError
    def __xor__(self, other):
        raise NotImplementedError
def test():
    x = IntegerInfinity()
    assert x > 2
    assert not x < 3
    assert x >= 5
    assert not x <= -10
    assert x == x
    assert not x > x
    assert not x < x
    assert x >= x
    assert x <= x
    assert -x == -x
    assert -x <= -x
    assert -x <= x
    assert -x < x
    assert -x < -1000
    assert not -x < -x
    with pytest.raises(Exception):
        int(x)
    with pytest.raises(Exception):
        x | x
    with pytest.raises(Exception):
        ceil(x)
This can be run with pytest to verify the required invariants.
Another way to do this (very much inspired by wim's answer) might be an object that isn't infinite, but increases on the fly as needed.
Here's what I have in mind:
from functools import wraps

class AlwaysBiggerDesc():
    '''A data descriptor that always returns a value bigger than instance._compare'''
    def __get__(self, instance, owner):
        try:
            return instance._compare + 1
        except AttributeError:
            return instance._val
    def __set__(self, instance, value):
        try:
            del instance._compare
        except AttributeError:
            pass
        instance._val = value

class BiggerThanYou(int):
    '''A class that behaves like an integer but that increases as needed so as to be
    bigger than "other" values. Defaults to 1 so that instances are considered
    to be "truthy" for boolean comparisons.'''
    val = AlwaysBiggerDesc()
    def __getattribute__(self, name):
        f = super().__getattribute__(name)
        try:
            intf = getattr(int, name)
        except AttributeError:
            intf = None
        if f is intf:
            @wraps(f)
            def wrapper(*args):
                try:
                    self._compare = args[1]
                except IndexError:
                    self._compare = 0  # Note: 1 will be returned by val descriptor
                new_bigger = BiggerThanYou()
                try:
                    new_bigger.val = f(self.val, *args[1:])
                except IndexError:
                    new_bigger.val = f(self.val)
                return new_bigger
            return wrapper
        else:
            return f
    def __repr__(self):
        return 'BiggerThanYou()'
    def __str__(self):
        return '1000...'
Something like this might avoid a lot of weird behavior that one might not expect. Note that with this kind of approach, if two BiggerThanYou instances are involved in an operation, the LHS would be considered bigger than the RHS.
EDIT: currently this is not working; I'll fix it later. It seems I am being bitten by the special method lookup functionality.
I'm looking for the most efficient way of comparing the contents of two class instances. I have a list containing these class instances, and before appending to the list I want to determine if their property values are the same. This may seem trivial to most, but after perusing these forums I wasn't able to find anything specific to what I'm trying to do. Also note that I don't have a programming background.
This is what I have so far:
import copy

class BaseObject(object):
    def __init__(self, name=''):
        self._name = name
    def __repr__(self):
        return '<{0}: \'{1}\'>'.format(self.__class__.__name__, self._name)
    def _compare(self, other, *attributes):
        count = 0
        if isinstance(other, self.__class__):
            if len(attributes):
                for attrib in attributes:
                    if (attrib in self.__dict__.keys()) and (attrib in other.__dict__.keys()):
                        if self.__dict__[attrib] == other.__dict__[attrib]:
                            count += 1
                return (count == len(attributes))
            else:
                for attrib in self.__dict__.keys():
                    if (attrib in self.__dict__.keys()) and (attrib in other.__dict__.keys()):
                        if self.__dict__[attrib] == other.__dict__[attrib]:
                            count += 1
                return (count == len(self.__dict__.keys()))
    def _copy(self):
        return (copy.deepcopy(self))
Before adding to my list, I'd do something like:
found = False
for instance in myList:
    if instance._compare(newInstance):
        found = True
        break
if not found: myList.append(newInstance)
However, I'm unclear whether this is the most efficient or Pythonic way of comparing the contents of instances of the same class.
Implement a __eq__ special method instead:
def __eq__(self, other, *attributes):
    if not isinstance(other, type(self)):
        return NotImplemented
    if attributes:
        d = float('NaN')  # default that won't compare equal, even with itself
        return all(self.__dict__.get(a, d) == other.__dict__.get(a, d) for a in attributes)
    return self.__dict__ == other.__dict__
Now you can just use:
if newInstance in myList:
and Python will automatically use the __eq__ special method to test for equality.
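In other words, the append-if-not-found loop from the question collapses to:

if newInstance not in myList:
    myList.append(newInstance)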
In my version I retained the ability to pass in a limited set of attributes:
instance1.__eq__(instance2, 'attribute1', 'attribute2')
but using all() to make sure we only test as much as is needed.
Note that we return NotImplemented, a special singleton object, to signal that this class does not support the comparison; Python will then ask the other object whether it supports equality testing instead.
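A small illustration of that hand-off (both classes here are invented for the demo):

class Agreeable(object):
    def __eq__(self, other):
        return True           # claims equality with anything

class Undecided(object):
    def __eq__(self, other):
        return NotImplemented  # declines to answer

print(Undecided() == Agreeable())  # True: the left side declines, so Python asks the right side
print(Undecided() == object())     # False: both sides decline, so Python falls back to identity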
You can implement the comparison magic method __eq__(self, other) for your class, then simply do
if instance == newInstance:
As you apparently don't know what attributes your instance will have, you could do:
def __eq__(self, other):
    return isinstance(other, type(self)) and self.__dict__ == other.__dict__
Your method has one major flaw: if you have reference cycles between classes that both derive from BaseObject, the comparison will never finish and will instead die with a stack overflow.
In addition, two objects of different classes but with the same attribute values compare as equal. Trivial example: any instance of BaseObject with no attributes will compare as equal to any instance of a BaseObject subclass with no attributes (because if issubclass(C, B) and a is an instance of C, then isinstance(a, B) returns True).
Finally, rather than writing a custom _compare method, just call it __eq__ and reap all the benefits of being able to use the == operator (including containment testing in lists, container comparisons, etc.).
As a matter of personal preference, though, I'd stay away from that sort-of automatically-generated comparison, and explicitly compare explicit attributes.
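For example, an explicit, hand-written version for the question's BaseObject might look like this (which attributes count as significant is, of course, up to you):

class BaseObject(object):
    def __init__(self, name=''):
        self._name = name

    def __eq__(self, other):
        if not isinstance(other, BaseObject):
            return NotImplemented
        # Spell out exactly which attributes define equality.
        return self._name == other._name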
Suppose I define a class A and I don't want anyone to be able to write an inequality test against that class without an error being raised.
class A():
    def __ne__(self, other):
        return NotImplemented

print(A() != A())
But this prints True and doesn't raise a TypeError, even though I have deliberately "turned off" the != operator. Why?
When you return NotImplemented you indicate that you do not know if __ne__ should return True or False.
Normally, Python will then swap the operands; if a != b results in NotImplemented, it'll try b != a instead. That'll fail here too, since you use the same type on both sides of the operator. For the != operator, Python then falls back to comparing the objects' identities (their memory addresses); these are two separate instances, so they are not identical and a != b evaluates to True.
See the do_richcompare C function for details.
You'll have to raise TypeError() manually if that is your expected outcome.
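For example, a minimal sketch (this makes != blow up unconditionally, while == keeps its default behaviour):

class A(object):
    def __ne__(self, other):
        raise TypeError("'!=' is not supported between instances of 'A'")

A() != A()  # raises TypeError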
It gives you True because returning NotImplemented is not the same as raising an error: NotImplemented is an ordinary singleton object, not an exception, so returning it simply tells Python to fall back to its default comparison instead of aborting the test.
Remember that an exception object does nothing until you actually raise it.
so you should have a method looking like this:
class A():
    def __ne__(self, other):
        raise NotImplementedError
class TrafficData(object):
    def __init__(self):
        self.__data = {}
    def __getitem__(self, epoch):
        if not isinstance(epoch, int):
            raise TypeError()
        return self.__data.setdefault(epoch, ProcessTraffic())
    def __iadd__(self, other):
        for epoch, traffic in other.iteritems():
            # these work
            #existing = self[epoch]
            #existing += traffic
            # this does not
            self[epoch] += traffic  # here the exception is thrown
        return self
In the above trimmed down code, I do not expect an item assignment, yet apparently one is occurring on the marked line, and throwing the following exception:
File "nethogs2.py", line 130, in __iadd__
self[epoch] += traffic
TypeError: 'TrafficData' object does not support item assignment
However, if I instead use the two commented-out lines above it, no exception is thrown.
As I see it, the two should behave in the same way: self[epoch] returns a reference to an object, and it's modified in place through that object's __iadd__. What am I misunderstanding here? I frequently run into this problem when using dictionaries.
Update0
It's probably worth pointing out that the values in self.__data have __iadd__ defined, but not __add__, and I'd much prefer to modify the value in place if possible. I would also like to avoid creating a __setitem__ method.
Update1
Below is a test case demonstrating the problem; I've left the code above for the existing answers.
class Value(object):
    def __init__(self, initial=0):
        self.a = initial
    def __iadd__(self, other):
        self.a += other
        return self
    def __str__(self):
        return str(self.a)

class Blah(object):
    def __init__(self):
        self.__data = {}
    def __getitem__(self, key):
        return self.__data.setdefault(key, Value())

a = Blah()
b = a[1]
b += 1
print a[1]
a[1] += 2
print a[1]
What you are effectively doing in:
self[epoch] += traffic
is:
self[epoch] = self[epoch] + traffic
(or self[epoch] = self[epoch].__iadd__(traffic), since your values define __iadd__). But you haven't defined a __setitem__ method, so you can't do that assignment on self.
You also need:
def __setitem__(self, epoch, value):
    self.__data[epoch] = value
or something similar.
It's probably worth pointing out that the values in self.__data have __iadd__ defined, but not __add__, and I'd much prefer to modify the value in place if possible.
To add some precision to previous answers, under the circumstances you describe, self[epoch] += traffic translates exactly to:
self[epoch] = self[epoch].__iadd__(traffic)
So if all you want are the side effects of __iadd__, without the item-assignment part, your choices are limited to the workaround that you've already identified in the comments in the code you've posted, or calling __iadd__ yourself -- possibly through the operator module, though I believe operator.__iadd__(self[epoch], traffic) has no added value compared to the simpler self[epoch].__iadd__(traffic) (when self[epoch] does have a __iadd__ method).
The code:
self[epoch] += traffic
is (roughly) syntactic sugar for:
self[epoch] = self[epoch] + traffic
So the assignment is not unexpected; it is the assignment built into +=. That is why you also need to override the __setitem__() method.