Test equality of two functions in python - python

I want two functions to compare equal based on an attached signature value, like this:
def fn_maker(fn_signature):
    def _fn():
        pass
    _fn.signature = fn_signature
    return _fn
# test equality of two function instances based on the equality of their signature values
>>> fa = fn_maker(1)
>>> fb = fn_maker(1)
>>> fc = fn_maker(2)
>>> fa == fb # should be True, same signature values
True
>>> fa == fc # should be False, different signature values
False
How should I do it? I know I could probably override __eq__ and __ne__ if fa, fb, fc were instances of some class. But here __eq__ is not in dir(fa), and setting it on the function object doesn't work.
I figured out a workaround using a cache, e.g.:
def fn_maker(fn_signature):
    if fn_signature in fn_maker.cache:
        return fn_maker.cache[fn_signature]
    def _fn():
        pass
    _fn.signature = fn_signature
    fn_maker.cache[fn_signature] = _fn
    return _fn
fn_maker.cache = {}
This way there is a guarantee that only one function object exists per signature value (kind of like a singleton). But I am really looking for a neater solution.
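For comparison, the same caching idea can be sketched with functools.lru_cache (Python 3 only, and it assumes the signature values are hashable); equal signatures then return the identical cached function object, so == holds by identity:
import functools

@functools.lru_cache(maxsize=None)
def fn_maker(fn_signature):
    def _fn():
        pass
    _fn.signature = fn_signature
    return _fn

fa, fb, fc = fn_maker(1), fn_maker(1), fn_maker(2)
assert fa == fb      # same cached object
assert fa != fc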

If you turn your functions into instances of a class that overrides __call__() as well as the comparison operators, it becomes very easy to achieve the semantics you want.
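A minimal sketch of that approach (SignedFunction is an illustrative name, not anything from the question):
class SignedFunction(object):
    """Callable that compares equal to other SignedFunctions with the same signature."""
    def __init__(self, fn_signature):
        self.signature = fn_signature

    def __call__(self):
        pass  # body of the original _fn goes here

    def __eq__(self, other):
        return isinstance(other, SignedFunction) and self.signature == other.signature

    def __ne__(self, other):      # needed on Python 2; Python 3 derives it from __eq__
        return not self.__eq__(other)

    def __hash__(self):
        # keep hashing consistent with equality
        return hash(self.signature)

def fn_maker(fn_signature):
    return SignedFunction(fn_signature)

fa, fb, fc = fn_maker(1), fn_maker(1), fn_maker(2)
assert fa == fb   # same signature values
assert fa != fc   # different signature values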

It is not possible to override the __eq__ implementation for functions (tested with Python 2.7)
>>> def f():
... pass
...
>>> class A(object):
... pass
...
>>> a = A()
>>> a == f
False
>>> setattr(A, '__eq__', lambda x,y: True)
>>> a == f
True
>>> setattr(f.__class__, '__eq__', lambda x,y: True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'function'

I don't think it's possible.
But overriding __call__ seems like a nice solution to me.

Is it a good idea to use "is" to check which function is contained in a variable? [duplicate]

I have a variable that contains a function.
def foo(x):
    return x + 42
def bar(x):
    return x - 42
my_var = foo
I want to check if that function is a certain function. Should I use is or ==?
my_var == foo
my_var == bar
and
my_var is foo
my_var is bar
both return what I expect.
They are the same thing for a function object. The == operator calls the __eq__ function to perform the comparison. The function object does not define an __eq__ method:
>>> def foo():
... pass
...
>>> foo.__eq__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'function' object has no attribute '__eq__'
Therefore the interpreter falls back to the default comparison implemented for all objects in CPython:
res = (v == w) ? Py_True : Py_False;
which is basically a pointer comparison, essentially the same as is.
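A quick interactive check illustrates this: two separately defined (even textually identical) functions never compare equal, because the comparison falls back to identity.
>>> def f(): pass
...
>>> def g(): pass
...
>>> f == g
False
>>> f == f
True
>>> (f == g) == (f is g)
True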
No, you should use ==.
A good rule of thumb is to use is only for is None / is not None checks and nowhere else. In this case, comparing plain functions with is works, but if you compare a method of a class instance with itself you'll get False (even on the same instance), whereas comparing with == returns what you expect:
>>> class A:
...     def my_method(self):
...         pass
...
>>> a = A()
>>> a.my_method is a.my_method
False
>>> a.my_method == a.my_method
True
Better to avoid having to remember this and always compare functions with ==.
See this question: Why don't methods have reference equality?
is checks for the identity of an object. If you assign foo to my_var, an alias is created and both names refer to the same object, with the same id (in the case of functions at least).
Checking two functions for equivalence from a mathematical standpoint entails checking the equivalence of the domains and codomains of both functions, which Python's == does not attempt.
So is is better.

How to mock in-place arithmetic operators

I'm trying to mock the in-place operators' magic methods like __iadd__ with MagicMock from unittest.mock, but the call assertion unexpectedly fails:
>>> from unittest.mock import MagicMock
>>> m = MagicMock()
>>> m += 1
>>> m.__iadd__.assert_called_once() # This is expected NOT to fail
Traceback (most recent call last):
...
AssertionError: Expected '__iadd__' to have been called once. Called 0 times.
Mocking other magic methods works fine:
>>> m = MagicMock()
>>> m + 1
>>> m.__add__.assert_called_once()
>>> # No error
Doing m += 1 rebinds m to a new MagicMock instance, since all mock methods return new mocks. In regular classes we override __iadd__ like this:
class A:
    def __iadd__(self, other):
        ...
        return self  # <-- We must return self
But all of the mock's methods, including __iadd__, look like this:
def __iadd__(self, *args, **kwargs):
    ...
    return MagicMock()
In my opinion that's the reason why it fails.
So, how do I properly mock in-place arithmetic magic methods?
Change the return value of __iadd__ so that m += 1 rebinds m to the same mock:
>>> m = MagicMock()
>>> m.__iadd__.return_value = m
>>> m += 1
>>> m.__iadd__.assert_called_once()
>>> m.__iadd__.call_count
1
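The same fix applies to the other in-place operator hooks (__isub__, __imul__, and so on): point return_value back at the mock before the statement runs. For example:
>>> m = MagicMock()
>>> m.__isub__.return_value = m
>>> m -= 3
>>> m.__isub__.assert_called_once_with(3)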

compare two custom lists python

I'm having trouble comparing two lists of objects in Python.
I'm converting a message into instances of this class:
class StatusMessage(object):
    def __init__(self, conversation_id, platform):
        self.__conversation_id = str(conversation_id)
        self.__platform = str(platform)

    @property
    def conversation_id(self):
        return self.__conversation_id

    @property
    def platform(self):
        return self.__platform
Now when I create two lists of type StatusMessage
>>> expected = []
>>> expected.append(StatusMessage(1, "abc"))
>>> expected.append(StatusMessage(2, "bbc"))
>>> actual = []
>>> actual.append(StatusMessage(1, "abc"))
>>> actual.append(StatusMessage(2, "bbc"))
and then I compare the two lists using
>>> cmp(actual, expected)
or
>>> len(set(expected_messages_list).difference(actual_list)) == 0
I keep getting failures.
When I debug and actually compare for each item within the list like
>>> actual[0].conversation_id == expected[0].conversation_id
>>> actual[0].platform == expected[0].platform
then I always see
True
Doing the following returns -1:
>>> cmp(actual[0], expected[0])
Why is this so? What am I missing?
You must tell Python how to check two instances of the class StatusMessage for equality.
For example, adding the method
def __eq__(self, other):
    return (self is other) or (self.conversation_id, self.platform) == (other.conversation_id, other.platform)
will have the following effect:
>>> cmp(expected,actual)
0
>>> expected == actual
True
If you want to use cmp with your StatusMessage objects, consider implementing the __lt__ and __gt__ methods as well. I don't know by which rule you want to consider one instance less than or greater than another.
In addition, consider returning False (or error-checking) when comparing a StatusMessage object with an arbitrary object that has no conversation_id or platform attribute. Otherwise, you will get an AttributeError:
>>> actual[0] == 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "a.py", line 16, in __eq__
return (self is other) or (self.conversation_id, self.platform) == (other.conversation_id, other.platform)
AttributeError: 'int' object has no attribute 'conversation_id'
You can find one reason why the self is other check is a good idea here (possibly unexpected results in multithreaded applications).
Because you are trying to compare two custom objects, you have to define what makes the objects equal or not. You do this by defining the __eq__() method on the StatusMessage class:
class StatusMessage(object):
    def __eq__(self, other):
        return (self.conversation_id == other.conversation_id and
                self.platform == other.platform)
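If you also want the set-based comparison from the question to work, the class additionally needs a __hash__ that is consistent with __eq__ (on Python 3, defining __eq__ alone makes instances unhashable; on Python 2 it leaves the default identity hash, which breaks set membership for equal-but-distinct objects). A fuller sketch combining both ideas:
class StatusMessage(object):
    def __init__(self, conversation_id, platform):
        self.__conversation_id = str(conversation_id)
        self.__platform = str(platform)

    @property
    def conversation_id(self):
        return self.__conversation_id

    @property
    def platform(self):
        return self.__platform

    def __eq__(self, other):
        if not isinstance(other, StatusMessage):
            return NotImplemented   # let Python fall back instead of raising
        return (self.conversation_id, self.platform) == \
               (other.conversation_id, other.platform)

    def __ne__(self, other):        # needed on Python 2; Python 3 derives it
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __hash__(self):
        # must be consistent with __eq__ for set()/dict membership checks
        return hash((self.conversation_id, self.platform))
With this in place, expected == actual is True and set(expected).difference(actual) is empty for the lists in the question.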

How to use a DefaultDict with a lambda expression to make the default changeable?

defaultdicts are useful because they give you a dictionary that can create missing keys on the fly, with a callable used to produce the default value, e.g. using str to make an empty string the default.
>>> from collections import defaultdict
>>> food = defaultdict(str)
>>> food['apple']
''
You can also use lambda to make an expression be the default value.
>>> food = defaultdict(lambda: "No food")
>>> food['apple']
'No food'
However, you can't pass any parameters to this lambda: defaultdict calls the default factory with no arguments, so a one-argument lambda raises an error when it is called.
>>> food = defaultdict(lambda x: "{} food".format(x))
>>> food['apple']
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
food['apple']
TypeError: <lambda>() takes exactly 1 argument (0 given)
Even if you try to supply the parameter
>>> food['apple'](12)
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
food['apple']
TypeError: <lambda>() takes exactly 1 argument (0 given)
How could these lambda functions be responsive rather than a rigid expression?
Using a variable in the expression can actually circumvent this somewhat.
>>> from collections import defaultdict
>>> baseLevel = 0
>>> food = defaultdict(lambda: baseLevel)
>>> food['banana']
0
>>> baseLevel += 10
>>> food['apple']
10
>>> food['banana']
0
The default lambda expression is tied to a variable that can change without affecting the keys it has already created. This is particularly useful when the default is tied to another function that is only evaluated when a nonexistent key is accessed.
>>> import time
>>> joinTime = defaultdict(lambda: time.time())
>>> joinTime['Steven']
1432137137.774
>>> joinTime['Catherine']
1432137144.704
>>> for customer in joinTime:
...     print customer, joinTime[customer]
...
Catherine 1432137144.7
Steven 1432137137.77
Ugly but may be useful to someone:
class MyDefaultDict(defaultdict):
    def __init__(self, func):
        super(MyDefaultDict, self).__init__(self._func)
        self.func = func

    def _func(self):
        return self.func(self.cur_key)

    def __getitem__(self, key):
        self.cur_key = key
        return super(MyDefaultDict, self).__getitem__(key)
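A cleaner alternative (just a sketch, not from the original answers) is to skip defaultdict entirely and override dict.__missing__, which, unlike the default factory, does receive the missing key; KeyAwareDefaultDict is an illustrative name:
class KeyAwareDefaultDict(dict):
    """dict subclass whose default value can depend on the missing key."""
    def __init__(self, default_factory, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self.default_factory = default_factory

    def __missing__(self, key):
        # dict.__getitem__ calls this only when the key is absent
        value = self.default_factory(key)
        self[key] = value
        return value

food = KeyAwareDefaultDict(lambda key: "{} food".format(key))
print(food['apple'])    # -> apple food
print(food['banana'])   # -> banana food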

Method inside a method in Python

I have seen source code where more than one method is called on an object, e.g. x.y().z(). Can someone please explain this to me? Does this mean that z() is inside y(), or what?
This calls the method y() on the object x, then the method z() is called on the result of y(), and the entire expression evaluates to the result of z().
For example
friendsFavePizzaTopping = person.getBestFriend().getFavoritePizzaTopping()
This results in friendsFavePizzaTopping being the person's best friend's favorite pizza topping.
Important to note: getBestFriend() must return an object that has the method getFavoritePizzaTopping(). If it does not, an AttributeError will be thrown.
Each method is evaluated in turn, left to right. Consider:
>>> s='HELLO'
>>> s.lower()
'hello'
>>> s='HELLO '
>>> s.lower()
'hello '
>>> s.lower().strip()
'hello'
>>> s.lower().strip().upper()
'HELLO'
>>> s.lower().strip().upper().replace('H', 'h')
'hELLO'
The requirement is that the object to the left in the chain must provide the method called on the right. Often that means the objects are similar types -- or at least share compatible methods or an understood conversion.
As an example, consider this class:
class Foo:
    def __init__(self, name):
        self.name = name

    def m1(self):
        return Foo(self.name + '=>m1')

    def m2(self):
        return Foo(self.name + '=>m2')

    def __repr__(self):
        return '{}: {}'.format(id(self), self.name)

    def m3(self):
        return .25  # return is no longer a Foo
Notice that, as with an immutable type, each method returns a new object (a new Foo for m1 and m2, or a float for m3). Now create a Foo and try those methods:
>>> foo = Foo('init')
>>> foo
4463545376: init
>>> foo.m1()
4463545304: init=>m1
^^^^ different object id
>>> foo
4463545376: init
^^^^ foo still the same because you need to assign it to change
Now assign:
>>> foo=foo.m1().m2()
>>> foo
4464102576: init=>m1=>m2
Now use m3() and it will be a float; not a Foo anymore:
>>> foo=foo.m1().m2().m3()
>>> foo
0.25
Now a float -- can't use foo methods anymore:
>>> foo.m1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'float' object has no attribute 'm1'
But you can use float methods:
>>> foo.as_integer_ratio()
(1, 4)
In the case of:
x.y().z()
You're almost always looking at immutable objects. Methods on mutable objects usually don't return anything you could call further methods on (for the most part; I'm simplifying). For instance...
class x:
    def __init__(self):
        self.y_done = False
        self.z_done = False

    def y(self):
        new_x = x()
        new_x.y_done = True
        return new_x

    def z(self):
        new_x = x()
        new_x.z_done = True
        return new_x
You can see that each of x.y and x.z returns an x object. That object is used to make the consecutive call, e.g. in x.y().z(), x.z is not called on x, but on x.y().
x.y().z() =>
tmp = x.y()
result = tmp.z()
In @dawg's excellent example, he's using strings (which are immutable in Python) whose methods return strings.
string = 'hello'
string.upper() # returns a NEW string with value "HELLO"
string.upper().replace("E","O") # returns a NEW string that's based off "HELLO"
string.upper().replace("E","O") + "W"
# "HOLLOW"
The . "operator" is Python syntax for attribute access. x.y is (nearly) identical to
getattr(x, 'y')
so x.y() is (nearly) identical to
getattr(x, 'y')()
(I say "nearly identical" because it's possible to customize attribute access for a user-defined class. From here on out, I'll assume no such customization is done, and you can assume that x.y is in fact identical to getattr(x, 'y').)
If the thing that x.y() returns has an attribute z such that
foo = getattr(x, 'y')
bar = getattr(foo(), 'z')
is legal, then you can chain the calls together without needing the name foo in the middle:
bar = getattr(getattr(x, 'y')(), 'z')
Converting back to dot notation gives you
bar = getattr(x.y(), 'z')
or simply
bar = x.y().z()
x.y().z() means that the x object has the method y(), and the object returned by x.y() has the method z(). If you first want to apply the method y() to x and then apply the method z() to the result, you write x.y().z(). This is like:
val = x.y()
result = val.z()
Example:
my_dict = {'key':'value'}
my_dict is a dict object. my_dict.get('key') returns 'value', which is a str object. Now I can apply any str method to it, like:
my_dict.get('key').upper()
This will return 'VALUE'.
That is (sometimes a sign of) bad code.
It violates the Law of Demeter. Here is a quote from Wikipedia explaining what is meant:
Each unit should have only limited knowledge about other units: only units "closely" related to the current unit.
Each unit should only talk to its friends; don't talk to strangers.
Only talk to your immediate friends.
Suppose you have a car, which itself has an engine:
class Car:
    def __init__(self):
        self._engine = None

    @property
    def engine(self):
        return self._engine

    @engine.setter
    def engine(self, value):
        self._engine = value

class Porsche_engine:
    def start(self):
        print("starting")
So if you make a new car and set the engine to Porsche you could do the following:
>>> from car import *
>>> c=Car()
>>> e=Porsche_engine()
>>> c.engine=e
>>> c.engine.start()
starting
If you are making this call from another object, that object has knowledge not only of the Car but also of the Engine, which is bad design.
Additionally, if you do not know whether a Car has an engine, calling start directly
>>> c=Car()
>>> c.engine.start()
may result in an error:
AttributeError: 'NoneType' object has no attribute 'start'
Edit:
To avoid (further) misunderstandings and misreadings of what I am saying, there are two usages:
1) as I pointed out, an object calling methods on another object that was returned by a third object is a violation of the Law of Demeter. This is one way to read the question.
2) an exception to that is method chaining, which is not bad design.
A better design would be for the Car itself to have a start() method that delegates to the engine, as sketched below.
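A minimal sketch of that delegating design, reusing the class names from above (the constructor argument is my own addition):
class Porsche_engine:
    def start(self):
        print("starting")

class Car:
    def __init__(self, engine=None):
        self._engine = engine

    def start(self):
        # the caller only talks to Car; Car delegates to its engine
        if self._engine is None:
            raise RuntimeError("this car has no engine")
        self._engine.start()

>>> c = Car(Porsche_engine())
>>> c.start()
starting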
