Extending the built-in Python dict class

I want to create a class that extends dict's functionality. This is my code so far:
class Masks(dict):
    def __init__(self, positive=[], negative=[]):
        self['positive'] = positive
        self['negative'] = negative
I want to have two predefined arguments in the constructor: a list of positive masks and a list of negative masks. With this code I can run
m = Masks()
and a new masks dictionary object is created - that's fine. But I'd like to be able to create these mask objects just like I can with dicts:
d = dict(one=1, two=2)
But this fails with Masks:
>>> n = Masks(one=1, two=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() got an unexpected keyword argument 'two'
I should probably call the parent constructor somewhere in Masks.__init__. I tried it with **kwargs, passing them into the parent constructor, but something still went wrong. Could someone point out what I should add here?

You must call the superclass __init__ method. And if you want to be able to use the Masks(one=1, ..) syntax then you have to use **kwargs:
In [1]: class Masks(dict):
   ...:     def __init__(self, positive=(), negative=(), **kwargs):
   ...:         super(Masks, self).__init__(**kwargs)
   ...:         self['positive'] = list(positive)
   ...:         self['negative'] = list(negative)
   ...:
In [2]: m = Masks(one=1, two=2)
In [3]: m['one']
Out[3]: 1
A general note: do not subclass built-ins!!!
It seems an easy way to extend them but it has a lot of pitfalls that will bite you at some point.
A safer way to extend a built-in is to use delegation, which gives better control over the subclass behaviour and avoids many pitfalls of inheriting from the built-ins. (Note that by implementing __getattr__ it is possible to avoid reimplementing many methods explicitly.)
Inheritance should be used as a last resort when you want to pass the object into some code that does explicit isinstance checks.
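As a concrete illustration of the delegation approach, here is a minimal sketch (the wrapper below and the set of methods it forwards are illustrative choices, not the only way to do it):

```python
class Masks:
    """Wraps a dict instead of subclassing it (delegation)."""

    def __init__(self, positive=(), negative=(), **kwargs):
        self._data = dict(kwargs)
        self._data['positive'] = list(positive)
        self._data['negative'] = list(negative)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so this
        # forwards dict methods (get, keys, items, ...) to the
        # wrapped dict without reimplementing them one by one.
        return getattr(self._data, name)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

m = Masks(one=1)
print(m['positive'])  # []
print(m.get('one'))   # 1
```

Only the dunder methods you actually need (__getitem__, __setitem__, __len__, __iter__, ...) have to be written by hand, since Python looks those up on the class rather than the instance.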

Since all you want is a regular dict with predefined entries, you can use a factory function.
def mask(*args, **kw):
    """Create a mask dict using the same signature as dict(),
    defaulting 'positive' and 'negative' to empty lists.
    """
    d = dict(*args, **kw)
    d.setdefault('positive', [])
    d.setdefault('negative', [])
    return d

mask(one=1)  # {'one': 1, 'positive': [], 'negative': []}

Related

Is there a reason why something like `list[]` raises `SyntaxError` in Python?

Let's say that I want to implement my custom list class, and I want to override __getitem__ so that the item parameter can be initialized with a default None, and behave accordingly:
class CustomList(list):
    def __init__(self, iterable, default_index):
        self.default_index = default_index
        super().__init__(iterable)

    def __getitem__(self, item=None):
        if item is None:
            item = self.default_index
        return super().__getitem__(item)

iterable = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
my_list = CustomList(iterable, 2)
This allows for my_list[None], but it would be awesome to have something like my_list[] inherently use the default argument.
Unfortunately that raises SyntaxError, so I'm assuming that the statement is illegal at the grammar level...my question is: why? Would it conflict with some other statements?
I'm very curious about this, so thanks a bunch to anyone willing to explain!
It's not syntactically useful. There isn't a useful way to programmatically use my_list[] without literally hard-coding it as such. A single piece of code can't sometimes have a variable between the brackets and other times not. In that case, why not just have a property that gets the default?
@property
def default(self):
    return super().__getitem__(self.default_index)

@default.setter
def default(self, val):
    super().__setitem__(self.default_index, val)
object.__getitem__(self, val) is defined to have a required positional argument. Python is dynamic and so you can get away with changing that call signature, but that doesn't change how all the other code uses it.
All Python operators have a magic method behind them, and it's always the case that the magic method could expose more features than the operator. Why not let + have a default, so that a = b + would be legal? Once again, that would not be syntactically useful; you could just expose a function if you wanted to do that.
__getitem__ always takes exactly one argument. You can kind of pass multiple arguments, but this actually just packs them into a tuple:
>>> a = []
>>> a[1, 2]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list indices must be integers or slices, not tuple
Note the "not tuple" in the error message.
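A quick way to see this packing behaviour is a throwaway class (hypothetical, for illustration only) whose __getitem__ simply returns whatever it receives:

```python
class Probe:
    def __getitem__(self, key):
        # Everything between the brackets arrives as one object;
        # comma-separated indices are packed into a single tuple.
        return key

p = Probe()
print(p[5])           # 5
print(p[1, 2])        # (1, 2)
print(p[1:3, 'x'])    # (slice(1, 3, None), 'x')
```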

Override .T (transpose) in subclass of numpy ndarray

I have a three dimensional dataset where the 1st dimension gives the type of the variable and the 2nd and 3rd dimensions are spatial indexes. I am attempting to make this data more user friendly by creating a subclass of ndarray containing the data, but with attributes that have sensible names that point to the appropriate variable dimension. One of the variable types is temperature, which I would like to represent with the attribute .T. I attempt to set it like this:
self.T = self[8,:,:]
However, this clashes with the underlying numpy attribute for transposing an array. Normally, overriding a class attribute is trivial, however in this case I get an exception when I try to re-write the attribute. The following is a minimal example of the same problem:
import numpy as np

class foo(np.ndarray):
    def __new__(cls, input_array):
        obj = np.asarray(input_array).view(cls)
        obj.T = 100.0
        return obj

foo([1, 2, 3, 4])
results in:
Traceback (most recent call last):
File "tmp.py", line 9, in <module>
foo([1,2,3,4])
File "tmp.py", line 6, in __new__
obj.T = 100.0
AttributeError: attribute 'T' of 'numpy.ndarray' objects is not writable
I have tried using setattr(obj, 'T', 100.0) to set the attribute, but the result is the same.
Obviously, I could just give up and name my attribute .temperature, or something else. However .T will be much more eloquent for the subsequent mathematical expressions which will be done with these data objects. How can I force python/numpy to override this attribute?
For np.matrix subclass, as defined in np.matrixlib.defmatrix:
@property
def T(self):
    """
    Returns the transpose of the matrix.
    ....
    """
    return self.transpose()
T is not a conventional attribute that lives in a __dict__ or __slots__. In fact, you can see this immediately because the result of T changes if you modify the shape or contents of an array.
Since ndarray is a class written in C, it has special descriptors for the dynamic attributes it exposes. T is one of these dynamic attributes, defined as a PyGetSetDef structure. You can't override it by simple assignment, because there is nothing to assign to, but you can make a descriptor that overrides it at the class level.
As @hpaulj's answer suggests, the simplest solution may be to use a property to implement the descriptor protocol for you:
import numpy as np

class foo(np.ndarray):
    @property
    def T(self):
        return self[8, :, :]
More complicated alternatives would be to make your own descriptor type, or even to extend the class in C and write your own PyGetSetDef structure. It all depends on what you are trying to achieve.
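As a sketch of the custom-descriptor route (the Channel class, the Fields name, and the index 8 are illustrative assumptions, echoing the question's temperature plane):

```python
import numpy as np

class Channel:
    """Descriptor exposing a fixed first-axis slice of the array."""

    def __init__(self, index):
        self.index = index

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self         # accessed on the class itself
        return obj[self.index]  # accessed on an instance

class Fields(np.ndarray):
    # A class-level descriptor shadows ndarray's read-only T getset,
    # which is exactly what instance assignment could not do.
    T = Channel(8)

data = np.arange(36).reshape(9, 2, 2).view(Fields)
print(data.T.shape)  # (2, 2)
```

The descriptor lives in the subclass's __dict__, so attribute lookup finds it before ndarray's PyGetSetDef entry for T.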
Following Mad Physicist and hpaulj's lead, the solution to my minimal working example is:
import numpy as np

class foo(np.ndarray):
    def __new__(cls, input_array):
        obj = np.asarray(input_array).view(cls)
        return obj

    @property
    def T(self):
        return 100.0

x = foo([1, 2, 3, 4])
print("T is", x.T)
Which results in:
T is 100.0

How to Create a Python Method at Runtime?

The following code works fine and shows a way to create attributes and methods at runtime:
class Pessoa:
    pass

p = Pessoa()
p.nome = 'fulano'
if hasattr(p, 'nome'):
    print(p)

p.get_name = lambda self: 'Sr.{}'.format(self.nome)
But I think my way of creating methods is not correct. Is there another way to create a method dynamically?
[Although this has really been answered in Steven Rumbalski's comment, pointing to two independent questions, I'm adding a short combined answer here.]
Yes, you're right that this does not correctly define a method.
>>> class C:
...     pass
...
>>> p = C()
>>> p.name = 'nickie'
>>> p.get_name = lambda self: 'Dr. {}'.format(self.name)
>>> p.get_name()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: <lambda>() takes exactly 1 argument (0 given)
Here's how you can call the function that is stored in object p's attribute called get_name:
>>> p.get_name(p)
'Dr. nickie'
For properly defining an instance method dynamically, take a look at the answers to a relevant question.
If you want to define the method dynamically on the class, you have to define it as:
>>> C.get_name = lambda self: 'Dr. {}'.format(self.name)
Although the method will be added to existing objects, this will not work for p (as it already has its own attribute get_name). However, for a new object:
>>> q = C()
>>> q.name = 'somebody'
>>> q.get_name()
'Dr. somebody'
And (obviously), the method will fail for objects that don't have a name attribute:
>>> r = C()
>>> r.get_name()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <lambda>
AttributeError: C instance has no attribute 'name'
There are two ways to dynamically create methods in Python 3:
create a method on the class itself: just assign a function to a class attribute; it is made accessible to all objects of the class, even those created before the method was added:
>>> class A:  # create a class
        def __init__(self, v):
            self.val = v

>>> a = A(1)  # create an instance
>>> def double(self):  # define a plain function
        self.val *= 2

>>> A.double = double  # makes it a method on the class
>>> a.double()  # use it...
>>> a.val
2
create a method on an instance of the class. It is possible in Python 3 thanks to the types module:
>>> import types
>>> def add(self, x):  # create a plain function
        self.val += x

>>> a.add = types.MethodType(add, a)  # make it a method on an instance
>>> a.add(2)
>>> a.val
4
>>> b = A(1)
>>> b.add(2) # chokes on another instance
Traceback (most recent call last):
File "<pyshell#55>", line 1, in <module>
b.add(2)
AttributeError: 'A' object has no attribute 'add'
>>> type(a.add) # it is a true method on a instance
<class 'method'>
>>> type(a.double)
<class 'method'>
A slight variation on method 1 (on class) can be used to create static or class methods:
>>> def static_add(a, b):
        return a + b

>>> A.static_add = staticmethod(static_add)
>>> a.static_add(3, 4)
7
>>> def show_class(cls):
        return str(cls)

>>> A.show_class = classmethod(show_class)
>>> b.show_class()
"<class '__main__.A'>"
Here is how I add methods to classes imported from a library. If I modified the library I would lose the changes at the next library upgrade. I can't create a new derived class because I can't tell the library to use my modified instance. So I monkey patch the existing classes by adding the missing methods:
# Import the standard classes of the shapely library
import shapely.geometry

# Define a function that returns the points of the outer
# and the inner polygons of a Polygon
def _coords_ext_int_polygon(self):
    exterior_coords = [self.exterior.coords[:]]
    interior_coords = [interior.coords[:] for interior in self.interiors]
    return exterior_coords, interior_coords

# Define a function that returns the points of the outer
# and the inner polygons of a MultiPolygon
def _coords_ext_int_multi_polygon(self):
    if self.is_empty:
        return [], []
    exterior_coords = []
    interior_coords = []
    for part in self:
        e, i = part.coords_ext_int()
        exterior_coords += e
        interior_coords += i
    return exterior_coords, interior_coords
# Define a function that saves outer and inner points to a .pt file
def _export_to_pt_file(self, file_name=r'C:\WizardTemp\test.pt'):
    '''Create a .pt file in the format that pleases thinkdesign.'''
    e, i = self.coords_ext_int()
    with open(file_name, 'w') as f:
        for rings in (e, i):
            for ring in rings:
                for x, y in ring:
                    f.write('{} {} 0\n'.format(x, y))

# Add the functions to the definition of the classes
# by assigning the functions to new class members
shapely.geometry.Polygon.coords_ext_int = _coords_ext_int_polygon
shapely.geometry.Polygon.export_to_pt_file = _export_to_pt_file
shapely.geometry.MultiPolygon.coords_ext_int = _coords_ext_int_multi_polygon
shapely.geometry.MultiPolygon.export_to_pt_file = _export_to_pt_file
Notice that the same function definition can be assigned to two different classes.
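That sharing works for any classes, not just shapely's. A minimal self-contained sketch (the classes and the describe function below are hypothetical, invented for illustration):

```python
def describe(self):
    # One plain function attached to two unrelated classes; `self`
    # is whichever instance the method was looked up on.
    return "{} with value {}".format(type(self).__name__, self.value)

class A:
    def __init__(self):
        self.value = 1

class B:
    def __init__(self):
        self.value = 2

# Assign the same function to both classes; each class turns it
# into a bound method for its own instances.
A.describe = describe
B.describe = describe

print(A().describe())  # A with value 1
print(B().describe())  # B with value 2
```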
EDIT
In my example I'm not adding methods to a class of mine, I'm adding methods to shapely, an open source library I installed.
In your post you use p.get_name = ... to add a member to the object instance p. I first define a function _xxx(), then I add it to the class definition with class.xxx = _xxx.
I don't know your use case, but usually you add variables to instances and you add methods to class definitions, that's why I am showing you how to add methods to the class definition instead of to the instance.
Shapely manages geometric objects and offers methods to calculate the area of the polygons, to add or subtract polygons to each other, and many other really cool things.
My problem is that I need some methods that shapely doesn't provide out of the box.
In my example I created my own method that returns the list of points of the outer profile and the list of points of the inner profiles. I made two methods, one for the Polygon class and one for the MultiPolygon class.
I also need a method to export all the points to a .pt file format. In this case I made only one method that works with both the Polygon and the MultiPolygon classes.
This code is inside a module called shapely_monkeypatch.py (see monkey patch). When the module is imported the functions with the name starting by _ are defined, then they are assigned to the existing classes with names without _. (It is a convention in Python to use _ for names of variables or functions intended for internal use only.)
I shall be maligned, pilloried, and excoriated, but... here is one way I make a keymap for an alphabet of methods within __init__(self).
def __init__(this):
    this.keymap = {}
    for c in "abcdefghijklmnopqrstuvwxyz":
        this.keymap[ord(c)] = eval(f"this.{c}")
Now, with appropriate code, I can press a key in pygame to execute the mapped method.
It is easy enough to use lambdas so one does not even need pre-existing methods... for instance, if __str__(this) is a method, capital P can print the instance string representation using this code:
this.keymap[ord('P')] = lambda: print(this)
but everyone will tell you that eval is bad.
I live to break rules and color outside the boundaries.
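For those who prefer to stay inside the lines: getattr performs the same attribute lookup as eval(f"this.{c}") without evaluating an arbitrary string. A sketch with a hypothetical Commands class:

```python
class Commands:
    def a(self):
        return "action a"

    def b(self):
        return "action b"

    def __init__(self):
        # getattr(self, c) resolves the same bound method that
        # eval(f"self.{c}") would, minus the code execution risk.
        self.keymap = {ord(c): getattr(self, c) for c in "ab"}

cmd = Commands()
print(cmd.keymap[ord('a')]())  # action a
```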

Is avoiding expensive __init__ a good reason to use __new__?

In my project, we have a class based on set. It can be initialised from a string, or an iterable (eg tuple) of strings, or other custom classes. When initialised with an iterable it converts each item to a particular custom class if it is not one already.
Because it can be initialised from a variety of data structures a lot of the methods that operate on this class (such as __and__) are liberal in what they accept and just convert their arguments to this class (ie initialise a new instance). We are finding this is rather slow, when the argument is already an instance of the class, and has a lot of members (it is iterating through them all and checking that they are the right type).
I was thinking that to avoid this, we could add a __new__ method to the class and just if the argument passed in is already an instance of the class, return it directly. Would this be a reasonable use of __new__?
Adding a __new__ method will not solve your problem. From the documentation for __new__:
If __new__() returns an instance of cls, then the new instance’s
__init__() method will be invoked like __init__(self[, ...]),
where self is the new instance and the remaining arguments are the
same as were passed to __new__().
In other words, returning the same instance will not prevent Python from calling __init__.
You can verify this quite easily:
In [20]: class A:
    ...:     def __new__(cls, arg):
    ...:         if isinstance(arg, cls):
    ...:             print('here')
    ...:             return arg
    ...:         return super().__new__(cls)
    ...:     def __init__(self, values):
    ...:         self.values = list(values)
In [21]: a = A([1,2,3])
In [22]: A(a)
here
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-c206e38274e0> in <module>()
----> 1 A(a)
<ipython-input-20-5a7322f37287> in __init__(self, values)
6 return super().__new__(cls)
7 def __init__(self, values):
----> 8 self.values = list(values)
TypeError: 'A' object is not iterable
You may be able to make this work if you did not implement __init__ at all, but only __new__. I believe this is what tuple does.
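A minimal sketch of that __new__-only pattern (the Frozen class and its _values attribute are hypothetical): because the class defines no __init__, returning an existing instance from __new__ triggers no re-initialisation:

```python
class Frozen:
    def __new__(cls, values):
        if isinstance(values, cls):
            return values  # reuse; object.__init__ is a no-op here
        self = super().__new__(cls)
        # All setup happens in __new__, since __init__ is not defined.
        self._values = frozenset(values)
        return self

a = Frozen([1, 2, 3])
b = Frozen(a)
print(a is b)  # True
```

Note that CPython lets object.__init__ silently ignore the extra argument only because __new__ is overridden; defining your own __init__ would reintroduce the problem shown above.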
Also, that behaviour would be acceptable only if your class is immutable (tuple does this, for example), because then the result is sensible. If it is mutable you are asking for hidden bugs.
A more sensible approach is to do what set does: __*__ operations operate only on sets, however set also provides named methods that work with any iterable:
In [30]: set([1,2,3]) & [1,2]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-dfd866b6c99b> in <module>()
----> 1 set([1,2,3]) & [1,2]
TypeError: unsupported operand type(s) for &: 'set' and 'list'
In [31]: set([1,2,3]) & set([1,2])
Out[31]: {1, 2}
In [32]: set([1,2,3]).intersection([1,2])
Out[32]: {1, 2}
In this way the user can choose between speed and flexibility of the API.
A simpler approach is the one proposed by unutbu: use isinstance instead of duck-typing when implementing the operations.
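The set convention can be sketched in a few lines (TagSet is a hypothetical subclass, standing in for the project's class): operators stay strict, while the named method coerces its argument:

```python
class TagSet(set):
    def __and__(self, other):
        if not isinstance(other, set):
            return NotImplemented  # strict: operators reject non-sets
        return TagSet(set.__and__(self, other))

    def intersection(self, other):
        if not isinstance(other, set):
            other = set(other)     # flexible: named method converts
        return TagSet(set.__and__(self, other))

s = TagSet({1, 2, 3})
print(sorted(s & TagSet({2, 3})))      # [2, 3]
print(sorted(s.intersection([2, 4])))  # [2]
```

Callers who already hold a TagSet pay no conversion cost with &, and callers with an arbitrary iterable can still use intersection.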

Inheriting from unittest.TestCase for non-test functionality

I want to write a class to check sets using exactly the behavior that unittest.TestCase.assertEqual exhibits for testing set equality. It automatically prints a nice message saying which elements are only in the first set and which are only in the second set.
I realize I could implement similar behavior, but since it's already done nicely with unittest.TestCase.assertEqual, I'd prefer to just utilize that (so please no answers that say the unhelpful and already obvious (but not applicable in this case) advice "don't solve this with unittest.TestCase")
Here is my code for the SetChecker class:
import unittest

class SetChecker(unittest.TestCase):
    """
    SetChecker(set1, set2) creates a set checker from the two passed Python set
    objects. Printing the SetChecker uses unittest.TestCase.assertEqual to test
    if the sets are equal and automatically reveal the elements that are in one
    set but not the other if they are unequal. This provides an efficient way
    to detect differences in possibly large set objects. Note that this is not
    a unittest object, just a wrapper to gain access to the helpful behavior of
    unittest.TestCase.assertEqual when used on sets.
    """
    EQUAL_MSG = "The two sets are equivalent."

    def __init__(self, set1, set2, *args, **kwargs):
        assert isinstance(set1, set)
        assert isinstance(set2, set)
        super(self.__class__, self).__init__(*args, **kwargs)
        try:
            self.assertEqual(set1, set2)
            self._str = self.EQUAL_MSG
            self._str_lines = [self._str]
            self._indxs = None
        except AssertionError, e:
            self._str = str(e)
            self._str_lines = self._str.split('\n')
            # Find the locations where a line starts with 'Items '.
            # This is the fixed behavior of unittest.TestCase.
            self._indxs = [i for i, y in enumerate(self._str_lines)
                           if y.startswith('Items ')]

    def __repr__(self):
        """
        Convert SetChecker object into a string to be printed.
        """
        return self._str

    __str__ = __repr__  # Ensure that `print` and __repr__ do the same thing.

    def runTest(self):
        """
        Required by any sub-class of unittest.TestCase. Solely used to inherit
        from TestCase and is not implemented for any behavior.
        """
        pass

    def in_first_set_only(self):
        """
        Return a list of the items reported to exist only in the first set. If
        the sets are equivalent, returns a string saying so.
        """
        return (set(self._str_lines[1:self._indxs[1]])
                if self._indxs is not None else self.EQUAL_MSG)

    def in_second_set_only(self):
        """
        Return a list of the items reported to exist only in the second set. If
        the sets are equivalent, returns a string saying so.
        """
        return (set(self._str_lines[1 + self._indxs[1]:])
                if self._indxs is not None else self.EQUAL_MSG)
This works fine when I use it in IPython:
In [1]: from util.SetChecker import SetChecker
In [2]: sc = SetChecker(set([1,2,3, 'a']), set([2,3,4, 'b']))
In [3]: sc
Out[3]:
Items in the first set but not the second:
'a'
1
Items in the second set but not the first:
'b'
4
In [4]: print sc
Items in the first set but not the second:
'a'
1
Items in the second set but not the first:
'b'
4
In [5]: sc.in_first_set_only()
Out[5]: set(["'a'", '1'])
In [6]: sc.in_second_set_only()
Out[6]: set(["'b'", '4'])
But now I also want to write unit tests for this class. So I've made a TestSetChecker class. Here is that code:
import unittest

from util.SetChecker import SetChecker

class TestSetChecker(unittest.TestCase):
    """
    Test class for providing efficient comparison and printing of
    the difference between two sets.
    """
    def setUp(self):
        """
        Create examples for testing.
        """
        self.set1 = set([1, 2, 3, 'a'])
        self.set2 = set([2, 3, 4, 'b'])
        self.set3 = set([1, 2])
        self.set4 = set([1, 2])
        self.bad_arg = [1, 2]
        self.expected_first = set(['1', 'a'])
        self.expected_second = set(['4', 'b'])
        self.expected_equal_message = SetChecker.EQUAL_MSG
        self.expected_print_string = (
            "Items in the first set but not the second:\n'a'\n1\n"
            "Items in the second set but not the first:\n'b'\n4")

    def test_init(self):
        """
        Test constructor, assertions on args, and that instance is of proper
        type and has expected attrs.
        """
        s = SetChecker(self.set1, self.set2)
        self.assertIsInstance(s, SetChecker)
        self.assertTrue(hasattr(s, "_str"))
        self.assertTrue(hasattr(s, "_str_lines"))
        self.assertTrue(hasattr(s, "_indxs"))
        self.assertEqual(s.__repr__, s.__str__)
        self.assertRaises(AssertionError, s, *(self.bad_arg, self.set1))

    def test_repr(self):
        """
        Test that self-printing is correct.
        """
        s1 = SetChecker(self.set1, self.set2)
        s2 = SetChecker(self.set3, self.set4)
        self.assertEqual(str(s1), self.expected_print_string)
        self.assertEqual(str(s2), self.expected_equal_message)

    def test_print(self):
        """
        Test that calling `print` on SetChecker is correct.
        """
        s1 = SetChecker(self.set1, self.set2)
        s2 = SetChecker(self.set3, self.set4)
        s1_print_output = s1.__str__()
        s2_print_output = s2.__str__()
        self.assertEqual(s1_print_output, self.expected_print_string)
        self.assertEqual(s2_print_output, self.expected_equal_message)

    def test_in_first_set_only(self):
        """
        Test that method gives list of set elements found only in first set.
        """
        s1 = SetChecker(self.set1, self.set2)
        s2 = SetChecker(self.set3, self.set4)
        fs1 = s1.in_first_set_only()
        fs2 = s2.in_first_set_only()
        self.assertEqual(fs1, self.expected_first)
        self.assertEqual(fs2, self.expected_equal_message)

    def test_in_second_set_only(self):
        """
        Test that method gives list of set elements found only in second set.
        """
        s1 = SetChecker(self.set1, self.set2)
        s2 = SetChecker(self.set3, self.set4)
        ss1 = s1.in_second_set_only()
        ss2 = s2.in_second_set_only()
        self.assertEqual(ss1, self.expected_second)
        self.assertEqual(ss2, self.expected_equal_message)

if __name__ == "__main__":
    unittest.main()
As far as I can tell, TestSetChecker has no differences from the many other unit test classes that I write (apart from the specific functionality it is testing for).
Yet, I am seeing a very unusual __init__ error when I try to execute the file containing the unit tests:
EMS#computer ~/project_dir/test $ python TestSetChecker.py
Traceback (most recent call last):
File "TestSetChecker.py", line 84, in <module>
unittest.main()
File "/opt/python2.7/lib/python2.7/unittest/main.py", line 94, in __init__
self.parseArgs(argv)
File "/opt/python2.7/lib/python2.7/unittest/main.py", line 149, in parseArgs
self.createTests()
File "/opt/python2.7/lib/python2.7/unittest/main.py", line 155, in createTests
self.test = self.testLoader.loadTestsFromModule(self.module)
File "/opt/python2.7/lib/python2.7/unittest/loader.py", line 65, in loadTestsFromModule
tests.append(self.loadTestsFromTestCase(obj))
File "/opt/python2.7/lib/python2.7/unittest/loader.py", line 56, in loadTestsFromTestCase
loaded_suite = self.suiteClass(map(testCaseClass, testCaseNames))
TypeError: __init__() takes at least 3 arguments (2 given)
The directory with the Python unittest source code is read-only in my environment, so I can't add pdb or even print statements there to see what testCaseClass or testCaseNames are at this point where some __init__ fails.
But I can't see any places in my code where I'm failing to provide needed arguments to any __init__ method. I'm wondering if this has something to do with some behind-the-scenes magic with classes that inherit from unittest and with the fact that I'm importing and instantiating a class (SetChecker) within the file that is to be executed for unit tests.
Maybe it checks for all classes in the existing namespace that inherit from TestCase? If so, how do you unit-test the unit tests?
I also tried to first make SetChecker inherit from object and tried to use TestCase like a mix-in, but that created lots of MRO errors and seemed more headache than it was worth.
I've tried searching for this but it's a difficult error to search for (since it does not appear to be a straightforward problem with __init__ arguments).
I was able to work around this by making SetChecker inherit from object only, and then inside of SetChecker providing an internal class that inherits from unittest.TestCase.
The problem is that unittest.main inspects the whole namespace of the module it is run from. Any class it finds in that module that inherits from unittest.TestCase will get the full test-suite treatment (it will try to construct instances of the class for each test_ method it can find, or just for runTest if it finds no test_ methods).
In my case, since the set arguments are required, whatever it is that unittest.main is doing, it's passing some argument (probably the name of the function to treat as the test, in this case the string "runTest") but failing to pass the second required argument. Even if this worked with the signature of my class (e.g. suppose that I replaced the two distinct arguments set1 and set2 with a tuple of 2 sets), it would then immediately fail once it tried to do set operations with that string.
There doesn't appear to be an easy way to tell unittest.main to ignore a certain class or classes. So, by making SetChecker just an object that has a TestCase inside of it, unittest.main no longer finds that TestCase and no longer cares.
There was one other bug: in my test_init function, I use assertRaises which expects a callable, but had never given my SetChecker class a __call__ function.
Here's the modification to the SetChecker class that fixed this for me:
class SetChecker(object):
    """
    SetChecker(set1, set2) creates a set checker from the two passed Python set
    objects. Printing the SetChecker uses unittest.TestCase.assertEqual to test
    if the sets are equal and automatically reveal the elements that are in one
    set but not the other if they are unequal. This provides an efficient way
    to detect differences in possibly large set objects. Note that this is not
    a unittest object, just a wrapper to gain access to the helpful behavior of
    unittest.TestCase.assertEqual when used on sets.
    """
    EQUAL_MSG = "The two sets are equivalent."

    class InternalTest(unittest.TestCase):
        def runTest(self): pass

    def __init__(self, set1, set2):
        assert isinstance(set1, set)
        assert isinstance(set2, set)
        self.int_test = SetChecker.InternalTest()
        try:
            self.int_test.assertEqual(set1, set2)
            self._str = self.EQUAL_MSG
            self._str_lines = [self._str]
            self._indxs = None
        except AssertionError, e:
            self._str = str(e)
            self._str_lines = self._str.split('\n')
            # Find the locations where a line starts with 'Items '.
            # This is the fixed behavior of unittest.TestCase.
            self._indxs = [i for i, y in enumerate(self._str_lines)
                           if y.startswith('Items ')]

    @classmethod
    def __call__(klass, *args, **kwargs):
        """
        Makes the class callable such that calling it like a function is the
        same as constructing a new instance.
        """
        return klass(*args, **kwargs)
# Everything else below is the same...
