I am confused about how inherited class attributes are initialized: I get an AttributeError on a child attribute that I thought was well defined.
I noticed that moving the super call to the parent to after the attribute definition does not yield the error, but I do not understand why.
I have not been able to reproduce the error without using processes (I can post that test code too, but I think this is lengthy enough); without processes, it seems that the order of the super call does not matter.
Edit: It is probably starting a process before counter is even defined that yields the exception, but the problem does not happen if I use Thread instead of Process.
I might be missing something about how initializers work, or be doing something wrong.
Why do I get an AttributeError?
Why does swapping the two lines solve (or mask) the problem?
Why does this problem not happen when using threads instead of processes?
I am using the code below.
Thanks for your help.
Here is the code that reproduces the issue. I tried my best to make it stand-alone.
from multiprocessing import Process
from multiprocessing import Queue

class Parent(object):
    def __init__(self):
        self._stuff_queue = Queue()
        self._process = Process(target=self._do_something)
        self._process.start()

    def _do_something(self):
        while True:
            stuff = self._stuff_queue.get()
            if stuff is not None:
                self.something(stuff)
            else:
                break

    def feed(self, stuff):
        self._stuff_queue.put(stuff)

    def something(self, stuff):
        raise NotImplementedError("Implement this something !")

class Child(Parent):
    def __init__(self):
        # --- Swapping those two lines avoids getting the AttributeError --- #
        super(Child, self).__init__()  # Same thing using Parent.__init__(self)
        self.counter = 0

    def something(self, stuff):
        self.counter += 1
        print "Got stuff"

c = Child()
c.feed("Hi SO!")
c.feed(None)  # Just to stop
Here is the message I get:
  File "/home/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "test_process.py", line 14, in _do_something
    self.something(stuff)
  File "test_process.py", line 30, in something
    self.counter += 1
AttributeError: 'Child' object has no attribute 'counter'
I am trying to mock the super class of a class with a setup similar to this:
File parent.py
class Parent:
    def write(self):
        *some code*
File child.py
class Child(Parent):
    def write(self):
        *more code*
        super().write()
File mock_parent.py
class MockParent(Parent):
    def write(self):
        ...
My goal would be to replace Parent with MockParent to improve testing of Child, by eliminating real hardware resources.
So far I have tried to use mock.patch without success. I tried patching the imports, the bases, and super, but none of these attempts was successful. I could replace the internals of the Child object, but I would prefer a cleaner solution through patching, if possible.
The biggest challenge is that the call to the method write of the parent class (by super().write()) is inside the subclass method, otherwise I could simply assign it the function I want to be called.
For the moment I have found a solution, but it requires a change to the code of the write method of the class Child. The modified path must be used only during the test; in production the original code runs.
Below is the file test_code.py, which contains both the production code and the test code:
import unittest
from unittest.mock import Mock

class Parent:
    def write(self):
        print("parent.write()")

class Child(Parent):
    def write(self, *super_class):
        print("child.write()")
        # ----> Here I have changed your code
        if len(super_class) > 0:
            super_class[0].write()
        else:
            # ----> ... here you find your production code
            super().write()

class MockParent(Parent):
    def write(self):
        print("mock_parent.write()")

class MyTestCase(unittest.TestCase):
    def test_with_parent_mock(self):
        print("Execution of test_with_parent_mock")
        mock_parent = Mock(wraps=MockParent())
        child = Child()
        child.write(mock_parent)
        mock_parent.write.assert_called_once()

    def test_with_parent(self):
        print("Execution of test_with_parent")
        child = Child()
        child.write()

if __name__ == '__main__':
    unittest.main()
If you execute this code with the command python test_code.py, you will get the following output:
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
Execution of test_with_parent
child.write()
parent.write()
Execution of test_with_parent_mock
child.write()
mock_parent.write()
The output of the test method test_with_parent_mock() shows that you can substitute the write() method of the superclass with another method defined in MockParent.
In the method test_with_parent(), by contrast, the write() method of the class Child is called normally.
Note about TDD
What I have proposed is only a workaround, and I don't know whether it is suited to your goal, because I see that TDD is among the tags you selected. I hope this code is useful to you nonetheless.
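For completeness, one alternative worth sketching that avoids changing Child at all (this is my own illustration, assuming the goal is only to intercept the parent call rather than run MockParent's logic): super().write() looks up write on the Parent class at call time, so patching the method on Parent with unittest.mock.patch.object intercepts the call made inside Child.write.

```python
from unittest import mock

class Parent:
    def write(self):
        print("parent.write()")

class Child(Parent):
    def write(self):
        print("child.write()")
        super().write()

# super().write() resolves Parent.write at call time, so replacing the
# attribute on the Parent class intercepts the call made inside Child.write.
with mock.patch.object(Parent, 'write') as mock_write:
    Child().write()

mock_write.assert_called_once()
print("parent calls intercepted:", mock_write.call_count)  # 1
```

The patch is reverted automatically when the with-block exits, so other tests still see the real Parent.write.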
I wrote a short example because I can't share my original code.
I think the problem is caused by Python itself.
Anyway, when I start a thread from inside a class, the program crashes. What is the reason, and what is a possible solution?
from threading import Thread
global dicta
dicta = {"number": 0, "name": "John", "animal": "pig"}
class PigMan:
    def __init__(self):
        Thread(target=self.iAmFool).start()
        def iAmFool(self):
            if dicta["number"] == 0:
                print("It's work")
            else:
                print("Wtf")
PigMan()
I expected it to run smoothly, but this is the error:
Traceback (most recent call last):
  File "C:/Users/Pig/Desktop/test.py", line 13, in <module>
    PigMan()
  File "C:/Users/Pig/Desktop/test.py", line 6, in __init__
    Thread(target= self.iAmFool).start()
AttributeError: 'PigMan' object has no attribute 'iAmFool'
Your indentation is off. Python is whitespace-sensitive, thus the indentation of your code is critically important.
Reducing def iAmFool's indentation correctly creates it as part of the class PigMan, as opposed to trying to def it within __init__.
from threading import Thread
global dicta
dicta = {"number": 0, "name": "John", "animal": "pig"}
class PigMan:
    def __init__(self):
        Thread(target=self.iAmFool).start()

    def iAmFool(self):
        if dicta["number"] == 0:
            print("It's work")
        else:
            print("Wtf")
PigMan()
Ok, this one has me tearing my hair out:
I have a multi-process program, with separate workers each working on a given task.
When a KeyboardInterrupt comes, I want each worker to save its internal state to a file, so it can continue where it left off next time.
HOWEVER...
It looks like the dictionary which contains information about the state is vanishing before this can happen!
How? The exit() function is accessing a more globally scoped version of the dictionary... and it turns out that the various run() (and subordinate to run()) functions have been creating their own version of the variable.
Nothing strange about that...
Except...
All of them have been using the self. keyword.
Which, if my understanding is correct, should mean they are always accessing the instance-wide version of the variable... not creating their own!
Here's a simplified version of the code:
import multiprocessing
import atexit
import signal
import sys
import json

class Worker(multiprocessing.Process):
    def __init__(self, my_string_1, my_string_2):
        # Inherit the __init__ from Process, very important or we will get errors
        super(Worker, self).__init__()
        # Make sure we know what to do when called to exit
        atexit.register(self.exit)
        signal.signal(signal.SIGTERM, self.exit)
        self.my_dictionary = {
            'my_string_1' : my_string_1,
            'my_string_2' : my_string_2
        }

    def run(self):
        self.my_dictionary = {
            'new_string' : 'Watch me make weird stuff happen!'
        }
        try:
            while True:
                print(self.my_dictionary['my_string_1'] + " " + self.my_dictionary['my_string_2'])
        except (KeyboardInterrupt, SystemExit):
            self.exit()

    def exit(self):
        # Write the relevant data to file
        info_for_file = {
            'my_dictionary': self.my_dictionary
        }
        print(info_for_file)  # For easier debugging
        save_file = open('save.log', 'w')
        json.dump(info_for_file, save_file)
        save_file.close()
        # Exit
        sys.exit()

if __name__ == '__main__':
    strings_list = ["Hello", "World", "Ehlo", "Wrld"]
    instances = []
    try:
        for i in range(len(strings_list) - 2):
            my_string_1 = strings_list[i]
            my_string_2 = strings_list[i + 1]
            instance = Worker(my_string_1, my_string_2)
            instances.append(instance)
            instance.start()
        for instance in instances:
            instance.join()
    except (KeyboardInterrupt, SystemExit):
        for instance in instances:
            instance.exit()
            instance.close()
On run we get the following traceback...
Process Worker-2:
Process Worker-1:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "<stdin>", line 18, in run
  File "<stdin>", line 18, in run
KeyError: 'my_string_1'
KeyError: 'my_string_1'
In other words, even though the key my_string_1 was explicitly added during init, the run() function is accessing a new version of self.my_dictionary which does not contain that key!
Again, this would be expected if we were dealing with a normal variable (my_dictionary instead of self.my_dictionary) but I thought that self.variables were always instance-wide...
What is going on here?
Your problem can basically be represented by the following:
class Test:
    def __init__(self):
        self.x = 1

    def run(self):
        self.x = 2
        if self.x != 1:
            print("self.x isn't 1!")

t = Test()
t.run()
Note what run is doing.
You overwrite your instance member self.my_dictionary with incompatible data when you write
self.my_dictionary = {
'new_string' : 'Watch me make weird stuff happen!'
}
Then try to use that incompatible data when you say
print(self.my_dictionary['my_string_1']...
It's not clear precisely what your intent is when you overwrite my_dictionary, but that's why you're getting the error. You'll need to rethink your logic.
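If the intent was to add a new key while keeping the ones set in __init__, a small sketch of the distinction (the names mirror the question's, but this is illustrative, not the original code):

```python
class Worker(object):
    def __init__(self):
        self.my_dictionary = {'my_string_1': 'Hello', 'my_string_2': 'World'}

    def run(self):
        # Rebinding (self.my_dictionary = {...}) would discard the keys
        # from __init__; update() merges the new key in alongside them.
        self.my_dictionary.update({'new_string': 'no weird stuff'})

w = Worker()
w.run()
print(sorted(w.my_dictionary))  # ['my_string_1', 'my_string_2', 'new_string']
```

Assignment rebinds the attribute to a brand-new dict; update() mutates the existing one in place.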
In my Python (3.6) program, I have a thread object, like so:
import threading

class MyThread(threading.Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        ...

    def __del__(self):
        ...
        super(type(self), self).__del__()

    def run(self):
        ...
used in the main program like this:
def main():
    my_thread = MyThread()
    my_thread.start()
    ...
    my_thread.join()
But as soon as I try to run this program, I get the following Python crash:
Exception ignored in: <bound method MyThread.__del__ of <MyThread(Thread-6, stopped 1234)>>
Traceback (most recent call last):
  File "c:/my_proj/my_program.py", line 123, in __del__
    super(type(self), self).__del__()
AttributeError: 'super' object has no attribute '__del__'
Why is this, and how can it be fixed?
Is it not allowed to call the __del__() method of super explicitly like this, or what? (Google seems to tell me otherwise, but still won't give me any answer to why this happens)
super(type(self), self) is always wrong. In Python 2, you must explicitly name the current class, e.g. super(MyThread, self). In Python 3, you can simply use super():
class MyThread(threading.Thread):
    def __init__(self):
        super().__init__()
        # ...

    def run(self):
        # ...
That said, if the superclass has no __del__, you'll get this AttributeError. If your base classes have no __del__, you can simply omit yours. There is rarely a good reason to implement __del__ in your class.
If you need controlled cleanup, consider implementing a context manager instead.
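A sketch of what such a context manager might look like (the stop-event pattern is my own illustration, not part of the original program): unlike __del__, the __exit__ method runs deterministically when the with-block ends, so the thread is always signalled and joined.

```python
import threading

class MyThread(threading.Thread):
    def __init__(self):
        super().__init__()
        self._stop_event = threading.Event()

    def run(self):
        # Idle loop that exits as soon as the stop event is set
        while not self._stop_event.wait(0.05):
            pass

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, exc_type, exc, tb):
        # Deterministic cleanup: signal the thread, then wait for it
        self._stop_event.set()
        self.join()

with MyThread() as t:
    pass  # do some work while the thread runs

print("thread joined:", not t.is_alive())  # thread joined: True
```

Cleanup now happens at a well-defined point, instead of whenever (or whether) the garbage collector gets around to it.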
I'm trying to catch an Exception raised by a child process, and have been running into some issues. The gist of my code is:
class CustomException(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg

def update(partition):
    if os.getpid() % 2 == 0:
        raise CustomException('PID was divisible by 2!')
    else:
        pass  # Do something fancy

if __name__ == '__main__':
    try:
        some_response = get_response_from_another_method()
        partition_size = 100
        p = Pool(config.NUMBER_OF_PROCESSES)
        for i in range(0, NUMBER_OF_PROCESSES):
            partition = get_partition(some_response, partition_size)
            x = p.apply_async(update, args=(partition,))
            x.get()
        p.close()
        p.join()
    except CustomException as e:
        log.error('There was an error')
        if email_notifier.send_notification(e.msg):
            log.debug('Email notification sent')
        else:
            log.error('An error occurred while sending an email.')
When I run this, I am seeing:
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/pool.py", line 259, in _handle_results
    task = get()
TypeError: ('__init__() takes exactly 2 arguments (1 given)', <class 'CustomException'>, ())
Is there some facility to do this? Thanks!!
In short, this is something of a quirk in Python 2, and a related issue is referenced in this bug report. It has to do with how exceptions are pickled. The simplest solution is perhaps to alter CustomException so that it calls its parent class initializer. Alternatively, if you're able, I'd suggest moving to Python 3.
For example, this code works fine in both Python 2 and Python 3:
from multiprocessing import Pool

class CustomException(Exception):
    pass

def foo():
    raise CustomException('PID was divisible by 2!')

pool = Pool()
result = pool.apply_async(foo, [])
But if we alter CustomException so that it has a required argument:
class CustomException(Exception):
    def __init__(self, required):
        self.required = required
The above example results in a TypeError being raised under Python 2. It works under Python 3.
The problem is that CustomException inherits Exception's __reduce__ method, which tells Python how to pickle an instance. The inherited __reduce__ knows nothing about CustomException's call signature, so unpickling isn't done correctly.
A quick fix is to call the parent class's __init__ with the message, so that self.args is populated and the instance can be rebuilt on unpickling:

class CustomException(Exception):
    def __init__(self, msg):
        super(CustomException, self).__init__(msg)
        self.msg = msg
But since you really aren't doing anything special with the message, why not just define:
class CustomException(Exception):
    pass
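As a sanity check of the fix (my own demo, not from the question): once the message is passed up to Exception, self.args is populated, and the inherited __reduce__ can rebuild the instance when it is unpickled on the way back from the worker process.

```python
import pickle

class CustomException(Exception):
    def __init__(self, msg):
        # Passing msg to Exception populates self.args, which the
        # inherited __reduce__ uses to re-create the instance.
        super(CustomException, self).__init__(msg)
        self.msg = msg

e = pickle.loads(pickle.dumps(CustomException('PID was divisible by 2!')))
print(e.msg)   # PID was divisible by 2!
print(e.args)  # ('PID was divisible by 2!',)
```

The round trip here is exactly what multiprocessing does when it ships an exception from a worker back to the parent, so if this works, the Pool case works too.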