I'm trying to run a file in Python, and inside it is a class:
class MyClass(threading.Thread):
    def __init__(self, a, b, c, d):
        threading.Thread.__init__(self)
        self.varA = a
        self.varB = b
        self.varC = c
        self.varD = d
        print (self)
        self.run()

    def run(self):
        ...
In my file I create several threads, but I get this traceback:
Exception in thread (nameThread):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
TypeError: 'MyClass' object is not callable
This happens with all the threads, and I'm confused: in the main thread I print the state of every thread right after creation, and first it says 'started', but just after that it says 'stopped'.
Any help is appreciated, thanks. (Sorry for the misspellings; it's been a long time since I wrote in English.)
Here is the code that starts the threads:
for i in range(1, X):
    print ('inside' + str(i))  # for debug
    nomb = 'thred' + str(i)
    t = threading.Thread(target=surtidor(i, fin, estado, s), name='THREAD' + str(i))
    hilos.append(t)
    t.start()
print (hilos)  # for debug
Hi again, updating the situation: I now do what Tim Peters said and call start(). The threads really run now, but first they still throw the same exception. I know they run because each one runs a loop and prints its name on every iteration. Any ideas why that is?
To emphasize what was already said in comments: DO NOT CALL .run(). As the docs say, .start()
arranges for the object’s run() method to be invoked
in a separate thread of control.
That's the only way .run() is intended to be used: invoked automatically by - and only by - .start().
That said, I'm afraid you haven't shown us the real cause of your trouble. You need to show the code you use to create and to start the threads. What you have shown cannot produce the error you're seeing.
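For what it's worth, the snippet in the question passes surtidor(i, fin, estado, s), i.e. the *result* of calling the function, as target. If surtidor happens to return a MyClass instance, that instance becomes the thread's target and the "object is not callable" error follows. target should be the callable itself, with the arguments passed separately in args. A sketch, with a stub surtidor standing in for the real one:

```python
import threading

results = []

def surtidor(i, fin, estado, s):
    # stub standing in for the real surtidor from the question
    results.append(i)

hilos = []
for i in range(1, 4):
    # pass the function itself; its arguments go in args
    t = threading.Thread(target=surtidor, args=(i, None, None, None),
                         name='THREAD' + str(i))
    hilos.append(t)
    t.start()

for t in hilos:
    t.join()
print(sorted(results))  # prints [1, 2, 3]
```

With target=surtidor (no parentheses), each thread calls the function itself when start() arranges for run() to execute.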
You should not call run() in __init__().
Here is what I would expect the normal use of a threading.Thread subclass to look like:
import threading

class MyClass(threading.Thread):
    def __init__(self, a, b, c, d):
        threading.Thread.__init__(self)
        self.varA = a
        self.varB = b
        self.varC = c
        self.varD = d
        print (self)
        # self.run()

    def run(self):
        print self.varA

if __name__ == '__main__':
    mc = MyClass('a', 'b', 'c', 'd')
    mc.start()
You should not leave out the code in def run(); without it, it is hard to tell the exact cause of the problem.
Related
Ok, this one has me tearing my hair out:
I have a multi-process program, with separate workers each working on a given task.
When a KeyboardInterrupt comes, I want each worker to save its internal state to a file, so it can continue where it left off next time.
HOWEVER...
It looks like the dictionary which contains information about the state is vanishing before this can happen!
How? The exit() function is accessing a more globally scoped version of the dictionary... and it turns out that the various run() functions (and the functions subordinate to run()) have been creating their own version of the variable.
Nothing strange about that...
Except...
All of them have been using the self. keyword.
Which, if my understanding is correct, should mean they are always accessing the instance-wide version of the variable... not creating their own!
Here's a simplified version of the code:
import multiprocessing
import atexit
import signal
import sys
import json

class Worker(multiprocessing.Process):
    def __init__(self, my_string_1, my_string_2):
        # Inherit the __init__ from Process, very important or we will get errors
        super(Worker, self).__init__()
        # Make sure we know what to do when called to exit
        atexit.register(self.exit)
        signal.signal(signal.SIGTERM, self.exit)
        self.my_dictionary = {
            'my_string_1' : my_string_1,
            'my_string_2' : my_string_2
        }

    def run(self):
        self.my_dictionary = {
            'new_string' : 'Watch me make weird stuff happen!'
        }
        try:
            while True:
                print(self.my_dictionary['my_string_1'] + " " + self.my_dictionary['my_string_2'])
        except (KeyboardInterrupt, SystemExit):
            self.exit()

    def exit(self):
        # Write the relevant data to file
        info_for_file = {
            'my_dictionary': self.my_dictionary
        }
        print(info_for_file) # For easier debugging
        save_file = open('save.log', 'w')
        json.dump(info_for_file, save_file)
        save_file.close()
        # Exit
        sys.exit()

if __name__ == '__main__':
    strings_list = ["Hello", "World", "Ehlo", "Wrld"]
    instances = []
    try:
        for i in range(len(strings_list) - 2):
            my_string_1 = strings_list[i]
            my_string_2 = strings_list[i + 1]
            instance = Worker(my_string_1, my_string_2)
            instances.append(instance)
            instance.start()
        for instance in instances:
            instance.join()
    except (KeyboardInterrupt, SystemExit):
        for instance in instances:
            instance.exit()
            instance.close()
On run we get the following traceback...
Process Worker-2:
Process Worker-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "<stdin>", line 18, in run
File "<stdin>", line 18, in run
KeyError: 'my_string_1'
KeyError: 'my_string_1'
In other words, even though the key my_string_1 was explicitly added during __init__(), the run() function is accessing a new version of self.my_dictionary which does not contain that key!
Again, this would be expected if we were dealing with a normal variable (my_dictionary instead of self.my_dictionary) but I thought that self.variables were always instance-wide...
What is going on here?
Your problem can basically be represented by the following:
class Test:
    def __init__(self):
        self.x = 1

    def run(self):
        self.x = 2
        if self.x != 1:
            print("self.x isn't 1!")

t = Test()
t.run()
Note what run is doing.
You overwrite your instance member self.my_dictionary with incompatible data when you write
self.my_dictionary = {
    'new_string' : 'Watch me make weird stuff happen!'
}
Then try to use that incompatible data when you say
print(self.my_dictionary['my_string_1']...
It's not clear precisely what your intent is when you overwrite my_dictionary, but that's why you're getting the error. You'll need to rethink your logic.
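If the intent was to add a key rather than replace the dict wholesale, mutating the existing dict (e.g. with update()) keeps the keys set in __init__. A sketch along the lines of the question's Worker (the new value is made up for illustration):

```python
import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, my_string_1, my_string_2):
        super(Worker, self).__init__()
        self.my_dictionary = {
            'my_string_1': my_string_1,
            'my_string_2': my_string_2,
        }

    def run(self):
        # update() merges into the existing dict instead of rebinding
        # the attribute, so the keys from __init__ survive
        self.my_dictionary.update({'new_string': 'no weird stuff'})
        print(self.my_dictionary['my_string_1'] + " " +
              self.my_dictionary['my_string_2'])
```

Bear in mind that once start() is called, run() executes in a child process working on a copy of self.my_dictionary; changes made there are not reflected in the parent's instance, which matters for any exit-time save done in the parent.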
import multiprocessing

class multiprocessing_issue:
    def __init__(self):
        self.test_mp()

    def print_test(self):
        print "TEST TEST TEST"

    def test_mp(self):
        p = multiprocessing.Pool(processes=4)
        p.apply_async(self.print_test, args=())
        print "finished"

if __name__ == '__main__':
    multiprocessing_issue()
I've set up a simple test case above: create a class and call apply_async with a method that should print "TEST TEST TEST". When I run this I see "finished" printed, but it never prints "TEST TEST TEST" as expected.
Can anyone spot the error in this simple test case? I've set it up to reproduce the way I'm using it in my code.
Python 2.7 on Ubuntu
Modify test_mp as follows:
def test_mp(self):
    p = multiprocessing.Pool(processes=4)
    r = p.apply_async(self.print_test, args=())
    print r.get()
and the answer will be more clear.
Traceback (most recent call last):
File "test.py", line 18, in <module>
multiprocessing_issue()
File "test.py", line 6, in __init__
self.test_mp()
File "test.py", line 14, in test_mp
print r.get()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
cPickle.PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
Instance methods cannot be serialized that easily. When the pickle protocol serializes a function, it simply turns it into a string reference:
In [1]: dumps(function)
Out[1]: 'c__main__\nfunction\np0\n.'
It would be quite hard for a child process to find the object your instance method is bound to, because the processes have separate address spaces.
Modules such as dill do a better job than pickle here. Still, I would discourage you from mixing concurrency and OOP, as the logic gets confusing pretty easily.
Ah, it's a problem moving the class reference between processes; if I define the method at the module level instead of the class level, everything works:
import multiprocessing

class multiprocessing_issue:
    def __init__(self):
        self.test_mp()

    def test_mp(self):
        p = multiprocessing.Pool(4)
        r = p.apply_async(mptest, args=())
        r.get()
        print "finished"

def mptest():
    print "TEST TEST TEST"

if __name__ == '__main__':
    multiprocessing_issue()
I am trying to write some multiprocessing code for my project and have created a snippet of what I want to do, but it is not working as I expect. Can you please tell me what is wrong with it?
from multiprocessing import Process, Pipe
import time

class A:
    def __init__(self,rpipe,spipe):
        print "In the function fun()"

    def run(self):
        print"in run method"
        time.sleep(5)
        message = rpipe.recv()
        message = str(message).swapcase()
        spipe.send(message)

workers = []
my_pipe_1 = Pipe(False)
my_pipe_2 = Pipe(False)
proc_handle = Process(target = A, args=(my_pipe_1[0], my_pipe_2[1],))
workers.append(proc_handle)
proc_handle.run()

my_pipe_1[1].send("hello")
message = my_pipe_2[0].recv()
print message

print "Back in the main function now"
The traceback displayed when I press Ctrl-C:
^CTraceback (most recent call last):
File "sim.py", line 22, in <module>
message = my_pipe_2[0].recv()
KeyboardInterrupt
When I run the above code, the main process does not continue after calling proc_handle.run(). Why is this?
You've misunderstood how to use Process. You're creating a Process object and passing it a class as target, but target is meant to be a callable (usually a function) that Process.run then executes. So in your case it just instantiates A inside Process.run, and that's it.
You should instead make your A class a Process subclass, and just instantiate it directly:
#!/usr/bin/python
from multiprocessing import Process, Pipe
import time

class A(Process):
    def __init__(self,rpipe,spipe):
        print "In the function fun()"
        super(A, self).__init__()
        self.rpipe = rpipe
        self.spipe = spipe

    def run(self):
        print"in run method"
        time.sleep(5)
        message = self.rpipe.recv()
        message = str(message).swapcase()
        self.spipe.send(message)

if __name__ == "__main__":
    workers = []
    my_pipe_1 = Pipe(False)
    my_pipe_2 = Pipe(False)
    proc_handle = A(my_pipe_1[0], my_pipe_2[1])
    workers.append(proc_handle)
    proc_handle.start()

    my_pipe_1[1].send("hello")
    message = my_pipe_2[0].recv()
    print message

    print "Back in the main function now"
mgilson was right, though. You should call start(), not run(), to make A.run execute in a child process.
With these changes, the program works fine for me:
dan#dantop:~> ./mult.py
In the function fun()
in run method
HELLO
Back in the main function now
Taking a stab at this one, I think it's because you're calling proc_handle.run() instead of proc_handle.start().
The former is the activity the process is going to perform; the latter actually arranges for run to be called in a separate process. In other words, you're never forking the process, so there's no other process for my_pipe_1[1] to communicate with, and it hangs.
I'm not sure why, but yesterday I was testing some multiprocessing code that I wrote and it was working fine. Then today when I checked the code again, it would give me this error:
Exception in thread Thread-5:
Traceback (most recent call last):
File "C:\Python32\lib\threading.py", line 740, in _bootstrap_inner
self.run()
File "C:\Python32\lib\threading.py", line 693, in run
self._target(*self._args, **self._kwargs)
File "C:\Python32\lib\multiprocessing\pool.py", line 342, in _handle_tasks
put(task)
File "C:\Python32\lib\multiprocessing\pool.py", line 439, in __reduce__
'pool objects cannot be passed between processes or pickled'
NotImplementedError: pool objects cannot be passed between processes or pickled
The structure of my code goes as follows:
* I have 2 modules, say A.py, and B.py.
* A.py has class defined in it called A.
* B.py similarly has class B.
* In class A I have a multiprocessing pool as one of the attributes.
* The pool is defined in A.__init__(), but used in another method - run()
* In A.run() I set some attributes of some objects of class B (which are collected in a list called objBList), and then I use pool.map(processB, objBList)
* processB() is a module-level function (in A.py) that receives an instance of B as its only parameter and calls B.runInputs()
* the error happens at the pool.map() line.
basically in A.py:
class A:
    def __init__(self):
        self.pool = multiprocessing.Pool(7)

    def run(self):
        for b in objBList:
            b.inputs = something
        result = self.pool.map(processB, objBList)
        return list(result)

def processB(objB):
    objB.runInputs()
and in B.py:
class B:
    def runInputs(self):
        do_something()
BTW, I'm forced to use the processB() module function because of the way multiprocessing works on Windows.
Also I would like to point out that the error I am getting (that the pool can't be pickled) shouldn't refer to any part of my code, as I'm not trying to send any Pool objects to the child processes.
Any ideas?
(PS: I should also mention that in between the two days that I was testing this function the computer restarted unexpectedly - possibly after installing windows updates.)
Perhaps your class B objects contain a reference to your A instance.
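One way to check that hypothesis: if each B holds a back-reference to the A that owns the pool, pickling a B drags the pool along with it and triggers exactly this NotImplementedError. A stripped-down reproduction (the parent attribute is hypothetical, standing in for whatever back-reference the real B might hold):

```python
import multiprocessing
import pickle

class A(object):
    def __init__(self):
        self.pool = multiprocessing.Pool(2)

class B(object):
    def __init__(self, parent=None):
        self.parent = parent  # hypothetical back-reference to an A

if __name__ == '__main__':
    a = A()
    pickle.dumps(B())  # fine: no reference to the pool
    try:
        pickle.dumps(B(parent=a))  # drags a.pool into the pickle
    except NotImplementedError as e:
        print(e)  # "pool objects cannot be passed between processes or pickled"
    a.pool.close()
```

If this matches what is happening, the fix is to strip the back-reference before mapping (or copy only the data the workers need into plain objects).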
I'm trying to read user input from a thread in Python, as follows:
import threading, time, random

var = True

class MyThread(threading.Thread):
    def set_name(self, name):
        self.name = name

    def run(self):
        global var
        while var == True:
            print "In mythread " + self.name
            time.sleep(random.randint(2,5))

class MyReader(threading.Thread):
    def run(self):
        global var
        while var == True:
            input = raw_input("Quit?")
            if input == "q":
                var = False

t1 = MyThread()
t1.set_name("One")
t2 = MyReader()
t1.start()
t2.start()
However, if I enter 'q', I see the following error.
In mythread One
Quit?q
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 522, in __bootstrap_inner
self.run()
File "test.py", line 20, in run
input = raw_input("Quit?")
EOFError
In mythread One
In mythread One
How does one get user input from a thread?
Your code is a bit strange. If you are using the reader strictly to quit the program, why not keep it outside the threading code entirely? For your purposes it doesn't need to be in a thread, and it won't work in one.
Regardless, I don't think you want to take this road. Consider this problem: multiple threads stop to ask for input simultaneously, and the user types input. To which thread should it go? I would advise restructuring the code to avoid this need.
Also, all reads/writes to var should be locked.
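One way to restructure along those lines (a Python 3 sketch; use raw_input on Python 2): keep stdin in the main thread and replace the shared var flag with a threading.Event, which is safe to set and check across threads without an explicit lock:

```python
import threading

stop = threading.Event()

class MyThread(threading.Thread):
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        # check the event instead of a bare global flag
        while not stop.is_set():
            print("In mythread " + self.name)
            # wait() doubles as an interruptible sleep: it returns
            # early as soon as stop is set
            stop.wait(0.5)

def main():
    # only the main thread touches stdin
    t1 = MyThread("One")
    t1.start()
    while input("Quit?") != "q":
        pass
    stop.set()
    t1.join()
```

With this structure there is never a question of which thread owns the prompt, and the worker exits promptly once the event is set.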