I'm trying to read user input from a thread in Python, as follows:
    import threading, time, random

    var = True

    class MyThread(threading.Thread):
        def set_name(self, name):
            self.name = name

        def run(self):
            global var
            while var == True:
                print "In mythread " + self.name
                time.sleep(random.randint(2,5))

    class MyReader(threading.Thread):
        def run(self):
            global var
            while var == True:
                input = raw_input("Quit?")
                if input == "q":
                    var = False

    t1 = MyThread()
    t1.set_name("One")
    t2 = MyReader()
    t1.start()
    t2.start()
However, if I enter 'q', I see the following error.
    In mythread One
    Quit?q
    Exception in thread Thread-2:
    Traceback (most recent call last):
      File "/usr/lib/python2.6/threading.py", line 522, in __bootstrap_inner
        self.run()
      File "test.py", line 20, in run
        input = raw_input("Quit?")
    EOFError
    In mythread One
    In mythread One
How does one get user input from a thread?
Your code is a bit strange. If you are using the reader strictly to quit the program, why not have it outside the threading code entirely? It doesn't need to be in the thread, for your purposes, and won't work in the thread.
Regardless, I don't think you want to take this road. Consider this problem: multiple threads stop to ask for input simultaneously, and the user types input. To which thread should it go? I would advise restructuring the code to avoid this need.
Also, all reads and writes to var should be locked.
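For example, here is a minimal sketch of that restructuring, keeping the question's Python 2 style (the var_lock name and the main-thread input loop are my own illustration, not the only way to do it): the worker polls a lock-protected flag, and raw_input stays in the main thread, where prompts and EOF behave normally.

    import threading, time, random

    var = True
    var_lock = threading.Lock()

    class MyThread(threading.Thread):
        def run(self):
            while True:
                with var_lock:
                    if not var:
                        break
                print "In mythread " + self.name
                time.sleep(random.randint(2, 5))

    t1 = MyThread(name="One")
    t1.start()

    # Read input in the main thread, not in a worker thread
    while raw_input("Quit?") != "q":
        pass
    with var_lock:
        var = False
    t1.join()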
Related
I want to run two processes at the same time: one will keep printing 'a' every second, and the other will ask for an input. When the input is 'Y', the first process will stop printing 'a'. I am fairly new to Python and I can't figure it out...
This is what I came up with so far:
    from multiprocessing import Process
    import time

    go = True

    def loop_a():
        global go
        while go == True:
            time.sleep(1)
            print("a")

    def loop_b():
        global go
        text = input('Y/N?')
        if text == 'Y':
            go = False

    if __name__ == '__main__':
        Process(target=loop_a).start()
        Process(target=loop_b).start()
This is the error message I'm getting:
    Process Process-2:
    Traceback (most recent call last):
      File "C:\Users\Tip\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 249, in _bootstrap
        self.run()
      File "C:\Users\Tip\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "F:\ProgrammingTK\PROGproject\test.py", line 15, in loop_b
        text = input('Y/N?')
    EOFError: EOF when reading a line
Expanding upon jasonharper's comment, as he is correct.

There are a couple of issues:

The go variable is not shared among the processes. As Jason suggested, you can use something like Manager from multiprocessing to share a value among multiple processes. Technically, the go variable will be copied into each process, but it won't be shared between them, so a change in one process won't be seen by the other.

Again, as he mentioned, you need to pull the input(..) into the main thread of the program. Also, if you are on 2.7 you will need to use raw_input(..).

Also, if you only check the flag once and then exit, you'll likely hit a BrokenPipeError.
Taking that in, you can try something like this:
    from multiprocessing import Process, Manager
    import time

    def loop_a(go):
        while True:
            # run forever and print out the msg if the flag is set
            time.sleep(1)
            if go.value:
                print("a")

    if __name__ == '__main__':
        # shared value flag
        manager = Manager()
        go_flag = manager.Value('flag', True)

        # other process that is printing
        Process(target=loop_a, args=(go_flag,)).start()

        # normal main thread; toggle on and off the other process
        while True:
            text = input('Stop Y/N?')
            if text == 'Y':
                go_flag.value = False
                print("Changed the flag {}".format(go_flag.value))
            else:
                go_flag.value = True
                print("Changed the flag {}".format(go_flag.value))
Ok, this one has me tearing my hair out:
I have a multi-process program, with separate workers each working on a given task.
When a KeyboardInterrupt comes, I want each worker to save its internal state to a file, so it can continue where it left off next time.
HOWEVER...
It looks like the dictionary which contains information about the state is vanishing before this can happen!
How? The exit() function is accessing a more globally scoped version of the dictionary... and it turns out that the various run() functions (and the functions subordinate to run()) have been creating their own version of the variable.
Nothing strange about that...
Except...
All of them have been using the self. keyword.
Which, if my understanding is correct, should mean they are always accessing the instance-wide version of the variable... not creating their own!
Here's a simplified version of the code:
    import multiprocessing
    import atexit
    import signal
    import sys
    import json

    class Worker(multiprocessing.Process):

        def __init__(self, my_string_1, my_string_2):
            # Inherit the __init__ from Process, very important or we will get errors
            super(Worker, self).__init__()

            # Make sure we know what to do when called to exit
            atexit.register(self.exit)
            signal.signal(signal.SIGTERM, self.exit)

            self.my_dictionary = {
                'my_string_1' : my_string_1,
                'my_string_2' : my_string_2
            }

        def run(self):
            self.my_dictionary = {
                'new_string' : 'Watch me make weird stuff happen!'
            }
            try:
                while True:
                    print(self.my_dictionary['my_string_1'] + " " + self.my_dictionary['my_string_2'])
            except (KeyboardInterrupt, SystemExit):
                self.exit()

        def exit(self):
            # Write the relevant data to file
            info_for_file = {
                'my_dictionary': self.my_dictionary
            }
            print(info_for_file) # For easier debugging
            save_file = open('save.log', 'w')
            json.dump(info_for_file, save_file)
            save_file.close()
            # Exit
            sys.exit()

    if __name__ == '__main__':
        strings_list = ["Hello", "World", "Ehlo", "Wrld"]
        instances = []
        try:
            for i in range(len(strings_list) - 2):
                my_string_1 = strings_list[i]
                my_string_2 = strings_list[i + 1]
                instance = Worker(my_string_1, my_string_2)
                instances.append(instance)
                instance.start()
            for instance in instances:
                instance.join()
        except (KeyboardInterrupt, SystemExit):
            for instance in instances:
                instance.exit()
                instance.close()
On run we get the following traceback...
    Process Worker-2:
    Process Worker-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
        self.run()
    Traceback (most recent call last):
      File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
        self.run()
      File "<stdin>", line 18, in run
      File "<stdin>", line 18, in run
    KeyError: 'my_string_1'
    KeyError: 'my_string_1'
In other words, even though the key my_string_1 was explicitly added during init, the run() function is accessing a new version of self.my_dictionary which does not contain that key!
Again, this would be expected if we were dealing with a normal variable (my_dictionary instead of self.my_dictionary) but I thought that self.variables were always instance-wide...
What is going on here?
Your problem can basically be represented by the following:
    class Test:
        def __init__(self):
            self.x = 1

        def run(self):
            self.x = 2
            if self.x != 1:
                print("self.x isn't 1!")

    t = Test()
    t.run()
Note what run is doing.
You overwrite your instance member self.my_dictionary with incompatible data when you write
    self.my_dictionary = {
        'new_string' : 'Watch me make weird stuff happen!'
    }
Then try to use that incompatible data when you say
    print(self.my_dictionary['my_string_1']...
It's not clear precisely what your intent is when you overwrite my_dictionary, but that's why you're getting the error. You'll need to rethink your logic.
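If the intent was simply to add new_string while keeping the keys set in __init__, one possible fix (a sketch of one option, not necessarily the author's goal) is to mutate the existing dictionary rather than rebind the attribute:

    def run(self):
        # Add to the existing dict instead of replacing it,
        # so 'my_string_1' and 'my_string_2' survive
        self.my_dictionary['new_string'] = 'Watch me make weird stuff happen!'
        try:
            while True:
                print(self.my_dictionary['my_string_1'] + " " + self.my_dictionary['my_string_2'])
        except (KeyboardInterrupt, SystemExit):
            self.exit()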
I have a very simple multi-threaded Python program with two threads trying to pop and print from a queue. I use a lock to ensure mutual exclusion. Everything works fine, except:

1. If I import Python's built-in Queue, the program exits on KeyboardInterrupt from the terminal.
2. If I define a custom class Queue(object) (internally implemented as a list), the threads keep printing to the terminal even after a KeyboardInterrupt.
Here is my code: https://ideone.com/ArTcwE (Although you cannot test KeyboardInterrupt on ideone)
PS: I've gone through Close multi threaded application with KeyboardInterrupt already. It doesn't solve my problem.
UPDATE 1: I understand (thanks to #SuperSaiyan's answer) why the threads would continue to work in scenario# 2 - the master function died before job_done could be set to True. Hence, the threads kept waiting for the signal to arrive. But what's strange is that even in scenario# 1, job_done is still False. The threads somehow get killed:
    >>> execfile('threaded_queue.py')
    Starting Q1Starting Q2
    Q1 got 0
    Q2 got 1
    Q1 got 2
    Q1 got 3
    Traceback (most recent call last):
      File "<pyshell#68>", line 1, in <module>
        execfile('threaded_queue.py')
      File "threaded_queue.py", line 54, in <module>
        while not q.empty():
    KeyboardInterrupt
    >>> job_done
    False
    >>>
UPDATE 2: Pasting the code here for permanency:
    from time import time, ctime, sleep
    from threading import Thread, Lock
    from Queue import Queue

    class MyQueue(object):
        def __init__(self):
            self.store = []

        def put(self, value):
            self.store.append(value)

        def get(self):
            return self.store.pop(0)

        def empty(self):
            return not self.store

    class SyncQueue(Thread):
        __lock = Lock()

        def __init__(self, name, delay, queue):
            Thread.__init__(self)
            self.name = name
            self.delay = delay
            self.queue = queue

        def run(self):
            print "Starting %s" % self.name
            while not self.queue.empty():
                with self.__lock:
                    print "%s got %s" % (
                        self.name,
                        self.queue.get())
                sleep(self.delay)
            while not job_done:
                sleep(self.delay)
            print "Exiting %s" % self.name

    if __name__ == "__main__":
        job_done = False
        #q = Queue(5) # Python's Queue
        q = MyQueue() # Custom Queue
        for i in xrange(5):
            q.put(i)
        q1 = SyncQueue("Q1", .5, q)
        q2 = SyncQueue("Q2", 1, q)
        q1.start()
        q2.start()

        # Wait for the job to be done
        while not q.empty():
            pass
        job_done = True

        q1.join()
        q2.join()
        print "All done!"
Your problem is not your custom Queue vs. Python's Queue; it is something else altogether. Further, even with Python's Queue implementation you would see the same behaviour.

This is because your main thread dies when you press Ctrl+C, before it is able to signal the other two threads to exit (using job_done = True).
What you need is a mechanism to tell your other two threads to exit. Below is a simple mechanism -- you might need something more robust but you'd get the idea:
    try:
        while not job_done:
            time.sleep(0.1)  # Try using this instead of the CPU-intensive `pass`.
    except KeyboardInterrupt as e:
        job_done = True
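If you want something more robust than the shared boolean, one option (my suggestion, not the only approach) is a threading.Event, which removes the global flag entirely. A stripped-down sketch in the question's Python 2 style:

    from time import sleep
    from threading import Thread, Event

    job_done = Event()

    def worker(name, delay):
        # Wait until the main thread signals completion
        while not job_done.is_set():
            sleep(delay)
        print "Exiting %s" % name

    if __name__ == "__main__":
        threads = [Thread(target=worker, args=("Q1", .5)),
                   Thread(target=worker, args=("Q2", 1))]
        for t in threads:
            t.start()
        try:
            sleep(5)  # stand-in for the real work loop
        except KeyboardInterrupt:
            pass
        finally:
            job_done.set()  # always signal the workers to exit
            for t in threads:
                t.join()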
I am trying to create some multiprocessing code for my project. I have created a snippet of the things that I want to do. However, it's not working as I expect. Can you please let me know what is wrong with it?
    from multiprocessing import Process, Pipe
    import time

    class A:
        def __init__(self,rpipe,spipe):
            print "In the function fun()"

        def run(self):
            print "in run method"
            time.sleep(5)
            message = rpipe.recv()
            message = str(message).swapcase()
            spipe.send(message)

    workers = []
    my_pipe_1 = Pipe(False)
    my_pipe_2 = Pipe(False)
    proc_handle = Process(target = A, args=(my_pipe_1[0], my_pipe_2[1],))
    workers.append(proc_handle)
    proc_handle.run()

    my_pipe_1[1].send("hello")
    message = my_pipe_2[0].recv()
    print message

    print "Back in the main function now"
The traceback displayed when I press Ctrl-C:
    ^CTraceback (most recent call last):
      File "sim.py", line 22, in <module>
        message = my_pipe_2[0].recv()
    KeyboardInterrupt
When I run the above code, the main process does not continue after calling proc_handle.run(). Why is this?
You've misunderstood how to use Process. You're creating a Process object, and passing it a class as target, but target is meant to be passed a callable (usually a function) that Process.run then executes. So in your case it's just instantiating A inside Process.run, and that's it.
You should instead make your A class a Process subclass, and just instantiate it directly:
    #!/usr/bin/python
    from multiprocessing import Process, Pipe
    import time

    class A(Process):
        def __init__(self, rpipe, spipe):
            print "In the function fun()"
            super(A, self).__init__()
            self.rpipe = rpipe
            self.spipe = spipe

        def run(self):
            print "in run method"
            time.sleep(5)
            message = self.rpipe.recv()
            message = str(message).swapcase()
            self.spipe.send(message)

    if __name__ == "__main__":
        workers = []
        my_pipe_1 = Pipe(False)
        my_pipe_2 = Pipe(False)
        proc_handle = A(my_pipe_1[0], my_pipe_2[1])
        workers.append(proc_handle)
        proc_handle.start()

        my_pipe_1[1].send("hello")
        message = my_pipe_2[0].recv()
        print message

        print "Back in the main function now"
mgilson was right, though. You should call start(), not run(), to make A.run execute in a child process.
With these changes, the program works fine for me:
    dan@dantop:~> ./mult.py
    In the function fun()
    in run method
    HELLO
    Back in the main function now
Taking a stab at this one, I think it's because you're calling proc_handle.run() instead of proc_handle.start().
The former is the activity that the process is going to do; the latter actually arranges for run to be called in a separate process. In other words, you're never forking the process, so there's no other process for my_pipe_1[1] to communicate with, and it hangs.
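A minimal illustration of the distinction, reusing the question's proc_handle:

    proc_handle.start()  # starts a child process, which then calls run() there
    proc_handle.join()   # wait for the child to finish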
I'm trying to run a file in Python, and inside of it is a class:
    class MyClass(threading.Thread):
        def __init__(self, a, b, c, d):
            threading.Thread.__init__(self)
            self.varA = a
            self.varB = b
            self.varC = c
            self.varD = d
            print (self)
            self.run()

        def run(self):
            ...
In my file I create several threads, but I get this traceback:
    Exception in thread (nameThread):
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 763, in run
        self.__target(*self.__args, **self.__kwargs)
    TypeError: 'MyClass' object is not callable
This happens with all threads. I'm confused: in MainThread I print the state of every thread after creation, and first it says 'started', but just after that it says 'stopped'.

Any help is appreciated, thanks. Sorry for the misspellings; it's been a long time since I've written in English.

Here is the code that starts the threads:
    for i in range(1, X):
        print ('inside' + str(i))  # for debug
        nomb = 'thred' + str(i)
        t = threading.Thread(target=surtidor(i, fin, estado, s), name='THREAD' + str(i))
        hilos.append(t)
        t.start()
    print (hilos)  # for debug
Hi again, updating the situation: now I do what Tim Peters says and call start().

The threads really run now, but first they throw the same exception. I know they run because they execute a loop, and on every iteration they print their names. Any ideas why that is?
To emphasize what was already said in comments: DO NOT CALL .run(). As the docs say, .start()
arranges for the object’s run() method to be invoked
in a separate thread of control.
That's the only way .run() is intended to be used: invoked automatically by - and only by - .start().
That said, I'm afraid you haven't shown us the real cause of your trouble. You need to show the code you use to create and to start the threads. What you have shown cannot produce the error you're seeing.
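One guess, though: in the update's loop, target=surtidor(i, fin, estado, s) calls surtidor immediately and passes its return value (apparently a MyClass instance, going by the traceback) as the thread's target, which the thread then tries to call. The usual form passes the callable and its arguments separately:

    # Pass the function itself; do not call it at Thread-creation time
    t = threading.Thread(target=surtidor, args=(i, fin, estado, s),
                         name='THREAD' + str(i))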
You should not call run() in __init__.

Here is what I would expect as the normal use case of a threading class:
    import threading

    class MyClass(threading.Thread):
        def __init__(self, a, b, c, d):
            threading.Thread.__init__(self)
            self.varA = a
            self.varB = b
            self.varC = c
            self.varD = d
            print (self)
            # self.run()

        def run(self):
            print self.varA

    if __name__ == '__main__':
        mc = MyClass('a', 'b', 'c', 'd')
        mc.start()
You should not leave out the code in def run(); without it, it is hard to tell the exact cause of the problem.