Is there a way to do this? I was thinking maybe of using subprocess or multiprocessing, but I am not sure how to go about it.
I don't have any example code because it is just a general question.
http://docs.python.org/library/subprocess.html
EDIT: I may have missed what you want, but here is everything I can think of for your question.
subprocess.call(["ls","-lAd"]) # executes external program. Like system()
# Capture output. Like popen, as I remember.
subprocess.check_output(["echo", "Hello World!"])
os.fork() # Binding to fork()
import threading

class MyThread(threading.Thread):
    def run(self):
        print("Hello from thread")

MyThread().start()
Yes, there is.
Python provides two different ways of doing this: threading and multiprocessing. Which one you should use depends on the operation you are performing.
The main difference is that threading executes the function in the same interpreter, while multiprocessing starts a new interpreter and runs the function there. This means multiprocessing is generally used for CPU-bound operations, like adding a lot of numbers, while threading is used for IO-bound operations, like reading input or waiting for something to happen.
Threading example:
from threading import Thread
import time

def fun(a):
    global myVar
    myVar = "post start"  # myVar is updated here and can be read by the main thread
    time.sleep(1)
    f = input(a)
    print(f"inputted {f}")

myVar = "preThread"
t = Thread(target=fun,
           args=("please input your message ",))
t.start()  # start the thread
print("this will run after the thread started", myVar)
t.join()  # wait for the thread to finish executing
print("this will run after the thread ended", myVar)
Output:
this will run after the thread started post start
please input your message k
inputted k
this will run after the thread ended post start
If you use the multiprocessing library instead, it starts a new Python interpreter and all values are copied into it, so the update to myVar is never seen by the main process, and the child's input() call won't work the way it does with a thread:
from multiprocessing import Process
import time

def fun(a):
    global myVar
    myVar = "post start"  # this update happens in the child process only
    time.sleep(1)
    f = input(a)
    print(f"inputted {f}")

if __name__ == "__main__":
    myVar = "preThread"
    t = Process(target=fun,
                args=("please input your message ",))
    t.start()  # start the process
    print("this will run after the process started", myVar)
    t.join()  # wait for the process to finish executing
    print("this will run after the process ended", myVar)
Output:
this will run after the process started preThread
this will run after the process ended preThread
If you want to know more, please read:
https://docs.python.org/3/library/threading.html for threading
https://docs.python.org/3/library/multiprocessing.html for multiprocessing
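For the CPU-bound case mentioned above, a pool of processes is the usual tool. A minimal sketch (the sum_range function and the input sizes are made up for illustration):

from multiprocessing import Pool

def sum_range(n):
    # CPU-bound work: add a lot of numbers
    return sum(range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # each sum runs in its own worker process, so they can use separate cores
        results = pool.map(sum_range, [10**6, 2 * 10**6, 3 * 10**6, 4 * 10**6])
    print(results)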
Related
I'm having a problem with a Python script using threads. I could reproduce the problem with the following code:
from threading import Thread

def func1():
    while True:
        print 'Function 1'

def main():
    t = Thread(target = func1)
    t.start()
    for i in xrange(100000):
        print 'Main'
    t.stop()
    print 'End'

if __name__ == '__main__':
    main()
The problem is when I interrupt the script with Ctrl + C or when it reaches its end, the thread running func1() won't stop.
I can only interrupt the execution by opening another terminal and running killall python.
This is the first time I'm working with threads in Python. What am I doing wrong?
My approach (perhaps not the best, but it works) is as follows (a sketch of the pattern is shown below the list):
1. Have a variable that the thread checks to see if it should stop.
2. Catch the Ctrl-C in your main function.
3. When caught, the main function sets the variable that indicates the thread should stop.
4. The main function calls join() on the thread to wait for it to finish.
5. The thread checks the variable, sees it is set, and returns (stops).
6. The join() returns and you allow your main function to exit.
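A minimal sketch of this pattern (the worker loop and its sleep interval are placeholders for real work):

import threading
import time

stop_requested = False  # the variable the thread checks

def worker():
    while not stop_requested:
        time.sleep(0.1)  # placeholder for one unit of real work

t = threading.Thread(target=worker)
t.start()
try:
    while t.is_alive():
        t.join(timeout=0.5)  # a timeout keeps the main thread responsive to Ctrl-C
except KeyboardInterrupt:
    stop_requested = True  # signal the thread to stop
    t.join()  # wait for it to return
print('End')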
I've been trying to write an interactive wrapper (for use in ipython) for a library that controls some hardware. Some calls are heavy on the IO so it makes sense to carry out the tasks in parallel. Using a ThreadPool (almost) works nicely:
from multiprocessing.pool import ThreadPool

class hardware():
    def __init__(self, IPaddress):
        connect_to_hardware(IPaddress)

    def some_long_task_to_hardware(self, wtime):
        wait(wtime)
        result = 'blah'
        return result

pool = ThreadPool(processes=4)
threads = []
h = [hardware(IP1), hardware(IP2), hardware(IP3), hardware(IP4)]
for tt in range(4):
    task = pool.apply_async(h[tt].some_long_task_to_hardware, (1000,))
    threads.append(task)
alive = [True] * 4
try:
    while any(alive):
        for tt in range(4): alive[tt] = not threads[tt].ready()
        do_other_stuff_for_a_bit()
except:
    # some command I cannot find that will stop the threads...
    raise
for tt in range(4): print(threads[tt].get())
The problem comes if the user wants to stop the process or there is an IO error in do_other_stuff_for_a_bit(). Pressing Ctrl+C stops the main process but the worker threads carry on running until their current task is complete.
Is there some way to stop these threads without having to rewrite the library or have the user exit python? pool.terminate() and pool.join() that I have seen used in other examples do not seem to do the job.
The actual routine (instead of the simplified version above) uses logging, and although all the worker threads are shut down at some point, I can see the processes that they started carry on until complete (and, being hardware, I can see their effect by looking across the room).
This is in Python 2.7.
UPDATE:
The solution seems to be to switch to using multiprocessing.Process instead of a thread pool. The test code I tried is to run foo_pulse:
import time

class foo(object):
    def foo_pulse(self, nPulse, name):  # just one method of *many*
        print('starting pulse for ' + name)
        result = []
        for ii in range(nPulse):
            print('on for ' + name)
            time.sleep(2)
            print('off for ' + name)
            time.sleep(2)
            result.append(ii)
        return result, name
If you try running this using ThreadPool, then Ctrl-C does not stop foo_pulse from running: even though it kills the threads right away, the print statements keep on coming:
from multiprocessing.pool import ThreadPool
import time

def test(nPulse):
    a = foo()
    pool = ThreadPool(processes=4)
    threads = []
    for rn in range(4):
        r = pool.apply_async(a.foo_pulse, (nPulse, 'loop ' + str(rn)))
        threads.append(r)
    alive = [True] * 4
    try:
        while any(alive):  # wait until all threads complete
            for rn in range(4):
                alive[rn] = not threads[rn].ready()
            time.sleep(1)
    except:  # stop threads if user presses ctrl-c
        print('trying to stop threads')
        pool.terminate()
        print('stopped threads')  # this line prints but output from foo_pulse carried on
        raise
    else:
        for t in threads: print(t.get())
However a version using multiprocessing.Process works as expected:
import multiprocessing as mp
import time

def test_pro(nPulse):
    pros = []
    ans = []
    a = foo()
    for rn in range(4):
        q = mp.Queue()
        ans.append(q)
        r = mp.Process(target=wrapper, args=(a, "foo_pulse", q),
                       kwargs={'args': (nPulse, 'loop ' + str(rn))})
        r.start()
        pros.append(r)
    try:
        for p in pros: p.join()
        print('all done')
    except:  # stop processes if user stops findRes
        print('trying to stop processes')
        for p in pros: p.terminate()
        print('stopped processes')
    else:
        print('output here')
        for q in ans:
            print(q.get())
    print('exit time')
where I have defined a wrapper for the library foo (so that it does not need to be rewritten). If the return value is not needed, then neither is this wrapper:
def wrapper(a, target, q, args=(), kwargs={}):
    '''Used when a return value is wanted'''
    q.put(getattr(a, target)(*args, **kwargs))
From the documentation I see no reason why a pool would not work (other than a bug).
This is a very interesting use of parallelism.
However, if you are using multiprocessing, the goal is to have many processes running in parallel, as opposed to one process running many threads.
Consider these few changes to implement it using multiprocessing:
You have these functions that will run in parallel:
import time
import multiprocessing as mp

def some_long_task_from_library(wtime):
    time.sleep(wtime)

class MyException(Exception): pass

def do_other_stuff_for_a_bit():
    time.sleep(5)
    raise MyException("Something Happened...")
Let's create and start the processes, say 4:
procs = []  # this is not a Pool, it is just a way to handle the
            # processes instead of calling them p1, p2, p3, p4...
for _ in range(4):
    p = mp.Process(target=some_long_task_from_library, args=(1000,))
    p.start()
    procs.append(p)

mp.active_children()  # side effect: joins any processes that have already finished
The processes are running in parallel, presumably each on a separate CPU core, but that is for the OS to decide. You can check in your system monitor.
In the meantime you run a process that will break, and you want to stop the running processes without leaving them orphaned:
try:
    do_other_stuff_for_a_bit()
except MyException as exc:
    print(exc)
    print("Now stopping all processes...")
    for p in procs:
        p.terminate()
print("The rest of the process will continue")
If it doesn't make sense to continue with the main process when one or all of the subprocesses have terminated, you should handle the exit of the main program.
Hope it helps, and you can adapt bits of this for your library.
As to why Pool did not work: as noted in the documentation, __main__ needs to be importable by the child processes, and due to the nature of this project interactive Python is being used, so it is not.
At the same time it was not clear why ThreadPool would work, although the clue is right there in the name. ThreadPool creates its pool of workers using multiprocessing.dummy, which is just a wrapper around the threading module, so the workers are really threads; Pool uses multiprocessing.Process. This can be seen with this test:
>>> p = ThreadPool(processes=3)
>>> p._pool[0]
<DummyProcess(Thread23, started daemon 12345)>  # no terminate() method

>>> p = Pool(processes=3)
>>> p._pool[0]
<Process(PoolWorker-1, started daemon)>  # has a handy terminate() method if needed
As threads do not have a terminate method, the worker threads carry on running until they have completed their current task. Killing threads is messy (which is why I tried to use the multiprocessing module), but solutions are here.
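One of the less messy options is cooperative cancellation: have each worker poll a shared threading.Event and return early when it is set. A minimal sketch (the work loop is a stand-in for a real task):

import threading
import time

stop_event = threading.Event()

def worker(name):
    for _ in range(100):
        if stop_event.is_set():
            print(name + ' stopping early')
            return
        time.sleep(0.1)  # stand-in for one slice of real work

threads = [threading.Thread(target=worker, args=('worker %d' % n,)) for n in range(4)]
for t in threads: t.start()
try:
    while any(t.is_alive() for t in threads):
        time.sleep(0.5)
except KeyboardInterrupt:
    stop_event.set()  # ask all workers to stop at their next check
    for t in threads: t.join()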
The one warning about the solution using the above:
def wrapper(a, target, q, args=(), kwargs={}):
    '''Used when a return value is wanted'''
    q.put(getattr(a, target)(*args, **kwargs))
is that changes to attributes inside the instance of the object are not passed back up to the main program. As an example, the class foo above could also have methods such as:
def addIP(self, newIP):
    self.hardwareIP = newIP
A call to r = mp.Process(target=a.addIP, args=('127.0.0.1',)) does not update a in the parent process.
The only way around this for a complex object seems to be shared memory using a custom manager, which can give access to both the methods and attributes of object a. For a very large complex object based on a library, this may be best done using dir(foo) to populate the manager. If I can figure out how, I'll update this answer with an example (for my future self as much as for others).
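In the meantime, here is a minimal sketch of the custom-manager idea using multiprocessing.managers.BaseManager. The foo class here is cut down to the one attribute and method under discussion, and the getIP helper is added just for the demonstration (proxies expose methods, not raw attributes):

import multiprocessing as mp
from multiprocessing.managers import BaseManager

class foo(object):
    def __init__(self):
        self.hardwareIP = None
    def addIP(self, newIP):
        self.hardwareIP = newIP
    def getIP(self):
        return self.hardwareIP

class FooManager(BaseManager): pass
FooManager.register('foo', foo)  # expose foo through a manager proxy

def child_task(proxy):
    proxy.addIP('127.0.0.1')  # runs against the one shared instance, not a copy

if __name__ == '__main__':
    manager = FooManager()
    manager.start()
    a = manager.foo()  # a proxy to a foo living in the manager process
    p = mp.Process(target=child_task, args=(a,))
    p.start()
    p.join()
    print(a.getIP())  # prints 127.0.0.1: the child's change is visible here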
If for some reason using threads is preferable, we can do this.
We can send a signal to the threads we want to terminate. The simplest signal is a global variable:
import time
from multiprocessing.pool import ThreadPool

_FINISH = False

def hang():
    while True:
        if _FINISH:
            break
        print 'hanging..'
        time.sleep(10)

def main():
    global _FINISH
    pool = ThreadPool(processes=1)
    pool.apply_async(hang)
    time.sleep(10)
    _FINISH = True
    pool.terminate()
    pool.join()
    print 'main process exiting..'

if __name__ == '__main__':
    main()
I have a function I'm calling every 5 seconds, like so:
def check_buzz(super_buzz_words):
    print 'Checking buzz'
    t = Timer(5.0, check_buzz, args=(super_buzz_words,))
    t.dameon = True
    t.start()
    buzz_word = get_buzz_word()
    if buzz_word is not 'fail':
        super_buzz_words.put(buzz_word)

main()
check_buzz()
I'm exiting the script by either catching a KeyboardInterrupt or by catching a SystemExit and calling this:
sys.exit('\nShutting Down\n')
I'm also restarting the program every so often by calling:
execv(sys.executable, [sys.executable] + sys.argv)
My question is, how do I get that timer thread to shut off? If I keyboard interrupt, the timer keeps going.
I think you just spelled daemon wrong, it should have been:
t.daemon = True
Then sys.exit() should work.
Expanding on the answer from notorious.no, and the comment asking:
How can I call t.cancel() if I have no access to t outside the function?
Give the Timer thread a distinct name when you first create it:
import threading
from threading import Timer

def check_buzz(super_buzz_words):
    print 'Checking buzz'
    t = Timer(5.0, check_buzz, args=(super_buzz_words,))
    t.daemon = True
    t.name = "check_buzz_daemon"
    t.start()
Although the local variable t soon goes out of scope, the Timer thread that t pointed to still exists and still retains the name assigned to it.
Your atexit-registered method can then identify this thread by its name and cancel it:
from atexit import register

def all_done():
    for thr in threading.enumerate():
        if thr.name == "check_buzz_daemon":
            if thr.is_alive():
                thr.cancel()
                thr.join()

register(all_done)
Calling join() after calling cancel() is based on a StackOverflow answer by Cédric Julien.
HOWEVER, your thread is set to be a daemon. According to this StackOverflow post, daemon threads do not need to be explicitly terminated.
from atexit import register

def all_done():
    if t.is_alive():
        # do something that will close your thread gracefully
        pass

register(all_done)
Basically when your code is about to exit, it will fire one last function, and this is where you will check if your thread is still running. If it is, do something that will either cancel the transaction or otherwise exit gracefully. In general, it's best to let threads finish by themselves, but if the thread is not doing anything important (please note the emphasis) then you can just do t.cancel(). Design your code so that threads will finish on their own if possible.
Another way would be to use the Queue() module to send and receive info from a thread, using .put() outside the thread and .get() inside the thread.
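A minimal sketch of that idea (the 'stop' string is just a made-up sentinel; any agreed-upon value works):

import threading
import Queue  # this module is renamed to queue in Python 3

q = Queue.Queue()

def worker():
    while True:
        try:
            msg = q.get(timeout=1)  # .get() inside the thread
        except Queue.Empty:
            continue  # nothing received yet, keep going
        if msg == 'stop':
            break  # the main thread asked us to exit

t = threading.Thread(target=worker)
t.start()
q.put('stop')  # .put() outside the thread
t.join()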
What you can also do is create a txt file and make the program write to it when you exit, and put an if statement in the thread function to check the file after each iteration (this is not a really good solution, but it also works).
I would have included a code example, but I am writing from mobile, sorry.
I am just a beginner in Python. What I tried to achieve is making two threads and calling different functions in different threads. I made the function in thread 1 execute for 60 seconds while thread 2 executes simultaneously, with the main thread waiting for 70 seconds. When thread 1 exits it should also exit the second thread, then control should come back to the main thread, and the calls to thread 1 and thread 2 should go again and the same procedure repeat.
I tried achieving it using the code below, but I think I was not able to.
I have made a script in which I have started two threads, named thread 1 and thread 2.
In thread 1 a function named func1 will run, and in thread 2 a function named func2 will run.
Thread 1 will execute a command and wait for 60 seconds.
Thread 2 will run only while thread 1 is running.
After that, the same process continues in a while loop, after a break of 80 seconds.
I am a beginner in Python.
Please suggest what I have done wrong and how to correct it.
#!/usr/bin/python
import threading
import time
import subprocess
import datetime
import os
import thread

thread.start_new_thread( print_time, (None, None))
thread.start_new_thread( print_time1, (None, None))

command = "strace -o /root/Desktop/a.txt -c ./server"
final_dir = "/root/Desktop"
exitflag = 0

# Define a function for the thread
def print_time(*args):
    os.chdir(final_dir)
    print "IN first thread"
    proc = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait(70)
    exitflag=1

def print_time1(*args):
    print "In second thread"
    global exitflag
    while exitflag:
        thread.exit()
    #proc = subprocess.Popen(command1,shell=True,stdout=subprocess.PIPE, sterr=subprocess.PIPE)

# Create two threads as follows
try:
    while (1):
        t1=threading.Thread(target=print_time)
        t1.start()
        t2=threading.Thread(target=print_time1)
        t2=start()
        time.sleep(80)
        z = t1.isAlive()
        z1 = t2.isAlive()
        if z:
            z.exit()
        if z1:
            z1.exit()
        threading.Thread(target=print_time1).start()
        threading.Thread(target=print_time1).start()
        print "In try"
except:
    print "Error: unable to start thread"
I can't get the example to run; I need to change the function definitions to
def print_time(*args)
and the thread call to
thread.start_new_thread( print_time, (None, None))
Then you have a number of problems:
1. You are currently not waiting for the exitflag to be set in the second thread; it just runs to completion.
2. To share variables between threads you need to declare them global in the thread, otherwise you get a local variable.
3. thread.exit() in the print_time1 function generates an error.
4. Your timings in the problem description and in the code do not match.
So, to solve issues 1-3 for print_time1, declare it like this (removing exit from the end):
def print_time1(*args):
    global exitflag
    while exitflag == 0:  # wait for print_time
        next
    # Do stuff when thread is finalizing
But, check the doc for the thread module (https://docs.python.org/2/library/thread.html), "[...] however, you should consider using the high-level threading module instead."
import threading
...
while(1):
    threading.Thread(target=print_time).start()
    threading.Thread(target=print_time1).start()
    time.sleep(80)
One final thought about the code: you should check that the threads have actually finalized before starting new ones. Right now two new threads are started every 80 seconds, regardless of whether the old threads have run to completion or not. Unless this is the wanted behaviour, I would add a check for that in the while loop; a sketch of such a check follows. Also, while you are at it, move the try clause to be as close as possible to where the exception might be raised, i.e. where the threads are created. The way you have it now, with the try encapsulating a while loop, is not very common and IMO not very pythonic (it increases the complexity of the code).
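A minimal sketch of that check, assuming print_time and print_time1 are defined as above:

import threading
import time

while 1:
    t1 = threading.Thread(target=print_time)
    t2 = threading.Thread(target=print_time1)
    t1.start()
    t2.start()
    time.sleep(80)
    # make sure both threads have really finished before starting new ones
    t1.join()
    t2.join()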
I'm trying to run the following code (it is simplified a bit):
def RunTests(self):
    from threading import Thread
    import signal

    global keep_testing
    keep_testing = True

    signal.signal(signal.SIGINT, stop_running)

    for i in range(0, NumThreads):
        thread = Thread(target=foo)
        self._threads.append(thread)
        thread.start()

    # wait for all threads to finish
    for t in self._threads:
        t.join()

def stop_running(signl, frme):
    global keep_testing
    keep_testing = False
    print "Interrupted by the Master. Good bye!"
    return 0

def foo(self):
    global keep_testing
    while keep_testing:
        DO_SOME_WORK();
I expect that when the user presses Ctrl+C, the program will print the goodbye message and exit. However, it doesn't work. Where is the problem?
Thanks
Unlike regular processes, Python doesn't appear to handle signals in a truly asynchronous manner. The 'join()' call is somehow blocking the main thread in a manner that prevents it from responding to the signal. I'm a bit surprised by this since I don't see anything in the documentation indicating that this can/should happen. The solution, however, is simple. In your main thread, add the following loop prior to calling 'join()' on the threads:
while keep_testing:
    signal.pause()
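Putting the pieces together, a minimal self-contained sketch of that fix (the worker body is a placeholder, and signal.pause() is Unix-only):

import signal
import time
from threading import Thread

keep_testing = True

def stop_running(signum, frame):
    global keep_testing
    keep_testing = False
    print("Interrupted by the Master. Good bye!")

def foo():
    while keep_testing:
        time.sleep(0.1)  # placeholder for DO_SOME_WORK()

signal.signal(signal.SIGINT, stop_running)
threads = [Thread(target=foo) for _ in range(4)]
for t in threads:
    t.start()
while keep_testing:
    signal.pause()  # the main thread sleeps here but still responds to SIGINT
for t in threads:
    t.join()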
From the threading docs:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
You could try setting thread.daemon = True before calling start() and see if that solves your problem.
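A minimal sketch of that suggestion (the worker loop is a placeholder); note that the process simply exits without waiting for daemon threads, so there is no join():

import time
from threading import Thread

def foo():
    while True:
        time.sleep(0.1)  # placeholder for real work

thread = Thread(target=foo)
thread.daemon = True  # must be set before start()
thread.start()
time.sleep(1)
# when the main thread falls off the end here, the daemon thread dies with it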