I've been struggling with an issue for some time now.
I'm building a little script which uses a main loop. This is a process that needs some attention from the user: the user responds to the prompts, and then some magic happens, using a few functions.
Besides this, I want to spawn another process which monitors the computer system for specific events, such as presses of specific keys. If one of these events occurs, it should launch the same functions as when the user enters the right values.
So I need to make two processes:
- The main loop (which allows user interaction)
- The background "event scanner", which watches for specific events and reacts to them.
I'm trying this by launching a main loop and a daemonic multiprocessing process. The problem is that the background process starts when I launch it, but after that the main loop never runs.
I simplified everything a little to make it clearer:
import multiprocessing, sys, time

def main_loop():
    while 1:
        answer = input('What kind of food do you like?')
        print(answer)

def test():
    while 1:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    try:
        print('hello!')
        mProcess = multiprocessing.Process(target=test())
        mProcess.daemon = True
        mProcess.start()
        # after starting, the main loop does not run, while the test loop prints fine
        main_loop()
    except:
        sys.exit(0)
You should do
mProcess = multiprocessing.Process(target=test)
instead of
mProcess = multiprocessing.Process(target=test())
Your code actually calls test in the parent process while building the Process object, and that call never returns, so main_loop() is never reached.
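For completeness, a minimal corrected sketch of the example (the daemon process prints in the background while the main loop reads input):

import multiprocessing, time

def test():
    while True:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    mProcess = multiprocessing.Process(target=test)  # pass the function itself, do not call it
    mProcess.daemon = True
    mProcess.start()
    while True:
        answer = input('What kind of food do you like? ')
        print(answer)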
You can use lock-based synchronization to get better control over your program's flow. Curiously, the input function raises an EOFError here (multiprocessing closes the child process's stdin, so the child cannot read from the console), but I'm sure you can find a workaround.
import multiprocessing, sys, time

def main_loop(l):
    time.sleep(4)
    l.acquire()
    # raises an EOFError (the child's stdin is closed, see above)
    #_input = input('What kind of food do you like?')
    print(" raw input at 4 sec ")
    l.release()
    return

def test(l):
    i = 0
    while i < 8:
        time.sleep(1)
        l.acquire()
        print('this should run in the background : ', i + 1, 'sec')
        l.release()
        i += 1
    return

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    #try:
    print('hello!')
    mProcess = multiprocessing.Process(target=test, args=(lock,))
    mProcess.start()
    inputProcess = multiprocessing.Process(target=main_loop, args=(lock,))
    inputProcess.start()
    #except:
    #    sys.exit(0)
I have a Python script which launches a series of subprocesses. They need to run "forever", but they occasionally die or get killed. When this happens, I need to restart each dead process using the same arguments as the one which died.
This is a very simplified version:
[edit: this is the less simplified version, which includes "restart" code]
import multiprocessing
import time
import random

def printNumber(number):
    print("starting :", number)
    while random.randint(0, 5) > 0:
        print(number)
        time.sleep(2)

if __name__ == '__main__':
    children = []  # list of running child processes
    args = {}      # maps pid -> the argument the child was started with
    for processNumber in range(10, 15):
        p = multiprocessing.Process(
            target=printNumber,
            args=(processNumber,)
        )
        children.append(p)
        p.start()
        args[p.pid] = processNumber

    while True:
        time.sleep(1)
        for n, p in enumerate(children):
            if not p.is_alive():
                # get the parameters the dead child was started with
                pidArgs = args[p.pid]
                del args[p.pid]
                print("n,args,p: ", n, pidArgs, p)
                children.pop(n)
                # start a new process with the same args
                p = multiprocessing.Process(
                    target=printNumber,
                    args=(pidArgs,)
                )
                children.append(p)
                p.start()
                args[p.pid] = pidArgs
I have updated the example to illustrate how I want the processes to be restarted if one crashes, is killed, etc., keeping track of which pid was started with which args.
Is this the "best" way to do this, or is there a more "pythonic" way of doing it?
I think I would create a separate thread for each process and use a ProcessPoolExecutor. Executors have a useful method, submit, which returns a Future. You can wait on each Future and re-launch the executor when the Future is done. The arguments to the function are tracked as instance attributes, so restarting is just a simple loop.
import threading
from concurrent.futures import ProcessPoolExecutor
import time
import random
import traceback

def printNumber(number):
    print("starting :", number)
    while random.randint(0, 5) > 0:
        print(number)
        time.sleep(2)

class KeepRunning(threading.Thread):
    def __init__(self, func, *args, **kwds):
        self.func = func
        self.args = args
        self.kwds = kwds
        super().__init__()

    def run(self):
        while True:
            with ProcessPoolExecutor(max_workers=1) as pool:
                future = pool.submit(self.func, *self.args, **self.kwds)
                try:
                    future.result()
                except Exception:
                    traceback.print_exc()

if __name__ == '__main__':
    for process_number in range(10, 15):
        keep = KeepRunning(printNumber, process_number)
        keep.start()

    while True:
        time.sleep(1)
At the end of the program there is a loop to keep the main thread running. Without it, the program would attempt to exit while your processes are still running.
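As a small aside, since KeepRunning subclasses Thread, joining the threads is an equivalent way to keep the main thread alive (a sketch, not from the original answer):

keepers = [KeepRunning(printNumber, n) for n in range(10, 15)]
for keep in keepers:
    keep.start()
for keep in keepers:
    keep.join()  # blocks indefinitely, since run() loops forever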
For the example you provided, I would just remove the exit condition from the while loop and change it to True.
As you said, though, the actual code is more complicated (why didn't you post that?). So if the process gets terminated by, let's say, an exception, just put the code inside a try/except block. You can then put said block in an infinite loop, as sketched below.
I hope this is what you are looking for; it seems to be the right way to do it given the goal and the information you provided.
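A minimal sketch of that idea, where do_work is a hypothetical stand-in for your actual process body:

import traceback

def run_forever():
    while True:  # restart the body whenever it fails
        try:
            do_work()  # hypothetical placeholder for the real code
        except Exception:
            traceback.print_exc()  # log the failure, then loop and retry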
Instead of just starting the processes and forgetting about them, you can save each process together with its arguments, and keep a loop that checks whether they are still alive.
For example:
if __name__ == '__main__':
    process_list = []
    for processNumber in range(5):
        args = (processNumber,)
        process = multiprocessing.Process(
            target=printNumber,
            args=args
        )
        process_list.append((process, args))
        process.start()

    while True:
        for running_process, process_args in list(process_list):
            if not running_process.is_alive():
                new_process = multiprocessing.Process(target=printNumber, args=process_args)
                process_list.remove((running_process, process_args))  # remove the terminated process
                process_list.append((new_process, process_args))
                new_process.start()
I must say that I'm not sure the best way to do this is in Python; you may want to look at scheduler services like Jenkins or something similar.
In this script I was looking to launch a given program and monitor it for as long as the program exists. Thus, I reached the point where I got to use the threading module's Timer method for controlling a loop that writes to a file and prints a specific stat of the launched process (in this case, mspaint) to the console.
The problem arises when I hit CTRL + C in the console or when I close mspaint: the script captures either of the two events only after the time defined for the interval has completely run out. These events make the script stop.
For example, if a 20 second interval is set, once the script has started, if at second 5 I either hit CTRL + C or close mspaint, the script will stop only after the remaining 15 seconds have passed.
I would like the script to stop right away when I either hit CTRL + C or close mspaint (or any other process launched through this script).
The script can be used with the following command, according to the example:
python.exe mon_tool.py -p "C:\Windows\System32\mspaint.exe" -i 20
I'd really appreciate it if you could come up with a working example.
I used Python 3.10.4 and psutil 5.9.0.
This is the code:
# mon_tool.py
import psutil, sys, os, argparse
from subprocess import Popen
from threading import Timer

debug = False

def parse_args(args):
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--path", type=str, required=True)
    parser.add_argument("-i", "--interval", type=float, required=True)
    return parser.parse_args(args)

def exceptionHandler(exception_type, exception, traceback, debug_hook=sys.excepthook):
    '''Print user friendly error messages normally, full traceback if DEBUG on.
    Adapted from http://stackoverflow.com/questions/27674602/hide-traceback-unless-a-debug-flag-is-set
    '''
    if debug:
        print('\n*** Error:')
        debug_hook(exception_type, exception, traceback)
    else:
        print("%s: %s" % (exception_type.__name__, exception))

sys.excepthook = exceptionHandler

def validate(data):
    try:
        if data.interval < 0:
            raise ValueError
    except ValueError:
        raise ValueError(f"Time has a negative value: {data.interval}. Please use a positive value")

def main():
    args = parse_args(sys.argv[1:])
    validate(args)

    # creates the "Process monitor data" folder in the "Documents" folder
    # of the current Windows profile
    default_path: str = f"{os.path.expanduser('~')}\\Documents\\Process monitor data"
    if not os.path.exists(default_path):
        os.makedirs(default_path)
    abs_path: str = f'{default_path}\\data_test.txt'
    print("data_test.txt can be found in: " + default_path)

    # launches the process given by the path argument, and
    # checks that it was indeed launched
    p: Popen[bytes] = Popen(args.path)
    PID = p.pid
    isProcess: bool = True
    while isProcess:
        for proc in psutil.process_iter():
            if proc.pid == PID:
                isProcess = False

    process_stats = psutil.Process(PID)

    # creates data_test.txt and erases its content
    with open(abs_path, 'w', newline='', encoding='utf-8') as testfile:
        testfile.write("")

    # loop that writes the handle count to data_test.txt and
    # prints it to the console
    def process_monitor_loop():
        with open(abs_path, 'a', newline='', encoding='utf-8') as testfile:
            testfile.write(f"{process_stats.num_handles()}\n")
        print(process_stats.num_handles())
        Timer(args.interval, process_monitor_loop).start()

    process_monitor_loop()

if __name__ == '__main__':
    main()
Thank you!
I think you could use python-worker (link) as an alternative:
import time
from datetime import datetime
from worker import worker, enableKeyboardInterrupt

# make sure to execute this before running the worker to enable keyboard interrupt
enableKeyboardInterrupt()

# your codes
...

# block lines with periodic check
def block_next_lines(duration):
    t0 = time.time()
    while time.time() - t0 <= duration:
        time.sleep(0.05)  # to reduce resource consumption

def main():
    # your codes
    ...

    @worker(keyboard_interrupt=True)
    def process_monitor_loop():
        while True:
            print("hii", datetime.now().isoformat())
            block_next_lines(3)

    return process_monitor_loop()

if __name__ == '__main__':
    main_worker = main()
    main_worker.wait()
Here your process_monitor_loop will be able to stop even if the interval (20 seconds in your example) hasn't fully elapsed.
You can try registering a signal handler for SIGINT; that way, whenever the user presses Ctrl+C, you can run a custom handler to clean up all of your dependencies, like the interval timer, and exit gracefully.
See this for a simple implementation.
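A minimal sketch of that approach (the handler body is a placeholder for whatever cleanup your script needs):

import signal, sys

def handle_sigint(sig, frame):
    # cancel timers / close files here before exiting
    print('\nStopping...')
    sys.exit(0)

signal.signal(signal.SIGINT, handle_sigint)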
This is the solution for the second part of the problem, which checks if the launched process still exists. If it doesn't exist anymore, the script stops.
This solution comes on top of the solution for the first part of the problem, provided above by @danangjoyoo, which deals with stopping the script when CTRL + C is used.
Thank you very much once again, @danangjoyoo! :)
This is the code for the second part of the problem:
import time, psutil, sys, os
from datetime import datetime
from worker import worker, enableKeyboardInterrupt, abort_all_thread, ThreadWorkerManager
from threading import Timer

# make sure to execute this before running the worker to enable keyboard interrupt
enableKeyboardInterrupt()

# block lines with periodic check
def block_next_lines(duration):
    t0 = time.time()
    while time.time() - t0 <= duration:
        time.sleep(0.05)  # to reduce resource consumption

def main():
    # launches mspaint, gets its PID and checks that it was indeed launched
    path = r"C:\Windows\System32\mspaint.exe"
    p = psutil.Popen(path)
    PID = p.pid
    isProcess: bool = True
    while isProcess:
        for proc in psutil.process_iter():
            if proc.pid == PID:
                isProcess = False

    interval = 5

    global counter
    counter = 0

    # allows sub_proccess to run only once
    global run_sub_process_once
    run_sub_process_once = 1

    @worker(keyboard_interrupt=True)
    def process_monitor_loop():
        while True:
            print("hii", datetime.now().isoformat())

            def sub_proccess():
                '''
                Checks every second if the launched process still exists.
                If the process doesn't exist anymore, the script will be stopped.
                '''
                print("Process online:", psutil.pid_exists(PID))
                t = Timer(1, sub_proccess)
                t.start()

                global counter
                counter += 1
                print(counter)

                # Checks if the worker thread is alive.
                # If it is not alive, it cancels the timer spawned by sub_proccess,
                # hence stopping the script.
                for _, key in enumerate(ThreadWorkerManager.allWorkers):
                    w = ThreadWorkerManager.allWorkers[key]
                    if not w.is_alive:
                        t.cancel()

                if not psutil.pid_exists(PID):
                    abort_all_thread()
                    t.cancel()

            global run_sub_process_once
            if run_sub_process_once:
                run_sub_process_once = 0
                sub_proccess()

            block_next_lines(interval)

    return process_monitor_loop()

if __name__ == '__main__':
    main_worker = main()
    main_worker.wait()
Also, I have to note that @danangjoyoo's solution comes as an alternative to signal.pause() for Windows, and it only deals with the CTRL + C part of the problem. signal.pause() works only on Unix systems. This is how it was supposed to be used, in my case, if this were a Unix system:
import signal, sys
from threading import Timer

def main():
    def signal_handler(sig, frame):
        print('\nYou pressed Ctrl+C!')
        sys.exit(0)

    signal.signal(signal.SIGINT, signal_handler)
    print('Press Ctrl+C')

    def process_monitor_loop():
        try:
            print("hi")
        except KeyboardInterrupt:
            signal.pause()
        Timer(10, process_monitor_loop).start()

    process_monitor_loop()

if __name__ == '__main__':
    main()
The code above is based on this.
I'm not too familiar with threading, and probably not using it correctly, but I have a script that runs a speed test a few times and prints the average. I'm trying to use threading to call a function which displays something while the tests are running.
Everything works fine unless I try to put input() at the end of the script to keep the console window open. It causes the thread to run continuously.
I'm looking for some direction on terminating a thread correctly. I'm also open to any better ways to do this.
import speedtest, time, sys, datetime
from threading import Thread

s = speedtest.Speedtest()
best = s.get_best_server()

def downloadTest(tries):
    downloadList = []
    for x in range(tries):
        downSpeed = (s.download() / 1000000)
        downloadList.append(downSpeed)
    results_dict = s.results.dict()
    global download_avg, isp
    download_avg = (sum(downloadList) / len(downloadList))
    download_avg = round(download_avg, 1)
    isp = (results_dict['client']['isp'])
    print("")
    print(isp)
    print(download_avg)

def progress():
    while True:
        print('~ ', end='', flush=True)
        time.sleep(1)

def start():
    now = (datetime.datetime.today().replace(microsecond=0))
    print(now)
    d = Thread(target=downloadTest, args=(3,))
    d.start()
    d1 = Thread(target=progress)
    d1.daemon = True
    d1.start()
    d.join()

start()
input("Complete...")  # this causes the progress thread to keep running
There is no reason for your thread to exit, which is why it does not terminate. A daemon thread normally terminates when your program (all other threads) terminates, which does not happen here, as the last input() does not quit.
In general, it is a good idea to make a thread stop by itself rather than forcefully killing it, so you would typically stop this kind of thread with a flag. Try changing the segment at the end to:
killflag = False
start()
killflag = True
input("Complete...")

and update the progress method to:

def progress():
    while not killflag:
        print('~ ', end='', flush=True)
        time.sleep(1)
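A slightly more idiomatic variant of the same idea (a sketch, not part of the original answer) uses threading.Event instead of a module-level boolean:

import threading, time

kill_event = threading.Event()

def progress():
    while not kill_event.is_set():
        print('~ ', end='', flush=True)
        time.sleep(1)

# ...and after start() returns, call kill_event.set() instead of killflag = True.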
Often there is a need for the program to wait for a function to complete its work. Sometimes it is the opposite: there is no need for the main program to wait.
I've put together a simple example. There are four buttons. Clicking each will call the same calculate() function. The only difference is the way the function is called.
The "Call Directly" button calls the calculate() function directly. Since there is a 'Function End' printout, it is evident that the program waits for the calculate function to complete its job.
"Call via Threading" calls the same function, this time using the threading mechanism. Since the program prints the ': Function End' message immediately after the button is pressed, I can conclude that the program doesn't wait for the calculate() function to complete. How do I override this behavior? How do I make the program wait until the calculate() function is finished?
The "Call via Multiprocessing" button utilizes multiprocessing to call the calculate() function.
Just like with threading, multiprocessing doesn't wait for the function to complete. What statement do we have to put in to make it wait?
The "Call via Subprocess" button doesn't do anything, since I didn't figure out a way to hook subprocess up to running an internal script function or method. It would be interesting to see how to do it...
Example:
import sys
from PyQt4 import QtCore, QtGui

app = QtGui.QApplication(sys.argv)

def calculate(listArg=None):
    print '\n\t Starting calculation...'
    m = 0
    for i in range(50000000):
        m += i
    print '\t ...calculation completed\n'

class Dialog_01(QtGui.QMainWindow):
    def __init__(self):
        super(Dialog_01, self).__init__()
        myQWidget = QtGui.QWidget()
        myBoxLayout = QtGui.QVBoxLayout()

        directCall_button = QtGui.QPushButton("Call Directly")
        directCall_button.clicked.connect(self.callDirectly)
        myBoxLayout.addWidget(directCall_button)

        Button_01 = QtGui.QPushButton("Call via Threading")
        Button_01.clicked.connect(self.callUsingThreads)
        myBoxLayout.addWidget(Button_01)

        Button_02 = QtGui.QPushButton("Call via Multiprocessing")
        Button_02.clicked.connect(self.callUsingMultiprocessing)
        myBoxLayout.addWidget(Button_02)

        Button_03 = QtGui.QPushButton("Call via Subprocess")
        Button_03.clicked.connect(self.callUsingSubprocess)
        myBoxLayout.addWidget(Button_03)

        myQWidget.setLayout(myBoxLayout)
        self.setCentralWidget(myQWidget)
        self.setWindowTitle('Dialog 01')

    def callUsingThreads(self):
        print '------------------------------- callUsingThreads() ----------------------------------'
        import threading
        self.myEvent = threading.Event()
        self.c_thread = threading.Thread(target=calculate)
        self.c_thread.start()
        print "\n\t\t : Function End"

    def callUsingMultiprocessing(self):
        print '------------------------------- callUsingMultiprocessing() ----------------------------------'
        from multiprocessing import Pool
        pool = Pool(processes=3)
        try: pool.map_async(calculate, ['some'])
        except Exception, e: print e
        print "\n\t\t : Function End"

    def callDirectly(self):
        print '------------------------------- callDirectly() ----------------------------------'
        calculate()
        print "\n\t\t : Function End"

    def callUsingSubprocess(self):
        print '------------------------------- callUsingSubprocess() ----------------------------------'
        import subprocess
        print '-missing code solution'
        print "\n\t\t : Function End"

if __name__ == '__main__':
    dialog_1 = Dialog_01()
    dialog_1.show()
    dialog_1.resize(480, 320)
    sys.exit(app.exec_())
Use a queue: each thread, when completed, puts its result on the queue; then you just need to read the appropriate number of results and ignore the remainder:
#!python3.3
import queue  # For Python 2.x use 'import Queue as queue'
import threading, time, random

def func(id, result_queue):
    print("Thread", id)
    time.sleep(random.random() * 5)
    result_queue.put((id, 'done'))

def main():
    q = queue.Queue()
    threads = [threading.Thread(target=func, args=(i, q)) for i in range(5)]
    for th in threads:
        th.daemon = True
        th.start()
    result1 = q.get()
    result2 = q.get()
    print("Second result: {}".format(result2))

if __name__ == '__main__':
    main()
Documentation for Queue.get() (with no arguments it is equivalent to Queue.get(True, None)):
Queue.get([block[, timeout]])
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
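For instance, a small sketch of the timeout behaviour described above:

import queue

q = queue.Queue()
try:
    item = q.get(timeout=2)  # block for at most 2 seconds
except queue.Empty:
    print('no result arrived within 2 seconds')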
See also: How to wait until only the first thread is finished in Python.
You can use the .join() method too; see: What is the use of join() in python threading.
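A tiny sketch of the join() approach, which blocks the calling thread until the target thread finishes:

import threading, time

def calculate():
    time.sleep(2)  # stand-in for the real work

t = threading.Thread(target=calculate)
t.start()
t.join()  # returns only once calculate() has finished
print('Function End')  # guaranteed to run after the work is done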
I find that using the Pool class from the "pool" submodule within multiprocessing works amazingly well for executing multiple processes at once within a Python script.
See the section Using a pool of workers.
Look carefully at "# launching multiple evaluations asynchronously may use more processes" in the example. Once you understand what those lines are doing, the following example I constructed will make a lot of sense.
import numpy as np
from multiprocessing import Pool

def desired_function(option, processes, data, etc...):
    # your code will go here. option allows you to make choices within your script
    # to execute desired sections of code for each pool or subprocess.
    return result_array  # "for example"

result_array = np.zeros("some shape")  # This is normally populated by 1 loop, let's try 4.
processes = 4
pool = Pool(processes=processes)
args = (processes, data, etc...)  # Arguments to be passed into the desired function.
multiple_results = []
for i in range(processes):  # Queues each pool w/ option (1-4 in this case).
    multiple_results.append(pool.apply_async(desired_function, (i + 1,) + args))
results = np.array([res.get() for res in multiple_results])  # Retrieves results after
                                                             # every pool is finished!
for i in range(processes):
    result_array = result_array + results[i]  # Combines all datasets!
The code will basically run the desired function for a set number of processes. You will have to carefully make sure your function can distinguish between the processes (hence why I added the variable "option"). Additionally, it doesn't have to be an array that is being populated at the end, but for my example that's how I used it. Hope this simplifies things or helps you better understand the power of multiprocessing in Python!
I have a function foo that only stops once a condition has been met. While foo is running, I need to ask for user input the whole time (it keeps asking for user input). I want them to run separately without interfering with each other.
In my example below, foo keeps printing 'Hello' and getUserInput keeps looking for user input. I want foo to keep printing hello even if I do not enter anything for user input. It should keep asking for input as long as the user does not enter the letter 'e'. My attempt is below:
import threading
from time import sleep

class test:
    def __init__(self):
        self.running = True

    def foo(self):
        while(self.running):
            print 'Hello\n'
            sleep(2)

    def getUserInput(self):
        x = ''
        while(x != 'e'):
            x = raw_input('Enter value: ')
        self.running = False

    def go(self):
        th1 = threading.Thread(target=self.foo)
        th2 = threading.Thread(target=self.getUserInput)
        th1.start()
        th2.start()

t = test()
t.go()
My code prints the first hello and asks for input, but nothing happens after that. What am I doing wrong? Thanks in advance for your help.
Update: The opener was running his code on Windows in IDLE, which behaves differently regarding I/O than a shell or the Windows command line. His code works on the Windows command line.
In principle, your code works for me. I am running Python 2.6.5.
Several comments here:
1) In your case it would be fine to only have two threads: the main thread and another one. However, it will also work with three. It's just that your main thread does nothing other than wait for the other threads to finish.
2) You should explicitly join() all threads you spawn. You do this in the main thread before terminating it. Keep a record of the threads you spawn (e.g. in a list threads) and then join them at the end of your program (e.g. for t in threads: t.join()).
3) You share the variable self.running between threads. That is fine in this case, as one thread only reads it and the other only writes it. In general, you need to be very careful with shared variables and acquire a lock before changing one (see the small sketch after this list).
4) You should catch the KeyboardInterrupt exception in the main thread and find a way to communicate to your other threads that they should terminate :)
5) Use lowercase method names, so instead of getUserInput call it get_user_input. Use uppercase class names and inherit from object: class Test(object):
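Regarding point 3, a minimal sketch of guarding a shared variable with a lock (the counter is hypothetical, not part of the program below):

import threading

lock = threading.Lock()
shared_counter = 0  # hypothetical shared state

def increment():
    global shared_counter
    with lock:  # only one thread may modify the variable at a time
        shared_counter += 1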
This is a running example:
import threading
from time import sleep

def main():
    t = Test()
    t.go()
    try:
        join_threads(t.threads)
    except KeyboardInterrupt:
        print "\nKeyboardInterrupt caught."
        print "Terminate main thread."
        print "If only daemonic threads are left, terminate whole program."

class Test(object):
    def __init__(self):
        self.running = True
        self.threads = []

    def foo(self):
        while(self.running):
            print '\nHello\n'
            sleep(2)

    def get_user_input(self):
        while True:
            x = raw_input("Enter 'e' for exit: ")
            if x.lower() == 'e':
                self.running = False
                break

    def go(self):
        t1 = threading.Thread(target=self.foo)
        t2 = threading.Thread(target=self.get_user_input)
        # Make threads daemonic, i.e. terminate them when main thread
        # terminates. From: http://stackoverflow.com/a/3788243/145400
        t1.daemon = True
        t2.daemon = True
        t1.start()
        t2.start()
        self.threads.append(t1)
        self.threads.append(t2)

def join_threads(threads):
    """
    Join threads in interruptable fashion.
    From http://stackoverflow.com/a/9790882/145400
    """
    for t in threads:
        while t.isAlive():
            t.join(5)

if __name__ == "__main__":
    main()
When typing e or E, the program ends after a short delay (as you intended). When pressing Ctrl+C, it terminates immediately. Making a program that uses threading responsive to exceptions is a bit trickier than one might expect. I have included important references in the source above.
This is what it looks like at runtime:
$ python supertest.py
Hello
Enter 'e' for exit:
Hello
Hello
Hello
e
$