Python: how to terminate a function of a script from another script

I have a script main.py that calls a function fun from a library.
I want to exit only from fun and let main.py continue, using another script kill_fun.py for that purpose.
I tried different bash commands (via os.system) with ps, but the PID they give me refers only to main.py.
Example:
-main.py
    from lib import fun

    if __name__ == '__main__':
        try:
            fun()
        except:
            do_something
        do_something_else
-lib.py
    def fun():
        do_something_of_long_time
-kill_fun.py
    if __name__ == '__main__':
        kill_only_fun

You can do so by running fun in a separate process.
from time import sleep
from multiprocessing import Process
from lib import fun

def my_fun():
    tmp = 0
    for i in range(1000000):
        sleep(1)
        tmp += 1
        print('fun')
    return tmp

def should_i_kill_fun():
    try:
        with open('./kill.txt', 'r') as f:
            read = f.readline().strip()
            #print(read)
            return read == 'Y'
    except Exception as e:
        return False

if __name__ == '__main__':
    try:
        p = Process(target=my_fun, args=())
        p.start()
        while p.is_alive():
            sleep(1)
            if should_i_kill_fun():
                p.terminate()
    except Exception as e:
        print("do sth", e)
    print("do sth other thing")
To kill fun, simply run echo 'Y' > kill.txt,
or write a small Python script that writes the file for you.
Explanation
The idea is to start fun in a different process; p is a process handle you can control. A loop then checks the file kill.txt for the kill command 'Y'. If it is there, p.terminate() is called, the child process is killed, and the main script continues with whatever comes next.
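For completeness, kill_fun.py can be as small as writing that flag. A minimal sketch, assuming the same './kill.txt' path that should_i_kill_fun() reads:

```python
# kill_fun.py -- a minimal sketch: write the 'Y' flag that
# the monitoring loop in the answer above looks for.
def request_kill(path='./kill.txt'):
    # overwrite the file so the first line is exactly 'Y'
    with open(path, 'w') as f:
        f.write('Y\n')

if __name__ == '__main__':
    request_kill()
    print('kill flag written')
```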
Hope this helps.

Related

Starting a python script from another before it crashes

I'm trying to make some project code I have written more resilient to crashes, but the circumstances of my previous crashes have all been different.
Rather than try to account for every single one, I thought I'd get my code either to restart itself, or to execute a copy of itself in its place and then close itself down gracefully. Since the replacement is coded identically, this would in essence be the same as restarting from the beginning. The desired result is that while the error conditions are present, my code stays in a swap-out or restart loop until it can execute normally again... until the next time it faces a similar situation.
To experiment with, I've written two programs. I'm hoping from these examples someone will understand what I am trying to achieve. I want the first script to execute, then start the execute process for the second (in a new terminal) before closing itself down gracefully.
Is this even possible?
Thanks in advance.
first.py
#!/usr/bin/env python
#!/bin/bash
#first.py
import time
import os
import sys
from subprocess import run
import subprocess

thisfile = "first"
#thisfile = "second"

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    #os.system("first.py")
    #exec(open("first.py").read())
    #run("python "+"first.py", check=False)
    #import first
    #os.system('python first.py')
    #subprocess.call(" python first.py 1", shell=True)
    os.execv("first.py", sys.argv)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    #os.system("second.py")
    #exec(open("second.py").read())
    #run("python "+"second.py", check=False)
    #import second
    #os.system('python second.py')
    #subprocess.call(" python second.py 1", shell=True)
    os.execv("second.py", sys.argv)
    print("I'm leaving first now")
    break
time.sleep(1)
sys.exit("Quitting")
second.py (basically a copy of first.py)
#!/usr/bin/env python
#!/bin/bash
#second.py
import time
import os
import sys
from subprocess import run
import subprocess

#thisfile = "first"
thisfile = "second"

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    #os.system("first.py")
    #exec(open("first.py").read())
    #run("python "+"first.py", check=False)
    #import first
    #os.system('python first.py')
    #subprocess.call(" python first.py 1", shell=True)
    os.execv("first.py", sys.argv)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    #os.system("second.py")
    #exec(open("second.py").read())
    #run("python "+"second.py", check=False)
    #import second
    #os.system('python second.py')
    #subprocess.call(" python second.py 1", shell=True)
    os.execv("second.py", sys.argv)
    print("I'm leaving first now")
    break
time.sleep(1)
sys.exit("Quitting")
I have tried quite a few solutions as can be seen with my hashed out lines of code. Nothing so far though has given me the result I am after unfortunately.
EDIT: This is the part of the actual code I think I am having problems with. This is where I am attempting to publish to my MQTT broker.
try:
    client.connect(broker, port, 10) #connect to broker
    time.sleep(1)
except:
    print("Cannot connect")
    sys.exit("Quitting")
Instead of exiting with the "Quitting" part, will it keep my code alive if I route it to stay within a retry loop until it successfully connects to the broker again, and then continue with the rest of the script? Or is this wishful thinking?
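A retry loop like the one you describe is a common pattern. Here is a hedged sketch with a stand-in connect callable; your client, broker, and port are assumed to exist elsewhere in your code:

```python
import time

def connect_with_retry(connect, retries=5, delay=1.0):
    # call connect() until it succeeds or retries run out;
    # returns True on success, False if every attempt failed
    for attempt in range(1, retries + 1):
        try:
            connect()
            return True
        except Exception as e:
            print(f"attempt {attempt} failed: {e}")
            time.sleep(delay)
    return False
```

In your code this might be connect_with_retry(lambda: client.connect(broker, port, 10)), looping again instead of calling sys.exit.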
You can do this in many ways. Your subprocess.call() option would work, but it depends on the details of the implementation. Perhaps the easiest is to use multiprocessing to run the program in a child process while the parent simply restarts it as necessary.
import multiprocessing as mp
import time

def do_the_things(arg1, arg2):
    print("doing the things")
    time.sleep(2) # for test
    raise RuntimeError("Virgin Media dun me wrong")

def launch_and_monitor():
    while True:
        print("start the things")
        proc = mp.Process(target=do_the_things, args=(0, 1))
        proc.start()
        proc.join()  # Process has no .wait(); .join() waits for it to finish
        print("things went awry")
        time.sleep(2) # a moment before restart hoping badness resolves

if __name__ == "__main__":
    launch_and_monitor()
Note: The child process uses the same terminal as the parent. Running separate terminals is quite a bit more difficult. It would depend, for instance, on how you've setup to have a terminal attach to the pi.
If you want to catch and process errors in the parent process, you could write some extra code to catch the error, pickle it, and use a queue to pass it back to the parent. Multiprocessing pools already do that, so you could just have a pool with 1 process and a single iterable to consume.
with multiprocessing.Pool(1) as pool:
    while True:
        try:
            # starmap unpacks the (0, 1) tuple into the two arguments
            result = pool.starmap(do_the_things, [(0, 1)])
        except Exception as e:
            print("caught", e)
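If all you need to know is whether the child died abnormally, the exitcode attribute after join() is enough. A small self-contained sketch (not part of the answer above):

```python
import multiprocessing as mp

def crashy():
    # an uncaught exception gives the child a non-zero exitcode
    raise RuntimeError("boom")

def run_once(target):
    # run the target in a child process and report how it ended
    p = mp.Process(target=target)
    p.start()
    p.join()
    return p.exitcode

if __name__ == "__main__":
    print("exitcode:", run_once(crashy))
```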
Ok, I got it!
For anyone else interested in trying to do what my original question was:
To close down a script when an error occurs, and then open either a new script or a copy of the original (so as to keep the same functionality) in a new terminal window, here is the answer, using my original code samples as an example (first.py and second.py). Both scripts run exactly the same code, other than each defining its own name; that name allocation determines which alternate file to open in its place.
first.py
import time
import subprocess

thisfile = "first"
#thisfile = "second"

if thisfile == "second":
    restartcommand = 'python3 /home/mypi/myprograms/first.py'
else:
    restartcommand = 'python3 /home/mypi/myprograms/second.py'

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving first now")
    break
time.sleep(1)
quit()
second.py
import time
import subprocess

#thisfile = "first"
thisfile = "second"

if thisfile == "second":
    restartcommand = 'python3 /home/mypi/myprograms/first.py'
else:
    restartcommand = 'python3 /home/mypi/myprograms/second.py'

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving first now")
    break
time.sleep(1)
quit()
The result of running either one is that the program runs, opens the other file, starts it, and then closes itself down; this continues back and forth until you close the running file before it gets a chance to open the other one.
Try it! it's fun!
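lxterminal is specific to the Pi's desktop; if a separate window isn't essential, a more portable sketch relaunches a script with the same interpreter (the paths you pass in are your own):

```python
import subprocess
import sys

def respawn(script_path):
    # sys.executable is the interpreter running this script;
    # not waiting on the child lets it outlive the parent
    return subprocess.Popen([sys.executable, script_path])
```

In the scheme above, first.py would call respawn('/home/mypi/myprograms/second.py') and then quit().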

How to stop the execution of an imported python script without exiting the python altogether?

In the example below, when I run y_file.py, I need 5 printed and Hello not printed.
How to stop the execution of an imported python script x_file.py without exiting the python altogether? sys.exit() seems to exit python altogether.
x_file.py
import sys

x = 5
if __name__ != '__main__':
    pass
    # stop executing x_file.py, but do not exit python
    # sys.exit() # this line exits python
print("Hello")
y_file.py
import x_file

print(x_file.x)
As jvx8ss suggested, you can fix this by putting the print inside a if __name__ == "__main__": conditional. Note the equality "==" instead of inequality "!=".
Final code:
import sys

x = 5
if __name__ == "__main__":
    # stop executing x_file.py, but do not exit python
    # sys.exit() # this line exits python
    print("Hello")
You should place the code you don't want to run on import inside an if __name__ == "__main__": block. However, there is an extremely bad way to do what you want that I can think of, using Exception:
# x_file.py
x = 5
if __name__ != '__main__':
    raise Exception(x)
print("Hello")

# y_file.py
try:
    import x_file
except Exception as e:
    print(e.args[0])
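A quick way to convince yourself the __main__ guard works is to import the module programmatically and check what actually runs. A self-contained sketch (the file is written to a temp directory just for the demonstration):

```python
# Write a guarded x_file.py to a temp dir, import it, and confirm
# that only the module-level assignment runs (no "Hello" on import).
import contextlib
import importlib.util
import io
import os
import tempfile

source = (
    "x = 5\n"
    "if __name__ == '__main__':\n"
    "    print('Hello')\n"
)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "x_file.py")
    with open(path, "w") as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location("x_file", path)
    mod = importlib.util.module_from_spec(spec)
    out = io.StringIO()
    with contextlib.redirect_stdout(out):
        spec.loader.exec_module(mod)  # simulates `import x_file`
    print("x =", mod.x)
    print("printed on import:", repr(out.getvalue()))
```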

How could I stop the script without having to wait for the time set for interval to pass?

In this script I was looking to launch a given program and monitor it for as long as the program exists. I reached the point where I used the threading module's Timer class to control a loop that writes a specific stat of the launched process (for this case, mspaint) to a file and prints it to the console.
The problem arises when I hit CTRL + C in the console or close mspaint: the script captures either of the two events only after the time defined for the interval has completely run out. These events make the script stop.
For example, if a 20-second interval is set, and at second 5 I either hit CTRL + C or close mspaint, the script will stop only after the remaining 15 seconds have passed.
I would like for the script to stop right away when I either hit CTRL + C or close mspaint (or any other process launched through this script).
The script can be used with the following command, according to the example:
python.exe mon_tool.py -p "C:\Windows\System32\mspaint.exe" -i 20
I'd really appreciate if you could come up with a working example.
I used Python 3.10.4 and psutil 5.9.0.
This is the code:
# mon_tool.py
import psutil, sys, os, argparse
from subprocess import Popen
from threading import Timer

debug = False

def parse_args(args):
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--path", type=str, required=True)
    parser.add_argument("-i", "--interval", type=float, required=True)
    return parser.parse_args(args)

def exceptionHandler(exception_type, exception, traceback, debug_hook=sys.excepthook):
    '''Print user friendly error messages normally, full traceback if DEBUG on.
    Adapted from http://stackoverflow.com/questions/27674602/hide-traceback-unless-a-debug-flag-is-set
    '''
    if debug:
        print('\n*** Error:')
        debug_hook(exception_type, exception, traceback)
    else:
        print("%s: %s" % (exception_type.__name__, exception))

sys.excepthook = exceptionHandler

def validate(data):
    try:
        if data.interval < 0:
            raise ValueError
    except ValueError:
        raise ValueError(f"Time has a negative value: {data.interval}. Please use a positive value")

def main():
    args = parse_args(sys.argv[1:])
    validate(args)
    # creates the "Process monitor data" folder in the "Documents" folder
    # of the current Windows profile
    default_path: str = f"{os.path.expanduser('~')}\\Documents\\Process monitor data"
    if not os.path.exists(default_path):
        os.makedirs(default_path)
    abs_path: str = f'{default_path}\\data_test.txt'
    print("data_test.txt can be found in: " + default_path)
    # launches the provided process for the path argument, and
    # it checks if the process was indeed launched
    p: Popen[bytes] = Popen(args.path)
    PID = p.pid
    isProcess: bool = True
    while isProcess:
        for proc in psutil.process_iter():
            if proc.pid == PID:
                isProcess = False
    process_stats = psutil.Process(PID)
    # creates the data_test.txt and it erases its content
    with open(abs_path, 'w', newline='', encoding='utf-8') as testfile:
        testfile.write("")
    # loop for writing the handles count to data_test.txt, and
    # for printing out the handles count to the console
    def process_monitor_loop():
        with open(abs_path, 'a', newline='', encoding='utf-8') as testfile:
            testfile.write(f"{process_stats.num_handles()}\n")
        print(process_stats.num_handles())
        Timer(args.interval, process_monitor_loop).start()
    process_monitor_loop()

if __name__ == '__main__':
    main()
Thank you!
I think you could use python-worker (link) as an alternative:
import time
from datetime import datetime
from worker import worker, enableKeyboardInterrupt

# make sure to execute this before running the worker to enable keyboard interrupt
enableKeyboardInterrupt()

# your codes
...

# block lines with periodic check
def block_next_lines(duration):
    t0 = time.time()
    while time.time() - t0 <= duration:
        time.sleep(0.05) # to reduce resource consumption

def main():
    # your codes
    ...

    @worker(keyboard_interrupt=True)
    def process_monitor_loop():
        while True:
            print("hii", datetime.now().isoformat())
            block_next_lines(3)

    return process_monitor_loop()

if __name__ == '__main__':
    main_worker = main()
    main_worker.wait()
Here your process_monitor_loop will be able to stop even if the full interval (e.g. 20 seconds) hasn't elapsed yet.
You can try registering a signal handler for SIGINT, that way whenever the user presses Ctrl+C you can have a custom handler to clean all of your dependencies, like the interval, and exit gracefully.
See this for a simple implementation.
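A minimal handler along those lines might look like this; what you clean up inside it depends on your script:

```python
import signal
import sys

def handle_sigint(sig, frame):
    # cancel timers / close files here before leaving
    print("interrupted, cleaning up")
    sys.exit(0)

# from now on, Ctrl+C runs handle_sigint instead of raising KeyboardInterrupt
signal.signal(signal.SIGINT, handle_sigint)
```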
This is the solution for the second part of the problem, which checks whether the launched process still exists; if it doesn't, the script stops.
It builds on the solution to the first part of the problem (stopping the script when CTRL + C is used), provided above by @danangjoyoo.
Thank you very much once again, @danangjoyoo! :)
This is the code for the second part of the problem:
import time, psutil, sys, os
from datetime import datetime
from worker import worker, enableKeyboardInterrupt, abort_all_thread, ThreadWorkerManager
from threading import Timer

# make sure to execute this before running the worker to enable keyboard interrupt
enableKeyboardInterrupt()

# block lines with periodic check
def block_next_lines(duration):
    t0 = time.time()
    while time.time() - t0 <= duration:
        time.sleep(0.05) # to reduce resource consumption

def main():
    # launches mspaint, gets its PID and checks if it was indeed launched
    path = r"C:\Windows\System32\mspaint.exe"
    p = psutil.Popen(path)
    PID = p.pid
    isProcess: bool = True
    while isProcess:
        for proc in psutil.process_iter():
            if proc.pid == PID:
                isProcess = False

    interval = 5

    global counter
    counter = 0

    # allows for sub_proccess to run only once
    global run_sub_process_once
    run_sub_process_once = 1

    @worker(keyboard_interrupt=True)
    def process_monitor_loop():
        while True:
            print("hii", datetime.now().isoformat())

            def sub_proccess():
                '''
                Checks every second if the launched process still exists.
                If the process doesn't exist anymore, the script will be stopped.
                '''
                print("Process online:", psutil.pid_exists(PID))
                t = Timer(1, sub_proccess)
                t.start()

                global counter
                counter += 1
                print(counter)

                # Checks if the worker thread is alive.
                # If it is not alive, it will kill the thread spawned by sub_proccess,
                # hence stopping the script.
                for _, key in enumerate(ThreadWorkerManager.allWorkers):
                    w = ThreadWorkerManager.allWorkers[key]
                    if not w.is_alive:
                        t.cancel()

                if not psutil.pid_exists(PID):
                    abort_all_thread()
                    t.cancel()

            global run_sub_process_once
            if run_sub_process_once:
                run_sub_process_once = 0
                sub_proccess()

            block_next_lines(interval)

    return process_monitor_loop()

if __name__ == '__main__':
    main_worker = main()
    main_worker.wait()
Also, I have to note that @danangjoyoo's solution serves as an alternative to signal.pause() for Windows; it only deals with the CTRL + C part of the problem. signal.pause() works only on Unix systems. This is how it would have been used, for my case, on a Unix system:
import signal, sys
from threading import Timer

def main():
    def signal_handler(sig, frame):
        print('\nYou pressed Ctrl+C!')
        sys.exit(0)
    signal.signal(signal.SIGINT, signal_handler)
    print('Press Ctrl+C')

    def process_monitor_loop():
        try:
            print("hi")
        except KeyboardInterrupt:
            signal.pause()
        Timer(10, process_monitor_loop).start()
    process_monitor_loop()

if __name__ == '__main__':
    main()
The code above is based on this.

Killing a process launched from a process that has ended - Python

I am trying to kill a process in Python that is launched from another process, and I am unable to find the correct place to put my ".terminate()".
To explain myself better I will post some example code:
from multiprocessing import Process
import time

def function():
    print("Here is where I am creating the function I need to kill")
    ProcessToKill = Process(target=killMe)
    ProcessToKill.start()

def killMe():
    while True:
        print("kill me")
        time.sleep(0.5)

if __name__ == '__main__':
    Process1 = Process(target=function)
    Process1.start()
My question is, where can I place ProcessToKill.terminate(), ideally without having to change the overall structure of the code?
You can hold onto the ProcessToKill object so that you can kill it later:
from multiprocessing import Process
import time

def function():
    print("Here is where I am creating the function I need to kill")
    ProcessToKill = Process(target=killMe)
    ProcessToKill.start()
    return ProcessToKill

def killMe():
    while True:
        print("kill me")
        time.sleep(0.5)

if __name__ == '__main__':
    Process1 = function()
    time.sleep(5)
    Process1.terminate()
Here, I've removed your wrapping of function in another Process object, because for the example it seems redundant, but you should be able to do the same thing with a Process that runs another Process.
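If a hard terminate() is too blunt (it can interrupt the child mid-write), a multiprocessing.Event lets the child exit on its own. This is a sketch of an alternative, not the answer's approach:

```python
import multiprocessing
import time

def kill_me(stop_event):
    # loop until the parent asks us to stop
    while not stop_event.is_set():
        time.sleep(0.1)

if __name__ == '__main__':
    stop = multiprocessing.Event()
    p = multiprocessing.Process(target=kill_me, args=(stop,))
    p.start()
    time.sleep(0.5)
    stop.set()   # cooperative shutdown instead of terminate()
    p.join()
```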

Multiprocessing beside a main loop

I've been struggling with an issue for some time now.
I'm building a little script that uses a main loop. This is a process that needs some attention from the user: the user responds to the steps, and then some magic happens with the use of some functions.
Besides this, I want to spawn another process that monitors the computer system for specific events, like pressing specific keys. If these events occur, it should launch the same functions as when the user gives the right values.
So I need to make two processes:
-The main loop (which allows user interaction)
-The background "event scanner", which searches for specific events and then reacts to them.
I try this by launching a main loop and a daemon multiprocessing process. The problem is that when I launch the background process it starts, but after that it does not launch the main loop.
I simplified everything a little to make it more clear:
import multiprocessing, sys, time

def main_loop():
    while 1:
        input = input('What kind of food do you like?')
        print(input)

def test():
    while 1:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    try:
        print('hello!')
        mProcess = multiprocessing.Process(target=test())
        mProcess.daemon = True
        mProcess.start()
        # after starting, the main loop does not start, while it prints out the test loop fine.
        main_loop()
    except:
        sys.exit(0)
You should do

    mProcess = multiprocessing.Process(target=test)

instead of

    mProcess = multiprocessing.Process(target=test())

Your code actually calls test in the parent process, and that call never returns.
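The difference is easy to demonstrate: Process wants the function object itself, which the child process then calls. A small sketch:

```python
import multiprocessing

def say_hi():
    print("hi from the child")

if __name__ == '__main__':
    # target=say_hi passes the function; the child process calls it
    p = multiprocessing.Process(target=say_hi)
    p.start()
    p.join()
    # target=say_hi() would instead run say_hi here in the parent,
    # then hand its return value (None) to Process as the target
```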
You can use lock-based synchronization to have better control over your program's flow. Curiously, the input function raises an EOFError here, but I'm sure you can find a workaround.
import multiprocessing, sys, time

def main_loop(l):
    time.sleep(4)
    l.acquire()
    # raises an EOFError, I don't know why.
    #_input = input('What kind of food do you like?')
    print(" raw input at 4 sec ")
    l.release()
    return

def test(l):
    i = 0
    while i < 8:
        time.sleep(1)
        l.acquire()
        print('this should run in the background : ', i+1, 'sec')
        l.release()
        i += 1
    return

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    #try:
    print('hello!')
    multiprocessing.Process(target=test, args=(lock,)).start()
    multiprocessing.Process(target=main_loop, args=(lock,)).start()
    #except:
    #    sys.exit(0)
