I'm trying to make some project code I have written more resilient to crashes, but the circumstances of my previous crashes have all been different.
So that I don't have to account for every single one, I thought I'd get my code either to restart itself, or to execute a copy of itself in its place and then close itself down gracefully. Since the replacement is coded identically, this would in essence be the same as restarting from the beginning. The desired result is that while the error-causing circumstances are present, my code stays in a swap-out or restart loop until it can execute normally again... until the next time it faces a similar situation.
To experiment, I've written two programs; I'm hoping these examples make clear what I am trying to achieve. I want the first script to execute, then launch the second (in a new terminal) before closing itself down gracefully.
Is this even possible?
Thanks in advance.
first.py
#!/usr/bin/env python
# first.py
import time
import os
import sys
from subprocess import run
import subprocess

thisfile = "first"
#thisfile = "second"

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    #os.system("first.py")
    #exec(open("first.py").read())
    #run("python " + "first.py", check=False)
    #import first
    #os.system('python first.py')
    #subprocess.call("python first.py 1", shell=True)
    os.execv("first.py", sys.argv)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    #os.system("second.py")
    #exec(open("second.py").read())
    #run("python " + "second.py", check=False)
    #import second
    #os.system('python second.py')
    #subprocess.call("python second.py 1", shell=True)
    os.execv("second.py", sys.argv)
    print("I'm leaving first now")
    break
time.sleep(1)
sys.exit("Quitting")
second.py (basically a copy of first.py)
#!/usr/bin/env python
# second.py
import time
import os
import sys
from subprocess import run
import subprocess

#thisfile = "first"
thisfile = "second"

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    #os.system("first.py")
    #exec(open("first.py").read())
    #run("python " + "first.py", check=False)
    #import first
    #os.system('python first.py')
    #subprocess.call("python first.py 1", shell=True)
    os.execv("first.py", sys.argv)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    #os.system("second.py")
    #exec(open("second.py").read())
    #run("python " + "second.py", check=False)
    #import second
    #os.system('python second.py')
    #subprocess.call("python second.py 1", shell=True)
    os.execv("second.py", sys.argv)
    print("I'm leaving first now")
    break
time.sleep(1)
sys.exit("Quitting")
I have tried quite a few approaches, as can be seen from my commented-out lines of code, but nothing so far has given me the result I'm after.
EDIT: This is the part of the actual code I think I am having problems with, where I am attempting to publish to my MQTT broker:
try:
    client.connect(broker, port, 10)  # connect to broker
    time.sleep(1)
except:
    print("Cannot connect")
    sys.exit("Quitting")
Instead of exiting at the "Quitting" part, will my code stay alive if I route it into a retry loop until it successfully connects to the broker again, and then carry on with the rest of the script? Or is this wishful thinking?
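For illustration, such a retry loop might look like this (a minimal sketch, assuming a paho-style MQTT client and the client, broker, and port variables defined above; the connected flag and the 5-second delay are my own additions):

connected = False
while not connected:
    try:
        client.connect(broker, port, 10)  # connect to broker
        time.sleep(1)
        connected = True  # success: leave the retry loop
    except OSError:
        print("Cannot connect, retrying in 5 seconds...")
        time.sleep(5)  # wait before trying again
# ...continue with the rest of the script from here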
You can do this in many ways. Your subprocess.call() option would work, but it depends on the implementation details. Perhaps the easiest is to use multiprocessing to run the program in a child process while the parent simply restarts it as necessary.
import multiprocessing as mp
import time

def do_the_things(arg1, arg2):
    print("doing the things")
    time.sleep(2)  # for test
    raise RuntimeError("Virgin Media dun me wrong")

def launch_and_monitor():
    while True:
        print("start the things")
        proc = mp.Process(target=do_the_things, args=(0, 1))
        proc.start()
        proc.join()  # Process has no wait(); join() blocks until it exits
        print("things went awry")
        time.sleep(2)  # a moment before restart, hoping badness resolves

if __name__ == "__main__":
    launch_and_monitor()
Note: the child process uses the same terminal as the parent. Running in a separate terminal is quite a bit more difficult; it would depend, for instance, on how you've set things up to have a terminal attach to the Pi.
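That said, on a Raspberry Pi desktop one way to try it is to ask a terminal emulator to run the child for you (a sketch, not a tested recipe; it assumes lxterminal is installed, and the script path is illustrative):

import subprocess

# Ask lxterminal to open a new window and run the script in it
# (the path is illustrative; adjust for your setup).
subprocess.Popen(['lxterminal', '-e', 'python3 /home/mypi/myprograms/first.py'])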
If you want to catch and process errors in the parent process, you could write some extra code to catch the error, pickle it, and pass it back to the parent through a queue. Multiprocessing pools already do that, so you could just have a pool with one process and a single iterable to consume.
with mp.Pool(1) as pool:
    while True:
        try:
            # starmap unpacks the (0, 1) tuple into the two arguments
            result = pool.starmap(do_the_things, [(0, 1)])
        except Exception as e:
            print("caught", e)
OK, I got it!
For anyone else interested in doing what my original question asked:
To close down a script when an error occurs, and then open either a new script or a copy of the original (to get the same functionality back) in a new terminal window, here is the answer, using my original code samples as an example. first.py and second.py run exactly the same code, apart from the name each one assigns to thisfile, which determines which file is opened in its place.
first.py
import time
import subprocess

thisfile = "first"
#thisfile = "second"

if thisfile == "second":
    restartcommand = 'python3 /home/mypi/myprograms/first.py'
else:
    restartcommand = 'python3 /home/mypi/myprograms/second.py'

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving first now")
    break
time.sleep(1)
quit()
second.py
import time
import subprocess

#thisfile = "first"
thisfile = "second"

if thisfile == "second":
    restartcommand = 'python3 /home/mypi/myprograms/first.py'
else:
    restartcommand = 'python3 /home/mypi/myprograms/second.py'

time.sleep(3)
while thisfile == "second":
    print("this is the second file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving second now")
    break
while thisfile == "first":
    print("this is the first file")
    time.sleep(1)
    subprocess.run('lxterminal -e ' + restartcommand, shell=True)
    print("I'm leaving first now")
    break
time.sleep(1)
quit()
The result of running either one of these is that the program runs, opens the other file and starts it running, then closes itself down. This continues back and forth until you close the running file before it gets a chance to open the other one.
Try it! It's fun!
Related
I've been trying to find a good way to impose a time limit on input in Python scripts, and I finally got some code to work:
from threading import Timer
timeout = 5
t = Timer(timeout, print, ["Time's up!"])
t.start()
entry = input('> ')
t.cancel()
but I need to be able to run a function when the timer ends.
Also, I want the function called from inside the timer code; otherwise, even if you type your entry before the timer runs out, the function will still be called.
Could anyone kindly edit my code so that it runs a function when the timer ends?
If it is fine to block the main thread while the user has not provided an answer, the code you have shared might work.
Otherwise, on Windows, you could use msvcrt like this:
import msvcrt
import sys
import time

class TimeoutExpired(Exception):
    pass

def input_with_timeout(prompt, timeout, timer=time.monotonic):
    sys.stdout.write(prompt)
    sys.stdout.flush()
    endtime = timer() + timeout
    result = []
    while timer() < endtime:
        if msvcrt.kbhit():
            result.append(msvcrt.getwche())  # XXX can it block on multibyte characters?
            if result[-1] == '\n':  # XXX check what Windows returns here
                return ''.join(result[:-1])
        time.sleep(0.04)  # just to yield to other processes/threads
    raise TimeoutExpired
The above code is written for Python 3; you will need to test it yourself.
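For example, a hypothetical call might look like this (the prompt and timeout values are illustrative):

try:
    answer = input_with_timeout('> ', 5)
    print('You entered:', answer)
except TimeoutExpired:
    print("Time's up!")  # run whatever function you need here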
Reading the Python documentation on Timer objects (https://docs.python.org/3/library/threading.html#timer-objects),
I have come up with the following snippet, which might work (try running it from your command prompt):
from threading import Timer

def input_with_timeout(x):
    def time_up():
        print('time up...')  # put the function you want to run here

    t = Timer(x, time_up)  # x is the amount of time in seconds
    t.start()
    try:
        answer = input("enter answer : ")
    except Exception:
        print('pass\n')
        answer = None
    if answer is not None:  # the user entered something in time
        t.cancel()  # so time_up will not execute

input_with_timeout(5)  # try this for five seconds
I'm working on a Python launcher which should execute the programs in my list by calling subprocess. The code looks correct, but it behaves very strangely: in short, it doesn't work without a sleep or input call in main.
Here is an example:
import threading
import subprocess
import time

def executeFile(file_path):
    subprocess.call(file_path, shell=True)

def main():
    file = None
    try:
        file = open('./config.ini', 'r')
    except:
        # TODO: add alert widget
        print("cant find a file")
    pathes = [path.strip() for path in file.readlines()]
    try:
        for idx in range(len(pathes)):
            print(pathes[idx])
            file_path = pathes[idx]
            newThread = threading.Thread(target=executeFile, args=(file_path,))
            newThread.daemon = True
            newThread.start()
    except:
        print("cant start thread")

if __name__ == '__main__':
    main()
    # IT WORKS WHEN SLEEP EXISTS
    time.sleep(10)
    # OR
    # input("Press enter to exit ;)")
but without input or sleep it doesn't work:
if __name__ == '__main__':
    # Doesn't work
    main()
Could someone please explain why this happens?
I have an idea, but I'm not sure: maybe it's because the subprocess is asynchronous, and the program finishes and closes itself BEFORE the subprocess executes.
With sleep or input, the program is suspended, so the subprocess has enough time to execute.
Thanks for any help!
As soon as the last thread is started, your main() returns. That in turn will exit your Python program. That stops all your threads.
From the documentation on daemon threads:
Note: Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
The simple fix would be to not use daemon threads.
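A minimal sketch of that fix: keep references to the (now non-daemon) threads and join them, so main() only returns once every file has been handled.

threads = []
for path in pathes:
    t = threading.Thread(target=executeFile, args=(path,))
    t.start()
    threads.append(t)
for t in threads:
    t.join()  # block until this thread has finished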
As an aside, I would suggest some changes to your loop. First, iterate over pathes directly instead of using indices. Second, catch errors for each thread separately, so one error doesn't leave the remaining files unprocessed.
for path in pathes:
    try:
        print(path)
        newThread = threading.Thread(target=executeFile, args=(path,))
        newThread.start()
    except:
        print("cant start thread for", path)
Another option would be to skip threads entirely, and just maintain a list of running subprocesses:
import os
import subprocess
import time

def manageprocs(proclist):
    """Check a list of subprocesses for processes that have
    ended and remove them from the list.

    :param proclist: list of Popen objects
    """
    # Rebuild the list in place; removing items while iterating
    # over the same list would skip entries.
    proclist[:] = [pr for pr in proclist if pr.poll() is None]
    # since manageprocs is called from a loop,
    # keep CPU usage down.
    time.sleep(0.5)

def main():
    # Read config file
    try:
        with open('./config.ini', 'r') as f:
            pathes = [path.strip() for path in f.readlines()]
    except FileNotFoundError:
        print("cant find config file")
        exit(1)
    # List of subprocesses
    procs = []
    # Do not launch more processes concurrently than your
    # CPU has cores. That will only lead to the processes
    # fighting over CPU resources.
    maxprocs = os.cpu_count()
    # Launch all subprocesses.
    for path in pathes:
        while len(procs) == maxprocs:
            manageprocs(procs)
        procs.append(subprocess.Popen(path, shell=True))
    # Wait for all subprocesses to finish.
    while len(procs) > 0:
        manageprocs(procs)

if __name__ == '__main__':
    main()
I have a script main.py which calls a function fun from a library.
I want to exit only from fun while continuing main.py, using another script kill_fun.py for this purpose.
I tried different bash commands with ps (using os.system), but the PID it gives me refers only to main.py.
Example:
-main.py
from lib import fun

if __name__ == '__main__':
    try:
        fun()
    except:
        do_something
    do_something_else
-lib.py
def fun():
    do_something_of_long_time
-kill_fun.py
if __name__ == '__main__':
    kill_only_fun
You can do so by running fun in a different process.
from time import sleep
from multiprocessing import Process
from lib import fun

def my_fun():
    # stand-in for the long-running fun() from lib
    tmp = 0
    for i in range(1000000):
        sleep(1)
        tmp += 1
        print('fun')
    return tmp

def should_i_kill_fun():
    try:
        with open('./kill.txt', 'r') as f:
            read = f.readline().strip()
            #print(read)
            return read == 'Y'
    except Exception as e:
        return False

if __name__ == '__main__':
    try:
        p = Process(target=my_fun, args=())
        p.start()
        while p.is_alive():
            sleep(1)
            if should_i_kill_fun():
                p.terminate()
    except Exception as e:
        print("do sth", e)
    print("do sth other thing")
To kill fun, simply run echo Y > kill.txt,
or you can write a Python script to write the file as well.
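For instance, a tiny script along these lines would do the same as the echo command (the file name matches the one checked above):

# write the kill command that should_i_kill_fun() looks for
with open('./kill.txt', 'w') as f:
    f.write('Y')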
Explanation
The idea is to start fun in a different process; p is a process handle that you can control. We then loop, checking the file kill.txt to see whether the kill command 'Y' is in there. If it is, we call p.terminate(); the process gets killed, and the script continues with the next things.
Hope this helps.
I am compiling my Python script into a Windows executable. The script simply downloads a few files and saves them locally; each download uses a different thread. I am finding that my simple application exits before any of the threads finish, but I am not entirely sure.
Does my script below exit before the threads finish, or does it wait until they are done? And if the script does exit before the threads finish, how can I stop this?
What's the standard practice to avoid this? Should I use a while loop that checks whether any threads are still alive, or is there a standard way of doing this?
import thread
import threading
import urllib2

def download_file():
    response = urllib2.urlopen("http://website.com/file.f")
    print "Res: " + str(response.read())
    raw_input("Press any key to exit...")

def main():
    # create thread and run
    #thread.start_new_thread(run_thread, tuple())
    t = threading.Thread(target=download_file)
    t.start()

if __name__ == "__main__":
    main()
    # The below prints before "Res: ..." which makes me think the script exits before the thread has completed
    print("script exit")
What you are looking for is the join() function on your newly created thread, which blocks execution until the thread is done. I took the liberty of removing your def main(), as it is not needed here and only creates confusion.
If you want to wrap the launch of all downloads into a neat function, pick a descriptive name for it (see the sketch after the code below).
import thread
import threading
import urllib2

def download_file():
    response = urllib2.urlopen("http://website.com/file.f")
    print "Res: " + str(response.read())
    raw_input("Press any key to exit...")

if __name__ == "__main__":
    t = threading.Thread(target=download_file)
    t.start()
    t.join()  # block here until the thread is done
    # with join() above, this now prints only after "Res: ..."
    print("script exit")
I've been struggling with an issue for some time now.
I'm building a little script which uses a main loop. This is a process that needs some attention from the user: the user responds to the steps, and then some magic happens through a few functions.
Besides this, I want to spawn another process which monitors the computer for specific events, like pressing specific keys. If these events occur, it launches the same functions as when the user gives the right values.
So I need two processes:
-The main loop (which allows user interaction)
-The background "event scanner", which searches for specific events and then reacts to them
I tried this by launching a main loop and a daemon multiprocessing process. The problem is that when I launch the background process it starts, but after that it does not launch the main loop.
I've simplified everything a little to make it clearer:
import multiprocessing, sys, time

def main_loop():
    while 1:
        input = input('What kind of food do you like?')
        print(input)

def test():
    while 1:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    try:
        print('hello!')
        mProcess = multiprocessing.Process(target=test())
        mProcess.daemon = True
        mProcess.start()
        # after starting, the main loop does not start, while it prints out the test loop fine.
        main_loop()
    except:
        sys.exit(0)
You should do
mProcess = multiprocessing.Process(target=test)
instead of
mProcess = multiprocessing.Process(target=test())
Your code actually calls test in the parent process, and that call never returns.
You can use lock synchronization to get better control over your program's flow. Curiously, the input function raises an EOFError here, but I'm sure you can find a workaround.
import multiprocessing, sys, time

def main_loop(l):
    time.sleep(4)
    l.acquire()
    # raises an EOFError, I don't know why.
    #_input = input('What kind of food do you like?')
    print(" raw input at 4 sec ")
    l.release()
    return

def test(l):
    i = 0
    while i < 8:
        time.sleep(1)
        l.acquire()
        print('this should run in the background : ', i + 1, 'sec')
        l.release()
        i += 1
    return

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    #try:
    print('hello!')
    # start() returns None, so create each Process first and keep
    # the handle before starting it
    mProcess = multiprocessing.Process(target=test, args=(lock,))
    mProcess.start()
    inputProcess = multiprocessing.Process(target=main_loop, args=(lock,))
    inputProcess.start()
    #except:
    #sys.exit(0)