Auto rerun of python scripts - python

seeking everyone's help on my workplace python processes
Context:
- I have some Python scripts running 24/7 in PowerShell that process files detected in a folder and then store the results in a MySQL database.
- However, sometimes a script hangs (e.g. due to a new error, a work server crash, etc.).
- This causes downtime, as only my maintenance team can reset the script and they are not always around.
Trying to do:
- Whenever a hang is detected, automatically restart the scripts.
Apologies that I do not have sample scripts as all of them are stored in my workplace. I would appreciate it if you can share with me the rough thought process or examples (on other sites) so I can 'take off' on my own. thank you :)

The following bash script might help you:
#!/bin/bash
scripts=('python3 script1.py' 'python3 script2.py' 'python3 script3.py')
# loop through each python script and execute it
for s in "${scripts[@]}"; do
    eval "$s"  # run the script
    # pick up the exit code in a variable
    python_exit_status=$?
    if [ "${python_exit_status}" -ne 0 ]; then
        echo "Exit code not equal 0. Python script errored ..."
        # here, you want to reset from the beginning. Break and rerun the entire bash script?
    else
        echo "Exit code 0. Continue as normal ..."
        # here, do nothing and allow it to continue.
    fi
done
The concept is that you loop through each of your python scripts, pick up on the exit code of said script, and then have a region where the bash code goes when the exit code is not equal to zero (i.e. Python script error/hangup occurs). Not sure what you want to do there though so I did not add any more. Did you want to restart from script1 again if script2 hangs up?
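One common pattern, sketched here with placeholder script names and a made-up retry limit, is an outer watchdog loop that reruns the whole batch from the first script whenever any script exits non-zero:

```shell
#!/bin/bash
# Watchdog sketch: rerun the whole batch from the first script whenever any
# script exits non-zero. The script names and retry limit are placeholders.
scripts=('script1.py' 'script2.py' 'script3.py')
max_retries=5

# For demonstration this creates trivial stand-in scripts that exit cleanly;
# delete this loop and point `scripts` at your real files.
for s in "${scripts[@]}"; do
    printf 'import sys\nsys.exit(0)\n' > "$s"
done

attempt=0
while [ "$attempt" -lt "$max_retries" ]; do
    ok=1
    for s in "${scripts[@]}"; do
        python3 "$s"
        if [ $? -ne 0 ]; then
            echo "$s exited non-zero; restarting the batch ..."
            ok=0
            break            # abandon the rest of the batch
        fi
    done
    if [ "$ok" -eq 1 ]; then
        echo "All scripts finished cleanly."
        break                # done
    fi
    attempt=$((attempt + 1))
    sleep 1                  # brief pause before retrying
done
```

Note this only catches scripts that actually exit with an error; a script that hangs without exiting would additionally need a timeout, e.g. running each one under the coreutils `timeout` command.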


cron invokes bash script which invokes "script" to run python program. Next line of bash script runs immediately, before script/python completes

When I run the following bash script via cron, the last line runs before the prior line has completed. Why? And how can I enforce the order I want?
I'm convinced that cron is the culprit here somehow, but for the sake of argument, here's a dummy bash script. (Obviously, this is just an illustration; in the real world I'm doing some work in the python program and then trying to copy its work product to another place after it's done.)
#!/usr/bin/env bash
cd /tmp/kier/script
script output -c "./sleeper.py; echo '...and we are done'"
echo "This is the next line after invoking script..."
...and for completeness, here's the python script, sleeper.py:
#!/usr/bin/env python3
import time
print("python program starting")
time.sleep(5)
print("python program done")
When I run the bash script from the command line all is well. Specifically, the "This is the next line..." text prints at the very end, after the 5-second sleep.
But when I run it from cron, the output comes in the wrong order (this is the email that comes to me after cron runs the job):
Script started, file is output
Script done, file is output
This is the next line after invoking script...
python program starting
python program done
...and we are done
Script started, file is output
So you can see that "This is the next line..." prints before the python script has even really started. As though it's running the python script in background or something.
I'm stumped. Why is this happening and how can I make the echo command wait until script has finished running the python program?
(Finally, Yes, I could include my extra command inside the commands I send to script, and I am actually considering that. But come on! This is nuts!)
I should follow up and share the solution I came up with. In the end I never got a good answer to WHY it is behaving this way in my (RedHat) environment, so I settled on a workaround. I...
created a sentinel file before invoking "script",
included an extra command deleting the sentinel file in the script's command text, and then
waited for the sentinel file to go away before continuing.
Like this:
sentinel=$(mktemp)
script output -c "./sleeper.py; rm $sentinel"
while [ -f "$sentinel" ]
do
    sleep 3
done
Yes, it's a hack, but I needed to move on.

Stop raspbian/debian from within python

When I am away from my mountain, I monitor my photovoltaic system with a Raspberry Pi and a small python script that reads data and sends it to my web page every hour. That is launched by an electro-mechanical switch that powers up the equipment for 15 minutes. But during that period, the script may run twice, which I would like to prevent as the result is messy (lsdx.eu/GPV_ZSDX).
I want to add some line at the end of the script to stop it once it has run once and possibly stop raspbian as well for a clean exit before the power is off.
- "exit" only exits a loop but the script is still running
- of course Ctrl+C won't do as I am away;
Could not find any tip in these highly technical messages on Stack Overflow or in the Raspbian help either.
Any tip?
Thanks
The exit() command should exit the program (break is the statement that will exit a loop). What behavior are you seeing?
To shut down, try:
python3:
from subprocess import run
run('poweroff', shell=True)
python2:
from subprocess import call
call('poweroff')
Note: poweroff may be called shutdown on your system and might require additional command line switches to force a shutdown (if needed).
For your case, structure the python script as a function using the following construct:
def read_data():
    data_reading_voodo
    return message_to_be_sent

def send_message(msg):
    perform_message_sending_voodo
    log_message_sending_voodoo_success_or_failure
    return None

if __name__ == "__main__":
    msg = read_data()
    send_message(msg)
Structured like this, the python script should exit after running.
Next create a shell script as follows (assuming bash and python; modify according to your usage):
#!/bin/bash
python /path/to/your/voodo/script.py && sudo shutdown -h 6
The sudo shutdown -h 6 shuts down the Raspberry Pi 6 minutes after the script runs. This gives you some time after startup to remove the script if you ever want to stop the run-restart cycle.
Make the shell script executable: chmod 755 run_py_script_then_set_shutdown (see man chmod for details).
Now create a cronjob to run run_py_script_then_set_shutdown on startup.
crontab -e Then add the following line to your crontab
@reboot /path/to/your/shell/script
Save, reboot the pi, and you're done.
Every time the rpi starts up, the python script should run and exit. Then the rpi will shutdown 6 minutes after the python script exits.
You can (should) adjust the 6 minutes for your purposes.
Thanks for all these answers that help me learn Python and Debian.
I finally opted for a very simple solution at the end of the script:
import os
os.system('sudo shutdown now')
But I keep in mind these other solutions
Thanks again,
Lionel

I run a program in a shell script. How do I know if it has loaded? (Linux)

I wrote a shell script that runs a python program. Then I want to load a file into the program using xdotool. Right now this is my code:
#!/bin/bash
cd ~/Folder
python program.py &
sleep 10
WID=$(xdotool search --onlyvisible program)
....
I really don't like my solution of just waiting 10 seconds so that the program is loaded. Is there a better way?
This really depends. If program.py is supposed to finish before you go on to xdotool, then you might want to use && instead of &. A single & runs the command in the background: the shell moves on to the next command immediately without waiting for it to finish. A double && means the shell waits for the command to complete and only continues if it exits with status zero. You could also just remove the & or use ; if you want to run the next command regardless of program.py's success. Here's what I'm getting at:
#!/bin/bash
cd ~/Folder
python program.py &&
WID=$(xdotool search --onlyvisible program)
...
If program.py is supposed to continue running while you run the xdotool command, but you need program.py to reach some ready state before you continue, then you're right in using &, but you need to monitor program.py somehow and get a signal from it that it is ok to continue. An ancient way of doing this is to simply let program.py create/touch/edit a file, and when that file is detected you continue. A more advanced way would be to use network sockets or similar, but I'd advise against it if you really don't need anything fancy. Anyway, if you make program.py create a signal file, say /tmp/.program_signal_file, when things are ready all you have to do is:
#!/bin/bash
cd ~/Folder
python program.py &
until [ -f /tmp/.program_signal_file ]
do
    sleep 1
done
WID=$(xdotool search --onlyvisible program)
rm /tmp/.program_signal_file
...
In this last solution program.py and xdotool are both running at the same time. If that's what you were looking for.
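For completeness, the program.py side of that handshake might look like the sketch below; the signal-file path matches the shell loop, and the sleep is a stand-in for the real startup work:

```python
#!/usr/bin/env python3
# Sketch of the program.py side of the signal-file handshake: do the slow
# startup work first, then create the file the shell loop is polling for.
import pathlib
import time

SIGNAL_FILE = pathlib.Path("/tmp/.program_signal_file")

def main():
    time.sleep(1)            # stands in for the real program loading
    SIGNAL_FILE.touch()      # tell the shell script we are ready
    # ... the program keeps running while xdotool does its work ...

if __name__ == "__main__":
    main()
```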

In Python: how can I get the exit status of the previous process run from Bash (i.e. "$?")?

I have a Python script that should report success or failure of the previous command. Currently, I'm doing
command && myscript "Success" || myscript "Failed"
What I would like to do is instead to be able to run the commands unlinked as in:
command; myscript
And have myscript retrieve $?, i.e. the exit status. I know I could run:
command; myscript $?
But what I'm really looking for is a Python way to retrieve $? from Python.
How can I do it?
Since this is a strange request, let me clarify where it comes from. I have a Python script that uses the pushover API to send a notification to my phone.
When I run a long process, I run it as process && notify "success" || notify "failure". But sometimes I forget to do this and just run the process. At this point, I'd like to run "notify" on the still processing command line, and have it pick up the exit status and notify me.
Of course I could also implement the pushover API call in bash, but now it's become a question of figuring out how to do it in Python.
This may not be possible, because of how a script (of any language, not just Python) gets executed. Try this: create a shell script called retcode.sh with the following contents:
#!/bin/bash
echo $?
Make it executable, then try the following at a shell prompt:
foo # Or any other non-existent command
echo $? # prints 127
foo
./retcode.sh # prints 0
I'd have to double-check this, but it seems that all scripts, not just Python scripts, are run in a separate process that doesn't have access to the exit code of the previous command run by the "parent" shell process. Which may explain why Python doesn't (as far as I can tell) give you a way to retrieve the exit code of the previous command like bash's $? — because it would always be 0, no matter what.
I believe your approach of doing command; myscript $? is the best way to achieve what you're trying to do: let the Python script know about the exit code of the previous step.
Update: After seeing your clarification of what you're trying to do, I think your best bet will be to modify your notify script just a little, so that it can take an option like -s or --status (using argparse to make parsing options easier, of course) and send a message based on the status code ("success" if the code was 0, "failure NN" if it was anything else, where NN is the actual code). Then when you type command without your little && notify "success" || notify "failure" trick, while it's still running you can type notify -s $? and get the notification you're looking for, since that will be the next thing that your shell runs after command returns.
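A sketch of such a notify script, with the actual pushover call replaced by a placeholder print, might look like:

```python
#!/usr/bin/env python3
# Sketch of a notify script that takes the exit status as an option.
# send() is a placeholder for the real pushover API call.
import argparse

def build_message(status):
    return "success" if status == 0 else "failure %d" % status

def send(message):
    print("notifying:", message)  # stand-in for the pushover call

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("-s", "--status", type=int, default=0,
                        help="exit status of the previous command ($?)")
    args = parser.parse_args(argv)
    send(build_message(args.status))

if __name__ == "__main__":
    main()
```

So after `command` has already been started without the `&& notify ...` trick, you can follow it with `notify -s $?` and the shell fills in the exit status once `command` returns.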
Another option is to pass the exit status through an environment variable:
false; export ret=$?; ./myscript2.py
myscript2.py:
#!/usr/bin/python
import os
print os.environ['ret']
Output:
1
It is clearly not possible: the exit value of a process is only accessible to its parent, and no shells I know offer an API to allow next process to retrieve it.
IMHO, what is closer to your need would be:
process
myscript $?
That way, you can do it even if you started your process without thinking about notification.
You could also make the script able to run a process, get the exit code, and do its notification, or (depending on command-line options) use an exit code given as a parameter. For example, you could have:
myscript process: runs process and does notifications
myscript -c $?: only does notifications
argparse module can make that easy.
But what I'm really looking for is a Python way to retrieve $? from Python.
If you ran them as separate processes, then it's clearly impossible.
An independent process cannot know how another process ended.
You can 1) store the result in a file, 2) emit something from command, and pipe it to the script 3) run the command from myscript, and so on... but you need some kind of communication.
The simplest way, of course is command; myscript $?

Showing or leaving python error messages when running them by batch

To simplify, I have batch file which runs multiple python programs:
start "01" /wait "C:\python27\python.exe" test1.py
start "02" /wait "C:\python27\python.exe" test2.py
But I found that even if test1.py fails with an error, the batch simply moves on to run test2.py.
It even closes the window for test1.py as soon as it encounters that error, and just creates another window for test2.py.
Of course, if I run test1.py separately by running
python test1.py
then it prints all error messages.
Since I have tens of python files in one batch, it becomes very hard to know which one of these caused the error, and I can't even know what's that error because I can't see the error messages.
How can I make it stop (without closing the window) when it meets an error, and show me the error message?
I do not know much about Python. But according to the question, it outputs messages to stdout and stderr, which only console applications do.
If python.exe is indeed a console application and not a Windows (GUI) application, it is not really necessary to use start "title" /wait, as this just runs the console application python.exe in a separate command-line interpreter process, which is why no output of python.exe is displayed in the command-line interpreter process in which the batch file is executed.
I suggest to simply try:
@echo off
echo Calling Python with script test1.py.
"C:\python27\python.exe" test1.py
if errorlevel 1 pause
echo Calling Python with script test2.py.
"C:\python27\python.exe" test2.py
if errorlevel 1 pause
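On the Python side, the errorlevel test above only fires if the script actually exits with a non-zero status. An uncaught exception already makes python.exe exit with status 1, so this mainly matters when you catch exceptions yourself. A sketch (process() is a placeholder for the real work):

```python
# Sketch of a script (e.g. test1.py) that reports failure to the batch file
# by returning a non-zero exit status. The process() body is a placeholder.
import sys

def process():
    # ... placeholder for the real work ...
    return True

def main():
    try:
        ok = process()
    except Exception as exc:
        sys.stderr.write("error: %s\n" % exc)
        return 1
    return 0 if ok else 2

# at the end of the real script:
#     sys.exit(main())
```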
For error handling, see for example:
Exit codes in Python
get the exit code for python program
Testing for a Specific Error Level in Batch Files
Correct Testing Precedence of Batch File ERRORLEVELs
Windows documentation for command If
Please use the search feature of Stack Overflow explained on help page How do I search? and also WWW search engines. You surely do nothing which was not already done by other Python users as well and therefore also asked already most likely very often.
We expect questioners to try to find the solution by themselves rather than asking others to do the job for them; see help page What topics can I ask about here?
