Shut down Flask server from scheduled job - python

I'm running a Flask server and an APScheduler background scheduler, and I want the program to periodically restart itself.
I restart the application using the following shell script:
#!/bin/bash
kill $(ps aux | grep 'python' | awk '{print $2}')
sleep 5
cd "$( dirname "${BASH_SOURCE[0]}" )"
python server.py
When I start the server normally and then execute this script from another terminal, it successfully kills the running process and starts the server again.
But when the scheduler calls this script, Flask gives me an 'error, socket in use'. Why does executing the script from another terminal shut down the server and free the socket, while executing it from inside the scheduler leaves the socket in use?
How can I make the script my scheduler runs totally shut down my server?
1 month later and I still haven't gotten an answer to this question :(
To clarify, I do not want a workaround that makes it so I don't need to restart the server. I want the server to have the ability to restart itself.
For now I am going to use the workaround of setting the server to start on boot, and then restarting the entire computer.
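For reference, one way a process can restart itself without a shell script at all is to re-exec in place from the scheduled job. This is a minimal sketch, not from the original thread, assuming Python 3 (where listening sockets are closed across exec) and APScheduler's add_job API:
import os
import sys

def restart_self():
    # Replace the current process image with a fresh interpreter running
    # the same script. On Python 3 the listening socket is not inherited
    # across exec, so the port is free when the new process binds it, and
    # there is no parent/child signal race like with an external script.
    os.execv(sys.executable, [sys.executable] + sys.argv)

# e.g. scheduler.add_job(restart_self, 'interval', hours=24)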

Related

Safely and Asynchronously Interrupt an Infinite-Loop Python Script started by a BASH script via SSH

My Setup:
I have a Python script that I'd like to run on a remote host. I'm running a BASH script on my local machine that SSH's into my remote server, runs yet another BASH script, which then kicks off the Python script:
Local BASH script --> SSH --> Remote BASH script --> Remote Python script
The Python script configures a device (a DAQ) connected to the remote server and starts a while(True) loop of sampling and signal generation. When developing this script locally, I had relied on using Ctrl+C and a KeyboardInterrupt exception to interrupt the infinite loop and (most importantly) safely close the device sessions.
After exiting the Python script, I have my BASH script do a few additional chores while still SSH'd into the remote server.
Examples of my various scripts...
local-script.sh:
ssh user@remotehost "remote-script.sh"
remote-script.sh:
python3 infinite-loop.py
infinite-loop.py:
while True:
    # do stuff...
My Issue(s):
Now that I've migrated this script to my remote server and am running it via SSH, I can no longer use the KeyboardInterrupt to safely exit my Python script. In fact, when I do, I notice that the device being controlled by the Python script is still running (the output signals from my DAQ change as though the Python script were still running), and when I SSH back into the remote server, I find the persisting Python script process and must kill it from there (otherwise I get two instances of the Python script running on top of one another if I run the script again). This leads me to believe that I'm actually exiting only the remote-side SSH session that my local script kicked off, leaving my remote BASH and Python scripts off wandering on their own (updated, following the investigation outlined in the Edit 1 section).
In summary, using Ctrl+C while in the remote Python script results in:
Remote Python Script = Still Running
Remote BASH Script = Still Running
Remote SSH Session = Closed
Local BASH Script = Active ([Ctrl]+[C] lands me here)
My Ask:
How can I asynchronously interrupt (but not fully exit) a Python script that was kicked off over an SSH session via a BASH script? Bonus points if we can work within my BASH --> SSH --> BASH --> Python framework, whack as it may be. If we can do it with as few extra pip modules as possible, you just might become my favorite person!
Edit 1:
Per @dan's recommendation, I started exploring trap statements in BASH scripts. I have yet to be successful in implementing this, but as a way to test its effectiveness, I decided to monitor the process list at different stages of execution. Once started, I can see processes for my SSH session, my remote BASH script, and the remote Python script it spawns. But when I use Ctrl+C to exit, I'm kicked back into the top-level local BASH script, and when I check the process list on my remote server, I see the processes for both my remote BASH script and my remote Python script still running... so my remote BASH script is not stopping. I am, in fact, ONLY ending my SSH session...
Combining the suggestions from the comments (and with lots of help from a buddy), I've got something that works for me:
Solution 1:
In summary, I made my remote BASH script record its process group ID (PGID, which is also assigned to the Python script spawned by the remote BASH script) to a file, and then had the local BASH script read that file and kill the process group remotely.
Now my scripts look like:
local-script.sh
ssh user@remotehost "remote-script.sh"
remotegpid=`ssh user@ip "cat ~/Desktop/gpid_file"`
ssh user@ip "kill -SIGTERM -- -$remotegpid && rm ~/Desktop/gpid_file"
# ^ After the SSH session closes, this goes back in to grab the PGID from the file and then kills that process group
remote-script.sh
ps -o pgid= $$ | xargs > ~/Desktop/gpid_file
# ^ This gets the BASH script's PGID and writes it to the file without whitespace
python3 infinite-loop.py
infinite-loop.py (unchanged)
while True:
    # do stuff...
This solves only most of the problem: originally I had set out to be able to do things in my Python script after it was interrupted and before exiting back into my BASH scripts, but it turned out I had a bigger problem to catch (what with the scripts continuing to run even after closing my SSH session)...
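That said, since the process group now receives SIGTERM rather than SIGKILL, the Python side can still cover the "do things before exiting" part by trapping the signal. A minimal sketch of that idea (close_daq_sessions is a hypothetical stand-in for the real device cleanup):
import signal
import sys

def close_daq_sessions():
    # Hypothetical stand-in for safely closing the DAQ device sessions.
    pass

def handle_sigterm(signum, frame):
    # Runs when kill -SIGTERM reaches this process, before it exits.
    close_daq_sessions()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    pass  # do stuff...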

"nohup python3 .. &" ends once I disconnect from a server

This command
nohup python3 main.py > my_log.log 2>&1 &
will end once I disconnect from the server. That is, if I disconnect and then connect again 10 seconds later, the job will be gone.
However, if I stay on the server, it continues to work for as long as it needs, with no problem.
Why? And how can I make it keep working in the background even after I disconnect?
It's an AWS Debian server.
nohup only disconnects your code from the current terminal session, which will end when you exit from the server. You either have to use disown:
nohup python3 main.py > my_log.log 2>&1 & disown
or, better, run your script in a tmux or screen session. If you use tmux or screen, you can always log back into your server and get back to your program.
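For reference, the signal-related half of what nohup does can also be reproduced inside the script itself by ignoring SIGHUP. A minimal sketch (output should still be redirected to a file, since writes to a vanished terminal can fail):
import signal

# Ignore the hangup signal sent when the controlling terminal goes away,
# which is essentially what nohup arranges for the child process.
signal.signal(signal.SIGHUP, signal.SIG_IGN)

# ... rest of main.py ...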

How to close the terminal when running a tornado application?

I am running a Python Tornado web application from the terminal by typing the command python app.py. When I close the terminal, the app stops. Is there any way I can keep the app running on its port so that closing the terminal doesn't affect it? I don't want to keep the terminal open.
Try nohup. In your case, it should be sufficient to run:
nohup python app.py &
Take note of the number shown as the result of this command; it is the PID of the process, and it is what you use to terminate the process, simply by killing it. Suppose that 3456 is the PID of the process; then this command will terminate your application:
kill -9 3456
In case you lose the PID, it can be retrieved using this command:
ps -ef | grep app.py
where app.py is the Python script file (the same used in the initial nohup command).
Anyway, for further information you can take a look here.
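To avoid hunting for the PID with ps later, the app can also record its own PID at startup. A small sketch (the /tmp/app.pid path is an arbitrary choice):
import os

# Write this process's PID to a known location at startup.
with open('/tmp/app.pid', 'w') as f:
    f.write(str(os.getpid()))

# Later, from a shell: kill $(cat /tmp/app.pid)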

Terminate Running Python Apps

I have a Raspberry Pi running Raspbian controlling a home automation system as part of a project for college. To control this I'm using an ASP.NET web app to fire SSH commands at the Pi to start various Python apps. I need a way to terminate another app over SSH before starting a new one.
For example:
a.py and b.py are running
User selects c.py from the web app
a.py must be stopped before starting c.py, leaving b.py and c.py running.
Thanks
Jake
If you want to kill every running instance of python:
$ kill `pidof python`
If you want to kill every running instance of a specific python script:
$ kill `pidof -x myscript.py`
or
$ pkill -f myscript.py
or
$ killall myscript.py
It's generally not advisable to send SIGKILL to a running program (kill -9). SIGTERM is usually sufficient unless the program is frozen. All the above commands send SIGTERM.
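The same SIGTERM-first distinction applies when killing from Python rather than the shell. A short illustration (3456 is a made-up PID):
import os
import signal

pid = 3456  # made-up PID of the target process

os.kill(pid, signal.SIGTERM)    # polite: the target may catch this and clean up
# os.kill(pid, signal.SIGKILL)  # forceful: cannot be caught; equivalent to kill -9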
Find the PID (process ID) of your python script using this command (the PID should be in the second column):
ps -ef | grep python
Then kill it:
kill <PID found previously>
or, if that doesn't work:
kill -9 <PID found previously>
If the program is still running after that, it means you are using the wrong PID.
Nothing can stop a kill -9 command :-)

How to keep a Python program running in the background on a Raspberry Pi after terminating SSH

I need to keep my Python program running in the background on my Raspberry Pi after I close the SSH connection, because I need to save Phidget information to a SQL DB.
I tried to do this with nohup, but it seems that the Python program isn't even executed: when I look into the MySQL DB after doing the below, nothing has been inserted.
I type in:
pi@raspi ~/MyProjekt $ sudo nohup python sensorReader.py &
[1] 8580
and when I try to check whether this process exists with:
ps -A | grep 8580
it returns nothing.
Am I doing something wrong? How can I keep the Python program running after closing the SSH connection?
I would recommend running your python program in a cron reboot job.
To edit your root cronjobs use
sudo crontab -e
And add the line
@reboot sudo python full_path/MyProjekt/sensorReader.py
Then reboot your pi with:
sudo reboot
And then confirm that your process is running:
ps -aux | grep python
I don't think this is an SSH connection issue; from what you say, the program seems to execute and exit. Does your .py run in an infinite loop? Otherwise you shouldn't expect it to stay alive.
Then, about keeping a process alive after its parent (the shell, in your case) has terminated: nohup is the answer. It means the process ignores HUP signals (the signal sent when the controlling terminal goes away).
The '&' just means 'execute in the background'.
The cron solution is good if your program is meant to do something periodically, but if it should stay alive waiting for some event (like listening on a socket), I would prefer to create an init script, so that the program runs as a daemon at boot time and only in the desired runlevels.
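To illustrate the infinite-loop point: a script meant to stay alive needs its own long-running loop, along the lines of this bare-bones sketch (read_and_store is a hypothetical stand-in for the Phidget-to-MySQL work):
import time

def read_and_store():
    # Hypothetical stand-in for reading the Phidget and inserting into MySQL.
    pass

while True:
    read_and_store()
    time.sleep(60)  # sample once a minute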
