I have 3-4 Python scripts which I want to run continuously on a remote server (which I am accessing through SSH login). The scripts might crash due to exceptions (handling all the exceptions is not practical right now), and in such cases I want to restart the script immediately.
What I have done till now:
I have written a .conf file in /etc/init:
chdir /home/user/Desktop/myFolder
exec python myFile.py
respawn
This seemed to work fine for about four hours, and then it stopped working and I could not start the job from the .conf file.
Suggest changes to this, or I am also open to a new approach.
Easiest way to do it: run an infinite bash loop in screen. It's also the worst way to do it:
screen -S sessionName bash -c 'while true; do python myFile.py; done'
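The same keep-restarting loop can also be written in Python itself, which makes it easier to add logging or a backoff later. A minimal sketch, assuming the script to supervise is called myFile.py (that name and the restart delay are placeholders):

```python
import subprocess
import sys
import time

def run_forever(cmd, delay=1.0, max_restarts=None):
    """Run cmd, restarting it every time it exits.

    Sleeping between restarts keeps a script that crashes instantly
    from spinning the CPU. max_restarts=None means restart forever.
    """
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        result = subprocess.run(cmd)
        restarts += 1
        print(f"{cmd!r} exited with {result.returncode}; restarting",
              file=sys.stderr)
        time.sleep(delay)
    return restarts

# usage (script name is a placeholder):
#   run_forever([sys.executable, "myFile.py"])
```

Run this wrapper under screen or nohup instead of the bare script, and the crash-restart behavior no longer depends on the shell loop.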
You can also use http://supervisord.org/, or run it as a daemon by writing an init.d script for it: http://www.linux.com/learn/tutorials/442412-managing-linux-daemons-with-init-scripts
If your script is running on an Ubuntu machine, you have the very convenient Upstart: http://upstart.ubuntu.com/
Upstart does a great job of running services on boot and respawning a process that has died.
My Setup:
I have a Python script that I'd like to run on a remote host. I'm running a BASH script on my local machine that SSHes into my remote server and runs yet another BASH script, which then kicks off the Python script:
Local BASH script --> SSH --> Remote BASH script --> Remote Python script
The Python script configures a device (a DAQ) connected to the remote server and starts a while(True) loop of sampling and signal generation. When developing this script locally, I had relied on using Ctrl+C and a KeyboardInterrupt exception to interrupt the infinite loop and (most importantly) safely close the device sessions.
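The safe-close pattern described here looks roughly like the sketch below, where step and cleanup are stand-ins for the real sampling code and DAQ-session teardown:

```python
def run_until_interrupted(step, cleanup):
    """Run step() in an infinite loop; on Ctrl+C (KeyboardInterrupt),
    fall through so cleanup() closes device sessions safely."""
    try:
        while True:
            step()
    except KeyboardInterrupt:
        print("interrupted")
    finally:
        cleanup()  # runs on Ctrl+C and on any other exit path
```

Putting the teardown in finally (rather than only in the except branch) means it also runs if the loop dies from an unexpected exception.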
After exiting the Python script, I have my BASH script do a few additional chores while still SSH'd into the remote server.
Examples of my various scripts...
local-script.sh:
ssh user@remotehost "remote-script.sh"
remote-script.sh:
python3 infinite-loop.py
infinite-loop.py:
while True:
    # do stuff...
My Issue(s):
Now that I've migrated this script to my remote server and am running it via SSH, I can no longer use the KeyboardInterrupt to safely exit my Python script. In fact, when I do, I notice that the device being controlled by the Python script is still running (the output signals from my DAQ change as though the Python script were still running). When I manually SSH back into the remote server, I can find the persisting Python script process and must kill it from there (otherwise I get two instances of the Python script running on top of one another if I run the script again). This leads me to believe that I'm actually exiting only the remote-side SSH session that was kicked off by my local script, leaving my remote BASH and Python scripts off wandering on their own... (updated, following the investigation outlined in the Edit 1 section)
In summary, using Ctrl+C while in the remote Python script results in:
Remote Python Script = Still Running
Remote BASH Script = Still Running
Remote SSH Session = Closed
Local BASH Script = Active ([Ctrl]+[C] lands me here)
My Ask:
How can I asynchronously interrupt (but not fully exit) a Python script that was kicked off over an SSH session via a BASH script? Bonus points if we can work within my BASH --> SSH --> BASH --> Python framework... whack as it may be. If we can do it with as few extra pip modules as possible installed on top, you just might become my favorite person!
Edit 1:
Per @dan's recommendation, I started exploring trap statements in BASH scripts. I have yet to be successful in implementing this, but as a way to test its effectiveness, I decided to monitor the process list at different stages of execution... It seems that, once started, I can see my SSH session, my remote BASH script, and its subsequent remote Python script start up as processes. But when I use Ctrl+C to exit, I'm kicked back into the top-level "local" BASH script, and when I check the process list on my remote server, I see the processes for both my remote BASH script and my remote Python script still running... so my remote BASH script is not stopping... I'm, in fact, ONLY ending my SSH session...
In combining the suggestions from the comments (and lots of help from a buddy), I've got something that works for me:
Solution 1:
In summary, I made my remote BASH script record its Group Process ID (GPID; that which is also assigned to the Python script that is spawned by the remote BASH script) to a file, and then had the local BASH script read that file to then kill the group process remotely.
Now my scripts look like:
local-script.sh
ssh user@remotehost "remote-script.sh"
remotegpid=`ssh user@ip "cat gpid_file"`
ssh user@ip "kill -SIGTERM -- -$remotegpid && rm gpid_file"
# ^ After the SSH closes, this goes back in to grab the GPID from the file and then kills it
remote-script.sh
ps -o pgid= $$ | xargs > ~/Desktop/gpid_file
# ^ This gets the BASH script's GPID and writes it to a file without whitespace
python3 infinite-loop.py
infinite-loop.py (unchanged)
while True:
    # do stuff...
This solves only most of the problem: originally I had set out to be able to do things in my Python script after it was interrupted and before exiting back into my BASH scripts, but it turned out I had a bigger problem to catch (what with the scripts continuing to run even after closing my SSH session)...
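For the remaining piece, the Python script could catch the SIGTERM that local-script.sh sends to the process group and run its own teardown there before exiting. A sketch (close_sessions is a hypothetical stand-in for the real DAQ teardown):

```python
import signal
import sys

def close_sessions():
    # hypothetical stand-in for the real device teardown
    print("device sessions closed")

def handle_sigterm(signum, frame):
    close_sessions()
    sys.exit(0)  # raises SystemExit, so finally-blocks still run too

signal.signal(signal.SIGTERM, handle_sigterm)
```

With this in infinite-loop.py, the existing group-kill from the local script doubles as a clean shutdown request rather than an abrupt stop.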
I know this is an exact copy of this question, but I've been trying different solutions for a while and didn't come up with anything.
I have this simple script that uses PRAW to find posts on Reddit. It takes a while, so I need it to stay alive when I log out of the shell as well.
I tried to set it up as a start-up script and to use nohup in order to run it in the background, but none of this worked. I followed the quickstart and I can get the hello world app to run, but all these examples are for web applications, and all I want is to start a process on my VM and keep it running when I'm not connected, without using .yaml configuration files and such. Can somebody please point me in the right direction?
Well, in the end, using nohup was the answer. I'm new to the GNU environment and I just assumed it didn't work when I first tried. My program was exiting with an error, but I hadn't checked the nohup.out file, so I was unaware of it.
Anyway here is a detailed guide for future reference (Using Debian Stretch):
Make your script an executable
chmod +x myscript.py
Run the nohup command to execute the script in the background. The & puts the process in the background, and nohup keeps it alive after you log out. I've added a shebang line to my Python script, so there's no need to call python here:
nohup /path/to/script/myscript.py &
Log out from the shell if you want:
logout
Done! Now your script is up and running. You can log back in and make sure that your process is still alive by checking the output of this command:
ps -e | grep myscript.py
I am currently using Linux. I have a Python script which I want to run as a background service, so that the script starts running when I start my machine.
Currently I am using Python 2.7 and the command python myscript.py to run the script.
Can anyone give me an idea of how to do this?
Thank you.
It depends on where in the startup process you want your script to run. If you want your script to start up during the init process, then you can incorporate it into the init scripts in /etc/init.d/. The details will depend on which init system your system is running: you might be on a System V init (https://en.wikipedia.org/wiki/Init) or on systemd (https://wiki.debian.org/systemd), or possibly some other one.
If you don't need your script to run at the system level, then you could kick the script off when you log in. To do that, you'd put it in ~/.profile if you log in using a terminal. Or, if you use a desktop environment, then you're going to be doing something in ~/.local/XSession (if I recall correctly). Different desktop environments are going to have different ways to specify what happens when a user logs in.
Hope this helps! Maybe clarify your needs if you want more detail.
You can create an init script in the /etc/init/ directory.
Example:
start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 5
respawn
script
exec /usr/bin/python /path/to/script.py
end script
Save it with a .conf extension.
I created a daemon process with this library: liblinktosite
I connect through SSH and start the process with python myDaemon.py start.
I use a loop within the daemon method to do my tasks, but as soon as I log out, the daemon stops (dies).
Does this happen because I save the PID file under my user and not in the root folder?
Anyone have an idea? I can deliver code, but not right now (+3h).
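This is most likely not about where the PID file lives: a process that never detaches from its controlling terminal gets SIGHUP when the SSH session closes. A proper daemon does the double-fork/setsid dance (or you simply start it under nohup or screen, as below). A minimal sketch of the detach step, assuming the daemon library isn't doing it already:

```python
import os
import sys

def daemonize():
    """Detach from the controlling terminal so the process survives logout."""
    if os.fork() > 0:
        sys.exit(0)        # original process returns to the shell
    os.setsid()            # new session: no controlling TTY, no SIGHUP on logout
    if os.fork() > 0:
        sys.exit(0)        # session leader exits; grandchild can't reacquire a TTY
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):   # detach stdio from the (soon-to-be-gone) terminal
        os.dup2(devnull, fd)
```

Call daemonize() at the top of myDaemon.py's start path; after it returns, the surviving process belongs to its own session and keeps running after logout.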
Use a shebang line in your Python script and make it executable using the command:
chmod +x test.py
Use nohup ("no hangup") to run a program in the background even if you close your terminal:
nohup /path/to/test.py &
Do not forget the & to put it in the background.
To see the process again, use in a terminal:
ps ax | grep test.py
Another way would be to actually make it an Upstart script.
I've set up an Amazon EC2 server. I have a Python script that is supposed to download large amounts of data from the web onto the server. I can run the script from the terminal through SSH; however, very often I lose the SSH connection. When I lose the connection, the script stops.
Is there a method where I tell the script to run from the terminal and, when I disconnect, the script is still running on the server?
You have a few options.
You can add your script to cron to be run regularly.
You can run your script manually, and detach+background it using nohup.
You can run a tool such as GNU Screen, and detach your terminal and log out, only to continue where you left off later. I use this a lot.
For example:
Log in to your machine, run: screen.
Start your script and either just close your terminal or properly detach your session with: Ctrl+A, D, D.
Disconnect from your terminal.
Reconnect at some later time, and run screen -rD. You should see your stuff just as you left it.
You can also add your script to /etc/rc.d/ to be invoked on boot and always be running.
You can also use nohup to make your script run in the background or when you have disconnected from your session:
nohup script.py &
The & at the end of the command tells the shell to run the command in the background, so you get your prompt back immediately.
If it is just a utility you run ad hoc, not a service daemon of some kind, I would just run it in screen. Then you can disconnect if you want and open the terminal back up later... or reconnect the terminal if you get disconnected. It should be in your Linux distro's package manager; just search for screen.
http://www.gnu.org/software/screen/
nohup runs the given command with hangup signals ignored, so that the command can continue running in the background after you log out.
Syntax:
nohup Command [Arg]...
Example:
nohup example.py
nohup rasa run
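What nohup does can also be approximated from inside the script itself, which helps if you tend to forget to type it. A sketch (a two-line addition near the top of the script):

```python
import signal

# ignore terminal hangups, as nohup would, so the script keeps
# running even if the SSH session that started it goes away
signal.signal(signal.SIGHUP, signal.SIG_IGN)
```

You would still want to background the script with & (and redirect its output somewhere), since ignoring SIGHUP only prevents the hangup kill, not the loss of the terminal.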
Also, you can run scripts on a schedule using cron.
For more:
https://ss64.com/bash/nohup.html
https://opensource.com/article/17/11/how-use-cron-linux