Debug long-running Python script that crashes? - python

I have a complex python 2.7 script that is launched by a node.js app using spawn. This script runs fine most of the time, but occasionally, it crashes and I don't know why. I did not have this problem when launching the script from the command line or as a service.
To simulate this, I start the script via node.js, get the process ID using the ps or pgrep command, then kill it using kill -9 on that process ID. My question is: how do I get my script to generate an error code or debug output when it crashes unexpectedly?
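For reference, a minimal sketch (the handler and log-file names here are illustrative, not from the question) of the kind of diagnostics that can help: catchable signals such as SIGTERM and any uncaught exception can be logged before exiting, although SIGKILL (kill -9) can never be intercepted by the script itself.
import logging
import signal
import sys
logging.basicConfig(filename="script_crash.log", level=logging.DEBUG)
def handle_signal(signum, frame):
    # Log the signal and exit with a non-zero code that the node.js parent can see.
    logging.error("Received signal %d, shutting down", signum)
    sys.exit(1)
def log_uncaught(exc_type, exc_value, exc_tb):
    # Record any unhandled exception with its traceback, then fall back to the default hook.
    logging.critical("Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))
    sys.__excepthook__(exc_type, exc_value, exc_tb)
signal.signal(signal.SIGTERM, handle_signal)
signal.signal(signal.SIGINT, handle_signal)
sys.excepthook = log_uncaught
# ... long-running work goes here ...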

Related

Jenkins run a script in background

How can I trigger a script, say A (in Python), using Jenkins such that a shell script launched internally by script A keeps running in the background even after the Jenkins build is done?
Right now, what I observe is that as soon as the Jenkins job ends, it kills the background shell script too.
However, running the Python script manually in a terminal works fine.
Is there a way to stop Jenkins from killing that background shell script?
After searching for a solution, I came across this link, "Spawning processes from build":
https://wiki.jenkins.io/display/JENKINS/Spawning+processes+from+build
Adding the command below to the build step helped:
BUILD_ID=dontKillMe nohup shell_script_to_run.sh &
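If script A launches the background script itself, a rough Python sketch (the path and the os.setsid detach are assumptions, not from the answer) is to set the same BUILD_ID=dontKillMe marker in the child's environment so the Jenkins ProcessTreeKiller leaves it alone:
import os
import subprocess
env = dict(os.environ, BUILD_ID="dontKillMe")  # marker the ProcessTreeKiller looks for
with open(os.devnull, "wb") as devnull:
    subprocess.Popen(
        ["./shell_script_to_run.sh"],
        env=env,
        stdout=devnull,
        stderr=devnull,
        preexec_fn=os.setsid,  # start the script in its own session, similar to nohup
    )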

Executing python script without being connected to server

I need to execute a Python script on a remote server (accessed through PuTTY), but I don't have a stable Internet connection, and every time I execute the script I run into problems after several minutes because my Internet disconnects.
How do I remotely execute the script without being connected to server?
(e.g. I connect to server, run script, and can logout while executing)
You can use GNU Screen; it opens a background terminal and keeps a shell active even through network disruptions.
Open a screen by typing $ screen in your terminal and run your script there; even if you lose the connection, it won't kill the process.
You will find well-explained how-to guides for this program online; I use it every day when working on remote machines.
Try this:
nohup your_script >/dev/null 2>&1 &
The program will keep running in the background.

python run shell script that spawns detached child processes

Updated post:
I have a Python web application running on a port. It is used to monitor some other processes, and one of its features is to allow users to restart their own processes. The restart is done by invoking a bash script, which restarts those processes and runs them in the background.
The problem is, whenever I kill off the Python web application after I have used it to restart any user's processes, those processes take over the port used by the Python web application in a round-robin fashion, so I am unable to restart it because the port is already bound. As a result, I must kill off the processes involved in the restart until nothing occupies the port the Python web application uses.
Everything else works, but having those processes occupy the port is really undesirable.
Processes that may be restarted:
redis-server
newrelic-admin run-program (which spawns another web application)
a python worker process
UPDATE (6 June 2013): I have managed to solve this problem. Look at my answer below.
Original Post:
I have a python web application running on a port. This python program has a function that calls a bash script. The bash script spawns a few background processes, then exits.
The problem is, whenever I kill the python program, the background processes spawned by the bash script will take over and occupy that same port.
Specifically the subprocesses are:
a redis server (with daemonize = true in the configuration file)
newrelic-admin run-program (spawns a web application)
a python worker process
Update 2: I've tried running these with nohup. Only the python worker process doesn't attempt to take over the port after I kill the Python web application; the redis server and newrelic-admin still do.
I observed this problem when I was using subprocess.call in the python program to run the bash script. I've tried a double fork method in the python program before running the bash script, but it results in the same problem.
How can I prevent any processes spawned from the bash script from taking over the port?
Thank you.
Update: My intention is that those processes spawned by the bash script should continue running if the Python application is killed, and currently they do. The problem is that when I kill the Python application, the processes spawned by the bash script start to take over its port in a round-robin fashion.
Update 3: Based on the output I see from 'pstree' and 'ps -axf', processes 1 and 2 (the redis server and the web app spawned by newrelic-admin run-program) are not child processes of the Python web application. This makes it even weirder that they take over the port the Python web application occupies when I kill it... Does anyone know why?
Just some background on the methods I've tried to solve my above problem, before I go on to the answer proper:
subprocess.call
subprocess.Popen
execve
the double fork method along with one of the above (http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/)
By the way, none of the above worked for me. Whenever I killed off the web application that executes the bash script (which in turn spawns some background processes, which we shall call Q), the processes in Q would, in a round-robin fashion, take over the port occupied by the web application, so I had to kill them one by one before I could restart my web application.
After many days of living with this problem and moving on to other parts of my project, I remembered some Stack Overflow posts and other articles I had read, as well as my own experience of ssh'ing into a remote machine, starting a detached screen session, logging out, and logging in again some time later to find the screen session still alive.
So I thought, hey, what the heck, nothing has worked so far, so I might as well try screen and see if it solves my problem. To my great surprise and joy, it does! So I am posting this solution in the hope that it helps those facing the same issue.
In the bash script, I simply started the processes inside named, detached screen sessions. For instance, for the redis application, I might start it like this:
screen -dmS redisScreenName redis-server redis.conf
So those processes keep running in the detached screen sessions they were started in. In this case, I did not daemonize the redis process.
To kill the screen process, I used:
screen -S redisScreenName -X quit
However, this does not kill the redis-server. So I had to kill it separately.
Now, in the python web application, I can just use subprocess.call to execute the bash script, which will spawn detached screen sessions (using 'screen -dmS') which run the processes I want to spawn. And when I kill off the python web application, none of the spawned processes take over its port. Everything works smoothly.
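For completeness, a minimal sketch of the Python side (restart.sh is a hypothetical name for the bash script described above):
import subprocess
def restart_user_processes():
    # The bash script starts each process in its own detached screen session
    # (screen -dmS ...), so nothing it spawns stays tied to this web application.
    return subprocess.call(["bash", "restart.sh"])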

Launch Application and execute script lines from python

I want to write a Python script which can launch an application. The application being launched can also read Python commands, which I am passing through another script.
The problem I am facing is that I need to use two Python scripts, one to launch the application and a second one to run commands in the launched application.
Can I achieve this using a single script? How do I tell Python to run the next few lines of the script in the launched application?
In general, you use subprocess.Popen to launch a command from Python. Since it doesn't block while the command runs, you can keep executing Python statements, and you also have access to the running subprocess's stdin and stdout, so you can interact with the running application.
If I understand what you're asking, it'd look something like this:
import subprocess
# Launch the application with its stdin connected to a pipe so we can send it commands.
app = subprocess.Popen(["/path/to/app", "-and", "args"], stdin=subprocess.PIPE)
app.stdin.write("python command\n")
app.stdin.flush()  # make sure the command actually reaches the application
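A slightly extended sketch (the command and its arguments are placeholders) that also captures the application's output; universal_newlines lets the same text-based code run on both Python 2 and 3:
import subprocess
app = subprocess.Popen(
    ["/path/to/app", "-and", "args"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
out, _ = app.communicate("python command\n")  # send the command, then wait for the app to exit
print(out)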

How to continuously run a Python script on an EC2 server?

I've set up an Amazon EC2 server. I have a Python script that is supposed to download large amounts of data from the web onto the server. I can run the script from the terminal through ssh, but I very often lose the ssh connection, and when I lose the connection, the script stops.
Is there a method where I tell the script to run from terminal and when I disconnect, the script is still running on the server?
You have a few options.
You can add your script to cron to be run regularly.
You can run your script manually, and detach+background it using nohup.
You can run a tool such as GNU Screen, detach your terminal, and log out, then continue where you left off later. I use this a lot.
For example:
Log in to your machine, run: screen.
Start your script and either just close your terminal or properly detach your session with: Ctrl+A, D, D.
Disconnect from your terminal.
Reconnect at some later time, and run screen -rD. You should see your stuff just as you left it.
You can also add your script to /etc/rc.d/ to be invoked on boot and always be running.
You can also use nohup to keep your script running in the background even after you have disconnected from your session:
nohup script.py &
The & at the end of the command tells the shell to run your script in the background, while nohup keeps it running after you log out.
If it's just a utility you run ad hoc, not a service daemon of some kind, I would just run it in screen. Then you can disconnect if you want and open the terminal back up later, or reconnect it if you get disconnected. It should be in your Linux distro's package manager; just search for screen.
http://www.gnu.org/software/screen/
nohup runs the given command with hangup signals ignored, so that the command can continue running in the background after you log out.
Syntax:
nohup Command [Arg]...
Example:
nohup example.py
nohup rasa run
Also, you can run scripts regularly using cron.
For more:
https://ss64.com/bash/nohup.html
https://opensource.com/article/17/11/how-use-cron-linux
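For example, a crontab entry along these lines (the path and five-minute schedule are only illustrative) makes cron re-run the script regardless of any SSH session:
*/5 * * * * /usr/bin/python /home/ubuntu/download.py >> /home/ubuntu/download.log 2>&1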
