How to kill all Celery background processes - python

Killing all Celery processes involves grepping the output of the 'ps' command and running 'kill' on every matching PID. Grepping the 'ps' output also shows the grep command's own process, so 'grep -v grep' is piped in to exclude the line matching the search key itself. The 'awk' command keeps only the 2nd column (the PID), and 'tr' joins the newline-separated PIDs into a single space-separated line. Piping the result directly into 'kill' did not work, so the whole pipeline is wrapped in command substitution, i.e., '$()'. The redirection at the end is not mandatory; it only suppresses any output by sending it to /dev/null.
kill -9 $(ps aux | grep celery | grep -v grep | awk '{print $2}' | tr '\n' ' ') > /dev/null 2>&1
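For clarity, here is the same pipeline built up stage by stage; this is only an illustrative breakdown of the command above, with 'celery' as the search key and the default ps aux column layout assumed.
# Stage 1: list every process and keep only the lines mentioning "celery"
ps aux | grep celery
# Stage 2: drop the line belonging to the grep process itself
ps aux | grep celery | grep -v grep
# Stage 3: keep only the 2nd column, which is the PID
ps aux | grep celery | grep -v grep | awk '{print $2}'
# Stage 4: join the PIDs into one space-separated list and hand them to kill
kill -9 $(ps aux | grep celery | grep -v grep | awk '{print $2}' | tr '\n' ' ')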

Apart from the standard Unix ways of killing Celery processes, Celery also provides an API to shut down all the workers listening on a particular broker.
To do this from Python, refer to the Celery control documentation; the shutdown() call shown below uses Celery's broadcast mechanism internally.
app.control.shutdown()
where app is the Celery app instance configured with the broker.
The Celery command-line interface can also be used for the same purpose:
celery -A app_name control shutdown
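As a usage sketch of the command-line route (app_name below is a placeholder for your actual Celery application), you can first check which workers are reachable on the broker and then broadcast the shutdown:
# Ask every worker listening on the broker to reply; lists the reachable workers
celery -A app_name inspect ping
# Broadcast the shutdown control command to all of those workers
celery -A app_name control shutdown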

Related

How to restart specific running python file Ubuntu

I have a running Python file "cepusender/main.py" (and other Python files). How can I restart/kill only main.py?
Here's a way (there are many):
ps -ef | grep 'cepusender/main.py' | grep -v grep | awk '{print $2}' | xargs kill
ps is the process snapshot command. -e prints every process on the system, and -f prints the full-format listing, which, germanely, includes the command line arguments of each process.
grep prints lines matching a pattern. We first grep for your file, which will match both the python process and the grep process. We then grep -v (invert match) for grep, paring output down to just the python process.
Output now looks like the following:
user 77864 68024 0 13:53 pts/4 00:00:00 python file.py
Next, we use awk to pull out just the second column of the output, which is the process ID or PID.
Finally, we use xargs to pass the PID to kill, which asks the python process to shut down gracefully.
kill is the command to send signals to processes.
You can use kill -9 PID to kill your python process, where 9 is the number for SIGKILL and PID is the python Process ID.
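If the goal is a restart rather than just a kill, a minimal sketch along the same lines (assuming the script lives at cepusender/main.py and should simply be relaunched with python in the background) is:
# Ask the running instance to terminate (kill sends SIGTERM by default)
ps -ef | grep 'cepusender/main.py' | grep -v grep | awk '{print $2}' | xargs kill
# Give it a moment to exit, then start a fresh instance in the background
sleep 2
python cepusender/main.py &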

Run pkill command in bash script. Script dead [duplicate]

I'm writing a stop routine for a start-up service script:
do_stop()
{
    rm -f $PIDFILE
    pkill -f $DAEMON || return 1
    return 0
}
The problem is that pkill (same with killall) also matches the process representing the script itself and it basically terminates itself. How to fix that?
You can explicitly filter out the current PID from the results:
kill $(pgrep -f $DAEMON | grep -v ^$$\$)
To correctly use the -f flag, be sure to supply the whole path to the daemon rather than just a substring. That will prevent you from killing the script (and eliminate the need for the above grep) and also from killing all other system processes that happen to share the daemon's name.
pkill -f accepts a full-blown regex, so rather than pkill -f $DAEMON you should use:
pkill -f "^"$DAEMON
This ensures that a process is killed only if its command line starts with the given daemon name.
A better solution is to save the PID (process ID) of the process in a file when you start it. To stop the process, just read the file to get the PID to be stopped/killed.
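A minimal sketch of that approach, reusing the $PIDFILE and $DAEMON variables from the script above (the exact error handling is an assumption), might look like this:
# start: launch the daemon in the background and record its PID
do_start()
{
    $DAEMON &
    echo $! > "$PIDFILE"
}
# stop: read the recorded PID back and signal only that process
do_stop()
{
    [ -f "$PIDFILE" ] || return 1
    kill "$(cat "$PIDFILE")" || return 1
    rm -f "$PIDFILE"
    return 0
}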
Judging by your question, you're not set on using pgrep and pkill, so here are some other commonly used options.
1) Use killproc from /etc/init.d/functions or /lib/lsb/init-functions (whichever is appropriate for your distribution and version of Linux). If you're writing a service script, you may already be including this file if you used one of the other services as an example.
Usage: killproc [-p pidfile] [ -d delay] {program} [-signal]
The main advantage of using this is that it sends SIGTERM, waits to see if the process terminates, and sends SIGKILL only if necessary (a sketch using it appears after these options).
2) You can also use the secret sauce of killproc, which is to find the process IDs to kill using pidof, which has a -o option for excluding a particular process. The argument for -o could be $$, the current process ID, or %PPID, which is a special variable that pidof interprets as the script calling pidof. Finally, if the daemon is a script, you'll need -x so that you kill the script by its name rather than killing bash or python.
for pid in $(pidof -o %PPID -x progd); do
    kill -TERM $pid
done
You can see an example of this in the article Bash: How to check if your script is already running
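Going back to option 1, a do_stop sketch built around killproc might look like the following; this assumes the functions file has already been sourced and that $DAEMON and $PIDFILE are set as in the original script:
do_stop()
{
    # killproc sends SIGTERM, waits, and escalates to SIGKILL only if needed
    killproc -p "$PIDFILE" "$DAEMON"
    RETVAL=$?
    rm -f "$PIDFILE"
    return $RETVAL
}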

(re)start python script with applescript

I have a Python script that runs in an infinite loop (it's a server).
I want to write an AppleScript that will start this script if it isn't started yet, and otherwise force-quit and restart it. This will make it easy for me to make changes to the server code while programming.
Currently I only know how to start it: do shell script "python server.py"
Note that AppleScript's do shell script starts the shell (/bin/sh) in the root directory (/) by default, so you should specify an explicit path to server.py
In the following examples I'll assume directory path ~/srv.
Here's the shell command:
pid=$(pgrep -fx 'python .*/server\.py'); [ "$pid" ] && kill -9 $pid; python ~/srv/server.py
As an AppleScript statement, wrapped in do shell script - note the \-escaped inner " and \ characters:
do shell script "pid=$(pgrep -fx 'python .*/server\\.py'); [ \"$pid\" ] && kill -9 $pid; python ~/srv/server.py"
pgrep -fx '^python .*/server\.py$' uses pgrep to find your running command by regex against the full command line (-f), requiring a full match (-x), and returns the PID (process ID), if any.
Note that I've used a more abstract regex to underscore the fact that pgrep (always) treats its search term as a regular expression.
To specify the full launch command line as the regex, use python ~/srv/server\.py - note the \-escaping of . for full robustness.
[ "$pid" ] && kill -9 $pid kills the process, if a PID was found ([ "$pid" ] is short for [ -n "$pid" ] and evaluates to true only if $pid is nonempty); -9 sends signal SIGKILL, which forcefully terminates the process.
python ~/srv/server.py then (re)starts your server.
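One practical note: do shell script waits for its command to finish, and since server.py runs indefinitely, the statement above would block until the server exits. A common workaround (sketched here under the assumption that the server's output can be discarded) is to redirect the output and background the final command so the statement returns immediately:
do shell script "pid=$(pgrep -fx 'python .*/server\\.py'); [ \"$pid\" ] && kill -9 $pid; python ~/srv/server.py > /dev/null 2>&1 &"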
On the shell, if you do ps aux | grep python\ server.py | head -n1, you'll get the process-table line for server.py, whose second column is its PID. You can then use kill -9 to kill that process and restart it:
kill -9 `ps aux | grep python\ server.py | head -n1 | python -c 'import sys; print(sys.stdin.read().split()[1])'`
That'll kill it. All you have to do now is restart it:
python server.py
You can combine the two with &&:
kill -9 `ps aux | grep python\ server.py | head -n1 | python -c 'import sys; print(sys.stdin.read().split()[1])'` && python server.py
Of course, you already know how to put that in a do shell script!

Starting and stopping celery processes in upstart with a python wrapper script

So we have an application that has celery workers. We start those workers using an upstart file /etc/init/fact-celery.conf that looks like the following:
description "FaCT Celery Worker."
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
setuid fact
setgid fact
script
    [ -r /etc/default/fact ] && . /etc/default/fact
    if [ "$START_CELERY" != "yes" ]; then
        echo "Service disabled in '/etc/default/fact'. Not starting."
        exit 1
    fi
    ARGUMENTS=""
    if [ "$BEAT_SERVICE" = "yes" ]; then
        ARGUMENTS="--beat"
    fi
    /usr/bin/fact celery worker --loglevel=INFO --events --schedule=/var/lib/fact/celerybeat-schedule --queues=$CELERY_QUEUES $ARGUMENTS
end script
It calls a wrapper script (a bash script that launches the Python management command) that looks like the following:
#!/bin/bash
WHOAMI=$(whoami)
PYTHONPATH=/usr/share/fact
PYTHON_BIN=/opt/fact-virtual-environment/bin/python
DJANGO_SETTINGS_MODULE=fact.settings.staging
if [ ${WHOAMI} != "fact" ];
then
    sudo -u fact $0 $*;
else
    # Python needs access to the CWD, but we need to deal with apparmor restrictions
    pushd $PYTHONPATH &> /dev/null
    PYTHONPATH=${PYTHONPATH} DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE} ${PYTHON_BIN} -m fact.managecommand $*;
    popd &> /dev/null
fi
The trouble with this setup is that when we stop the service, we are left with orphaned fact-celery workers that don't die. For some reason upstart can't track the forked processes; I've read in similar posts that upstart can't track more than two forks.
I've tried using expect fork but then upstart just hangs whenever I try to start or stop the service.
Other posts I've found on this say to call the python process directly instead of using the wrapper script, but we've already built apparmor profiles around these scripts and there are other things in our workflow that are pretty dependent on them.
Is there any way, with the current wrapper scripts, to handle killing off all the celery workers on a service stop?
There is some discussion about this in the Workers Guide, but basically the usual process is to send a TERM signal to the worker, which will cause it to wait for all currently running tasks to finish before exiting cleanly.
Alternatively, you can send the KILL signal if you want it to stop immediately, with potential data loss, but in that case Celery isn't able to intercept the signal and clean up its child processes. The only recourse mentioned is to clean up the children manually, like this:
$ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
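If the wrapper script has to stay, one option (an untested sketch; the pattern 'celery worker' is an assumption about what the worker command lines look like on your system) is to add a post-stop stanza to the upstart job so that any leftover workers are cleaned up whenever the service stops:
post-stop script
    # Ask any remaining workers to finish their current tasks and exit
    pkill -TERM -f 'celery worker' || true
    sleep 5
    # Force-kill anything that is still hanging around
    pkill -KILL -f 'celery worker' || true
end script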

Bash: running background job and getting pid

$1 &
echo $!
is there a different way to launch a command in the background and return the pid immediately?
So when I launch bash run.sh "python worker.py" it will give me the pid of the launched job.
I am using paramiko, a Python library that doesn't work with python worker.py &, so I want to create a bash script that will do this for me on the remote server.
Since you're using bash, you can just get the list of background processes from jobs, and instruct it to return the PID via the -l flag. To quote man bash:
jobs [-lnprs] [ jobspec ... ]
jobs -x command [ args ... ]
    The first form lists the active jobs. The options have the following meanings:
    -l    List process IDs in addition to the normal information.
So in your case, something like the following would probably give you what you want:
jobs -l | grep 'worker.py' | awk '{print $2}'
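Putting that together, a minimal run.sh sketch (assuming the command is passed as a single quoted argument, exactly as in the question) could look like:
#!/bin/bash
# Start the command passed as the first argument in the background
$1 &
# List the background jobs with their PIDs and print the PID column
jobs -l | grep "$1" | awk '{print $2}'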
