Combine all the kill bash commands into one line - python

I am using the following commands to kill my servers at the moment and want to combine them into one.
1- ps aux | grep 'python manage.py runserver'
sudo kill -9 $PID
2- ps aux | grep 'python -m SimpleHTTPServer'
sudo kill -9 $PID
3- ps aux | grep 'aptly api server'
sudo kill -9 $PID
Is there a way to kill all three processes with a single command, or at least combine them?
EDIT: I am trying the command below, but it only prints out a single PID number.

ps aux | egrep -i 'python manage.py runserver|aptly api serve|python -m SimpleHTTPServer' | awk '{print $2}' | xargs kill -9
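One way to combine all three into a single command is pkill (a sketch, assuming the procps pkill is available; -f matches the pattern, an extended regex, against the full command line):

sudo pkill -9 -f 'python manage.py runserver|python -m SimpleHTTPServer|aptly api server'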

Related

How to kill a Python program in Linux using the name of the program

When I use ps -ef | grep I get the currently running programs.
If the lines below are the currently running programs, how can I stop one of them using its name?
user 8587 8577 30 12:06 pts/9 00:03:07 python3 program1.py
user 8588 8579 30 12:06 pts/9 00:03:08 python3 program2.py
E.g. if I want to stop program1.py, how can I stop the process using the program name "program1.py"?
Any suggestions on killing the program from Python would also be great.
Using psutil this is fairly easy:
import psutil

# Find every process whose command line mentions program1.py, then kill the first match
procs = [p for p in psutil.process_iter()
         if any('program1.py' in part for part in p.cmdline())]
procs[0].kill()
To find a process by name, filter the process list with psutil, as in Cross-platform way to get PIDs by process name in python.
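A minimal sketch of that approach (the helper name pids_by_name is hypothetical):

import psutil

def pids_by_name(name):
    # Match the name against the process name or any part of its command line
    return [p.pid for p in psutil.process_iter(['name', 'cmdline'])
            if name in (p.info['name'] or '')
            or any(name in part for part in (p.info['cmdline'] or []))]

print(pids_by_name('program1.py'))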
Try doing this with the process name:
pkill -f "Process name"
For example, if you want to kill the process "program1.py", type in:
pkill -f "program1.py"
Let me know if it helps!
Assuming you have the pkill utility installed, you can just use:
pkill -f program1.py
(The -f flag is needed because the process name itself is python3; -f matches the pattern against the full command line.)
If you don't, using more common Linux commands (the grep -v grep filters out the grep process itself):
kill $(ps -ef | grep program1.py | grep -v grep | awk '{print $2}')
If you insist on using Python for that, see How to terminate process from Python using pid?
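The core of that approach is os.kill (a minimal sketch, reusing the PID from the ps listing above):

import os
import signal

os.kill(8587, signal.SIGTERM)  # ask the process to terminate; signal.SIGKILL forces it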
grep for the program, then pipe the output of each command into the next:
1. List all processes: ps -ef
2. Search for the program: grep program
3. Remove the grep itself, which appears in the process list because of the search: grep -v grep
4. Extract the PID field with awk: awk '{ print $2 }'
5. Apply kill to each PID from the previous step: xargs kill -9
ps -ef | grep program | grep -v grep | awk '{ print $2 }' | xargs kill -9
See here for more about pipes, awk, and xargs.
With Python you can use os:
import os

# The doubled braces escape awk's block for str.format
template = "ps -ef | grep {program} | grep -v grep | awk '{{ print $2 }}' | xargs kill -9"
os.system(template.format(program="work.py"))
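If you prefer subprocess over os.system, an equivalent sketch (subprocess.run needs Python 3.5+):

import subprocess

template = "ps -ef | grep {program} | grep -v grep | awk '{{ print $2 }}' | xargs kill -9"
subprocess.run(template.format(program="work.py"), shell=True)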

Get source file path of a running python script from process id

I have a Python process running in the background. With ps -ef I can see the filename in the running command:
UID PID PPID ... python ./filename.py
How can I find out where the file is located?
pwdx <PID> gives the full directory the process is running from.
So, the full script would be
ps -ef | grep 'your process' | awk '{print $2}' | xargs pwdx
Though you can simplify this to
pgrep 'your process' | awk '{print $1}' | xargs pwdx
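The same information is also available from Python with psutil (a sketch; the PID 1234 is hypothetical):

import psutil

p = psutil.Process(1234)
print(p.cwd())      # working directory of the process, same as pwdx reports
print(p.cmdline())  # full command line, including the script path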

Closing IPython while a script is running

I closed IPython explicitly while a script was running. The script didn't stop and is still running in the background, creating some output files. How can I stop the script?
You can kill the IPython process with this command:
ps aux | grep ipython | grep -v "grep ipython" | awk '{print $2}' | xargs kill -9
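Alternatively, pkill collapses the grep/awk/xargs chain into a single step (assuming pkill is available):

pkill -9 -f ipython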

python subprocess.check_output giving wrong output

I have a problem executing this code:
subprocess.check_output(['ps -ef | grep ftp | wc -l'],env=environ,shell=True)
When I execute this from the terminal:
ps -ef | grep ftp | wc -l
I get "1" as output which is fine.
Now, I execute same code from my python files as subprocess.check_output and it gives me 2. That is strange. Any Ideas why is it happening. Here is the complete code:
import os
import subprocess

def countFunction():
    environ = dict(os.environ)
    return subprocess.check_output(['ps -ef | grep ftp | wc -l'], env=environ, shell=True)

count = countFunction()
print count
EDIT:
Just to update: I do not have any FTP connections open, so the command line printing 1 is fine.
Thanks
Arvind
The grep command will find itself:
$ ps -ef | grep ftp
wallyk 12546 12326 0 16:25 pts/3 00:00:00 grep ftp
If you don't want that, exclude the grep command:
$ ps -ef | grep ftp | grep -v grep
$
It would be better to drop the -f switch from ps so that the command-line arguments are not searched. That way, it won't find the grep ftp process at all:
$ ps -e | grep ftp | wc -l
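A pure-Python alternative sidesteps the self-match entirely, since no grep process is ever started (a sketch using psutil):

import psutil

# Count processes whose name contains "ftp"
count = sum(1 for p in psutil.process_iter(['name'])
            if 'ftp' in (p.info['name'] or ''))
print(count)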

How to stop celery worker process

I have a Django project on an Ubuntu EC2 node, which I have been using to set up asynchronous task processing with Celery.
I am following this along with the docs.
I've been able to get a basic task working at the command line, using:
(env1)ubuntu#ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO
to start a worker. I have since made some changes to the Python code, but realized that I need to restart the worker.
From the command line, I've tried:
ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
But I can see that the worker is still running.
How can I kill it?
edit:
(env1)ubuntu#ip-172-31-22-65:~/projects/tp$ sudo ps auxww | grep celeryd | grep -v "grep" | awk '{print $2}' | sudo xargs kill -HUP
kill: invalid argument H
Usage:
kill [options] <pid> [...]
Options:
<pid> [...] send signal to every <pid> listed
-<signal>, -s, --signal <signal>
specify the <signal> to be sent
-l, --list=[<signal>] list all signal names, or convert one to a name
-L, --table list all signal names in a nice table
-h, --help display this help and exit
-V, --version output version information and exit
For more details see kill(1).
edit 2:
(env1)ubuntu#ip-172-31-22-65:~/projects/tp$ ps aux|grep celery
ubuntu 9756 0.0 3.4 100868 35508 pts/6 S+ 15:49 0:07 /home/ubuntu/.virtualenvs/env1/bin/python3.4 /home/ubuntu/.virtualenvs/env1/bin/celery --app=tp.celery:app worker --loglevel=INFO
ubuntu 9760 0.0 3.9 255840 39852 pts/6 S+ 15:49 0:05 /home/ubuntu/.virtualenvs/env1/bin/python3.4 /home/ubuntu/.virtualenvs/env1/bin/celery --app=tp.celery:app worker --loglevel=INFO
ubuntu 12760 0.0 0.0 10464 932 pts/7 S+ 19:04 0:00 grep --color=auto celery
Try this in a terminal:
ps aux | grep 'celery worker'
You will see something like this:
username 29042 0.0 0.6 23216 14356 pts/1 S+ 00:18 0:01 /bin/celery worker ...
Then kill the process by its id:
sudo kill -9 process_id # here 29042
If you have multiple processes, you have to kill every process id using the above kill command:
sudo kill -9 id1 id2 id3 ...
From the Celery docs:
ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
Or, if you are running celeryd:
ps auxww | grep celeryd | awk '{print $2}' | xargs kill -9
Note
If you are running Celery under supervisor, the process automatically restarts even if you kill it (if autorestart=true is set in the supervisor config).
pkill -f "celery worker"
easy to kill process by string patterns
If the celery worker is running on a machine you do not have access to, you can use Celery "remote control" to control workers through messages sent via the broker.
celery control shutdown
This will kill all workers immediately. Depending on your setup, you might have to pass the app, e.g. celery -A myProject control shutdown, as with Django.
Documentation here.
ps auxww | grep 'celery worker' | grep -v " grep " | awk '{print $2}' | xargs kill -9
This one is very similar to the one presented before, but improved: the grep -v " grep " avoids the error that appears when the pipeline tries to kill its own grep process.
In case someone's looking to shut down their Celery app programmatically, the same thing can be done from Python with:
celery_app.control.shutdown()
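For context, a minimal sketch of that call (the app name and broker URL here are assumptions):

from celery import Celery

celery_app = Celery('myproject', broker='redis://localhost:6379/0')
celery_app.control.shutdown()  # broadcasts a shutdown request to all workers via the broker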
