I'm running strandtest.py to power a NeoPixel matrix from a Django server.
A button click runs the program.
I'd like a second button click to kill it.
I've read a bunch of different solutions, but I am having trouble following the instructions. Is there another subprocess command I can execute while the program runs?
This is how I start the program:
import subprocess
from django.http import HttpResponse

def test(request):
    sp = subprocess.Popen('sudo python /home/pi/strandtest.py', shell=True)
    return HttpResponse()
I want to kill the process and clear the matrix.
You can try running the command below with subprocess.Popen:
kill -9 `ps aux | grep /home/pi/strandtest.py | awk -F' ' '{print $2}'`
kill -9 - kills your process
ps aux combined with grep - finds only the process you need
awk - extracts just the PID of that process
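For example, a second Django view could run that command. This is only a sketch; the view name stop is chosen here for illustration:

import subprocess
from django.http import HttpResponse

def stop(request):
    # Run the kill pipeline from above. Note that grep briefly matches its
    # own process too; by the time kill runs, that grep has already exited.
    subprocess.Popen(
        "kill -9 `ps aux | grep /home/pi/strandtest.py | awk -F' ' '{print $2}'`",
        shell=True)
    return HttpResponse()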
Related
I need help with killing an application in Linux.
As a manual process, I can use the command ps -ef | grep "app_name" | awk '{print $2}'.
It gives me the job IDs, and then I kill them using kill -9 jobid.
I want a Python script that can do this task.
I have written the code as:
import os
os.system("ps -ef | grep app_name | awk '{print $2}'")
This prints the job IDs, but os.system only returns an int (the exit status), so I am not able to capture the IDs and kill the application.
Can you please help here?
Thank you
import subprocess

temp = subprocess.run("ps -ef | grep 'app_name' | awk '{print $2}'",
                      shell=True, stdout=subprocess.PIPE)
job_ids = temp.stdout.decode("utf-8").strip().split("\n")
# sample job_ids will be: ['59899', '68977', '68979']
# convert them to integers
job_ids = list(map(int, job_ids))
# job_ids = [59899, 68977, 68979]
Then iterate through the job IDs and kill them with os.kill():
import os

for job_id in job_ids:
    os.kill(job_id, 9)  # 9 == SIGKILL
subprocess.run doc: https://docs.python.org/3/library/subprocess.html#subprocess.run
To kill a process in Python, call os.kill(pid, sig), with sig = 9 (the signal number for SIGKILL) and pid = the process ID (PID) to kill.
To get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In either case, call the .readline() method and convert its return value to an integer with int(...).
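A minimal sketch of that approach, reusing the ps pipeline from the question:

import os

# Read the first PID printed by the pipeline and convert it to an int.
pid = int(os.popen("ps -ef | grep app_name | awk '{print $2}'").readline())
os.kill(pid, 9)  # send SIGKILL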
Here is something curious which I cannot explain. Consider this code segment:
import subprocess
command = ['tasklist.exe', '/m', 'imm32.dll']
p = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = p.communicate()
The call to communicate() hung and I had to break out with Ctrl+C. After that, the tasklist process was still running, and I had to kill it with taskkill.exe. Take a look at the behavior of tasklist.exe in the table below:
| Command | Behavior |
|-------------------------------------|----------|
| ['tasklist.exe', '/m', 'imm32.dll'] | Hung |
| ['tasklist.exe', '/m'] | Hung |
| ['tasklist.exe'] | OK |
It seems that when the /m flag is present, the process does not return; it just runs forever. I have also tried the following, and it also hung:
os.system('tasklist.exe /m imm32.dll')
How can I launch this command and not hang?
Update
It turns out that the communicate call did not hang, but took up to 10 minutes to finish, something that takes a couple of seconds when I run the command from the cmd.exe prompt. I believe this has to do with buffering, but I have not found a way to make it finish faster.
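One way to at least see what is going on is to stream the output as it arrives instead of collecting everything with communicate(). This is a diagnostic sketch (Python 3.7+ assumed for text=True), not a fix for the slowness itself:

import subprocess

p = subprocess.Popen(['tasklist.exe', '/m', 'imm32.dll'],
                     stdout=subprocess.PIPE, text=True)
for line in p.stdout:   # lines appear as tasklist produces them
    print(line, end='')
p.wait()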
I am trying to write a script that will run in the background, check a process, and kill it if it has been idle for 30 minutes.
The grep I want to use is:
"/usr/bin/python ./utt.py --type=issuer --datafile="
So far I have the code below. I am stuck at the end on how to compare the times and kill. I was able to get the process time and the system time, but how do I compare them and kill the process if it has been idle for 30 minutes?
ps -auxw | grep "/usr/bin/python ./utt.py --type=issuer --datafile=" | awk '{Time=strftime("%H:%M", systime());print $2,$9,Time ;if($9<Time) kill -9 $2"}'
Thanks for your help.
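For what it's worth, here is a minimal sketch of the same idea in Python (3.7+), assuming a Linux ps that supports the etimes (elapsed seconds) output column. Note this measures time since the process started, not true idleness:

import os
import subprocess

PATTERN = "/usr/bin/python ./utt.py --type=issuer --datafile="
LIMIT = 30 * 60  # 30 minutes, in seconds

out = subprocess.check_output(["ps", "-eo", "pid,etimes,args"], text=True)
for line in out.splitlines()[1:]:          # skip the header line
    pid, etimes, args = line.split(None, 2)
    if PATTERN in args and int(etimes) > LIMIT:
        os.kill(int(pid), 9)               # SIGKILL, as in the question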
My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))
#subprocess.call(rsync_cmd, shell=True) # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True) # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True) # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd)) # This doesn't work
#os.execv(rsync_path, rsync_args) # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args) # This doesn't work
#os.system(rsync_cmd2) # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this: I can watch the transfer on the server, and only when it's finished do I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command and for the transfer to start.
Seems very straightforward. I've followed the details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find on StackExchange and don't feel this is a duplicate because I still can't get it to work. Something isn't right in my setup and I need help.)
Here is a verified example for the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
Popen() starts a child process; it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties/resources from the parent, such as open file descriptors, the process group, its controlling terminal, some signal configuration, etc. This may prevent ancestor processes from exiting (e.g., Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (SIGINT is sent to the foreground process group) or when the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between is enough: e.g., redirecting the inherited stdout in a grandchild so that .communicate() in the parent returns when its immediate child exits.
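As a sketch of that "something in between": redirect the standard streams and start a new session so the child survives the terminal going away. start_new_session requires Python 3.2+ on POSIX, and the rsync arguments here are placeholders:

import subprocess

p = subprocess.Popen(
    ["rsync", "-a", "src/", "dest/"],   # placeholder arguments
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True)             # detach from the terminal's process group
print('DONE')                           # prints immediately; rsync keeps running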
I encountered a similar issue while working with QNX devices and wanted a subprocess that runs independently of the main process and even keeps running after the main process terminates.
Here's the solution I found that actually works, creationflags=subprocess.DETACHED_PROCESS (note that this creation flag is Windows-only):
import subprocess
import time

# Raw string so the backslash is not treated as an escape (a plain "\t" would be a tab).
proc = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"],
                        creationflags=subprocess.DETACHED_PROCESS)
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
On Ubuntu, the following command keeps running even if the Python app exits:
import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
I have a Python script that starts another program via the system shell (the shell script tag-bm.sh actually calls the program itself):
def tag(obt_path, corpora_path):
print corpora_path
os.system('cd ' + obt_path + ' && ./tag-bm.sh ' + corpora_path + ' > ' + corpora_path + '.obt')
os.system('pwd')
Sometimes this program goes into an infinite loop, which creates a problem for my main program. Is there a way to set things up so that, if the program has not ended within a set time, Python interrupts it?
Find out the name of the command; for instance, if it's cp then you can get the PID of the command using either pgrep cp or pidof cp. Note that this will return the PIDs of all cp processes. Call pgrep right after you start the command and it should be on top, so the PID in that case is pgrep cp | head -n1 or pidof cp | cut -s -f1. Store this in a variable and kill it later at the desired time.
Alternatively, you could run the command with timeout, adding a value after which the command will automatically receive a kill signal. Example usage: timeout 500s cp large_file destination_file. The cp will be killed after 500 seconds in this case. See man timeout for more information.
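If you would rather stay in Python, here is a minimal sketch using the timeout parameter of subprocess.run (Python 3.5+), which kills the child and raises TimeoutExpired when the limit is hit; the 600-second limit is an example value:

import subprocess

def tag(obt_path, corpora_path):
    with open(corpora_path + '.obt', 'w') as out:
        try:
            subprocess.run(['./tag-bm.sh', corpora_path],
                           cwd=obt_path, stdout=out, timeout=600)
        except subprocess.TimeoutExpired:
            print('tag-bm.sh did not finish within the time limit')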