I am trying to write a script that will run in the background to check a process and kill it if it has been idle for 30 minutes.
The grep pattern I want to use is
"/usr/bin/python ./utt.py --type=issuer --datafile="
So far I have the code below, but I am stuck at the end: how to compare the times and kill. I was able to get the process start time and the system time. How do I compare them and kill the process if it has been idle for 30 minutes?
ps auxw | grep "/usr/bin/python ./utt.py --type=issuer --datafile=" | awk '{Time=strftime("%H:%M", systime()); print $2, $9, Time; if ($9 < Time) system("kill -9 " $2)}'
Thanks for your help.
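For what it's worth, one way to sidestep parsing the start-time column is to use the elapsed-time field that procps ps exposes. This is only a sketch, reusing the pattern and threshold from the question, and note the caveat in the comments: elapsed run time is not the same as idle time.

```shell
# Sketch only: kill matching processes that have been running longer
# than a threshold. Assumes procps ps (`-o etimes=` prints elapsed
# seconds) and pgrep. Caveat: elapsed time is process age, not true
# idleness; measuring idleness would need CPU-time sampling instead.
kill_if_old() {
    pattern=$1
    max_seconds=$2
    for pid in $(pgrep -f "$pattern"); do
        elapsed=$(ps -o etimes= -p "$pid")
        if [ "${elapsed:-0}" -gt "$max_seconds" ]; then
            kill -9 "$pid"
        fi
    done
}

# 1800 seconds = 30 minutes
kill_if_old 'utt.py --type=issuer' 1800
```

Running it from cron every few minutes would give roughly the "background check" behaviour the question asks for.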
I have written a bash script whose aim is to run a template .py Python script 15,000 times, each time using a slightly modified version of that .py.
After each run of one .py, the bash script logs what happened into a file.
Here is the bash script, which works on my laptop and computes the 15,000 things:
N_SIM=15000
for ((j = 1; j <= N_SIM; j++))
do
    a0=$(awk "NR==${j} { print \$1 }" values_a0_in_Xenon_10_20_10_25_Wcm2_RRon.txt)
    dirname="a0_${a0}"
    mkdir -p "$dirname"
    cd "$dirname"
    awk -v s="a0=${a0}" 'NR==6 {print s} {print}' ../integration.py > integrationa0included.py
    mpirun -n 1 python3 integrationa0included.py &> integration_Xenon.log &
    cd ..
done
It launches processes, and the terminal looks something like this (the numbers are only there for illustration; they are not exact):
[1]
[2]
[3]
...
...
[45]
[1]: exit, success (a message along these lines)
[4]: exit, success
[46]
[47]
[48]
[2]: exit, success
...
This pattern, where some launched processes finish while new ones are continuously launched, repeats until all 15,000 processes have been launched and completed.
I want to run this on a very old computer.
The problem is that it launches almost 300 such processes nearly instantly, and then the computer freezes. It basically crashes: I cannot do CTRL+Z or CTRL+C or type anything. It's frozen.
I want to ask if there's a modification to the bash script that launches only 2 processes at a time: wait for one to finish, launch the 3rd, wait for the next to finish, launch the 4th, and so on.
That way there aren't so many processes running at any given time. Maybe then it won't lock up the old computer.
Inside your loop, add the following code to the beginning of the loop body:
# insert this after `do`
[ "$(jobs -pr | wc -l)" -ge 2 ] && wait -n
If two or more background jobs are already running, this waits until at least one of them terminates. (Note that wait -n requires bash 4.3 or later.)
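Here is a minimal self-contained demo of the pattern, using short sleep jobs as stand-ins for the mpirun commands (assumes bash for wait -n):

```shell
#!/bin/bash
# Demo: launch 6 short background jobs, but keep at most 2 running
# at any time. Each job appends its number to a temp file so we can
# check afterwards that all of them ran.
tmp=$(mktemp)
for j in 1 2 3 4 5 6; do
    # If 2 or more background jobs are still running, block until one exits.
    [ "$(jobs -pr | wc -l)" -ge 2 ] && wait -n
    { sleep 0.2; echo "$j" >> "$tmp"; } &   # stand-in for the real mpirun call
done
wait   # wait for the remaining jobs to finish
wc -l < "$tmp"   # all 6 jobs ran
```

In your script, only the `[ "$(jobs -pr | wc -l)" -ge 2 ] && wait -n` line needs to go at the top of the loop body; everything else stays as it is. Raise the 2 to whatever the old machine can handle.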
I'm running strandtest.py to power a neo pixel matrix on a django server.
A button click runs the program
I'd like a second button click to kill it.
I've read a bunch of different solutions but I am having trouble following the instructions. Is there another subprocess command I can execute while the program runs?
This is how I start the program
import subprocess

def test(request):
    sp = subprocess.Popen('sudo python /home/pi/strandtest.py', shell=True)
    return HttpResponse()
I want to kill the process and clear the matrix.
You can try running the command below with subprocess.Popen:
kill -9 `ps aux | grep "/home/pi/[s]trandtest.py" | awk '{print $2}'`
kill -9 - kills the process
ps aux combined with grep - finds only the process you need (the [s] in the pattern keeps grep from matching its own command line)
awk - extracts the PID of that process
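A subtlety worth knowing about this pipeline: a plain grep for the script path also matches the grep process itself, so its PID can end up in the kill list. The [s] bracket trick avoids that. Below is a small demonstration, using a throwaway sleep process as a stand-in for strandtest.py:

```shell
sleep "30"0 &   # a throwaway 300-second process standing in for strandtest.py
pid=$!
# The [s] keeps the pattern from matching grep's own command line,
# so awk sees only the target's PID.
found=$(ps aux | grep "[s]leep 300" | awk '{print $2}')
echo "$found"
kill "$found"
```

The bracket works because the regex [s]leep matches the string "sleep" in the target's command line, but not the literal "[s]leep" in grep's own.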
Here is something curious which I cannot explain. Consider this code segment:
import subprocess
command = ['tasklist.exe', '/m', 'imm32.dll']
p = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = p.communicate()
The call to communicate() hung and I had to break out with Ctrl+C. After that, the tasklist process was still running, and I had to kill it with taskkill.exe. Take a look at the behaviors of tasklist.exe in the table below:
| Command | Behavior |
|-------------------------------------|----------|
| ['tasklist.exe', '/m', 'imm32.dll'] | Hung |
| ['tasklist.exe', '/m'] | Hung |
| ['tasklist.exe'] | OK |
It seems that when the /m flag is present, the process does not return; it just runs forever. I also tried the following, and it hung as well:
os.system('tasklist.exe /m imm32.dll')
How can I launch this command and not hang?
Update
It turns out that the communicate() call did not hang, but took up to 10 minutes to finish, something that takes a couple of seconds when I run the command from the cmd.exe prompt. I believe this has to do with buffering, but I have not found a way to make it finish faster.
Why
import subprocess
p = subprocess.Popen(["/bin/bash", "-c", "timeout -s KILL 1 sleep 5 2>/dev/null"])
p.wait()
print(p.returncode)
returns
[stderr:] /bin/bash: line 1: 963663 Killed timeout -s KILL 1 sleep 5 2> /dev/null
[stdout:] 137
when
import subprocess
p = subprocess.Popen(["/bin/bash", "-c", "timeout -s KILL 1 sleep 5"])
p.wait()
print(p.returncode)
returns
[stdout:] -9
If you change bash to dash, you'll get 137 in both cases. I know that -9 is the KILL signal number and 137 is 128 + 9, but it seems weird that such similar code gets different return codes.
This happens on Python 2.7.12 and Python 3.4.3.
It looks like Popen.wait() does not call Popen._handle_exitstatus https://github.com/python/cpython/blob/3.4/Lib/subprocess.py#L1468 when using /bin/bash, but I could not figure out why.
This is due to how bash executes timeout with or without redirections/pipes or any other bash features:
With redirection
python starts bash
bash starts timeout, monitors the process and does pipe handling.
timeout transfers itself into a new process group and starts sleep
After one second, timeout sends SIGKILL into its process group
As the process group dies, bash returns from waiting for timeout, sees the SIGKILL, and prints the message pasted above to stderr. It then sets its own exit status to 128+9 (the usual shell convention for a child killed by a signal).
Without redirection
python starts bash.
bash sees that it has nothing to do on its own and calls execve() to replace itself with timeout.
timeout acts as above; the whole process group dies with SIGKILL.
python gets an exit status of 9 and does some mangling to turn this into -9 (SIGKILL).
In other words, without redirections/pipes/etc. bash withdraws itself from the call chain. Your second example looks like subprocess.Popen() is executing bash, yet effectively it is not: bash is no longer there when timeout does its deed, which is why you don't get any message and why you see the raw signal status instead of the shell-mangled one.
If you want consistent behaviour, use timeout --foreground; you'll get an exit status of 124 in both cases.
I don't know dash in detail, but I suppose it does not do the execve() trick of replacing itself with the only program it is executing. Therefore you always see the shell-mangled exit status of 128+9 in dash.
Update: zsh shows the same behaviour, and it even drops out of the call chain for simple redirections such as timeout -s KILL 1 sleep 5 >/tmp/foo and the like, giving you an exit status of -9. timeout -s KILL 1 sleep 5 && echo $? will give you status 137 in zsh as well.
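For completeness, the --foreground fix can be checked from a shell (this assumes bash and GNU coreutils timeout are installed). With --foreground, timeout no longer moves into its own process group and kills itself, so it survives to report the standard timed-out status 124 in both variants:

```shell
# With redirection: bash stays in the call chain.
a=$(bash -c 'timeout --foreground -s KILL 1 sleep 5 2>/dev/null'; echo $?)
# Without redirection: bash can exec timeout directly.
b=$(bash -c 'timeout --foreground -s KILL 1 sleep 5'; echo $?)
echo "$a $b"   # 124 124
```

Either way the exit status is 124 because timeout itself now exits normally after killing sleep, so there is no signal status left for any layer to mangle.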
I run a Python script that needs to run for a long time, but after a few hours it stops. When I type ps aux, the result is:
root 10371 0.9 10.4 273236 52232 ? Sl 09:35 6:23 python my_programe.py
Then I tried kill -18 10371 (SIGCONT) to resume it, but it was useless. How can I get it to continue running again?
The process state S doesn't mean that the process has stopped (so trying to resume it with SIGCONT is of course useless); rather, it means interruptible sleep (waiting for an event to complete). You should be able to see how long the process sleeps, or what it is waiting for, by attaching strace -p <pid>.
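The distinction is easy to see with ps: state S is interruptible sleep, while state T is stopped, and only the latter is what SIGCONT (signal 18) resumes. A small illustration:

```shell
sleep 60 &                   # a process that sits in interruptible sleep
pid=$!
sleep 1
state_sleeping=$(ps -o stat= -p "$pid" | tr -d ' ')
kill -STOP "$pid"            # actually stop it: this is what shows as T
sleep 1
state_stopped=$(ps -o stat= -p "$pid" | tr -d ' ')
kill -CONT "$pid"            # SIGCONT (18) only helps in this T state
kill "$pid"                  # clean up
echo "$state_sleeping $state_stopped"   # e.g. "S T"
```

If your ps had shown T instead of Sl, kill -18 would have been the right call.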