There is a way to start another script in Python by doing this:
import os
os.system("python [name of script].py")
So how can I stop another script that is already running? I would like to stop the script by using its name.
It is more usual to import the other script and then invoke its functions and methods.
If that does not work for you, e.g. because the other script is not written in a way that is conducive to being imported, then you can use the subprocess module to start another process.
import subprocess
p = subprocess.Popen(['python', 'script.py', 'arg1', 'arg2'])
# continue with your code then terminate the child
p.terminate()
There are many possible ways to control and interact with the child process, e.g. you can capture its stdout and stderr, and send it input. See the Popen() documentation.
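For instance, a minimal sketch of capturing both streams with communicate() — the inline -c child here stands in for a real script.py:

```python
import subprocess
import sys

# Start a short-lived child and capture its stdout/stderr as text.
p = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,  # decode bytes to str
)
out, err = p.communicate()  # waits for the child and reads its output
print(out.strip())
print(p.returncode)
```

communicate() also accepts an `input` argument if the child reads from stdin.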
Starting the script as per mhawke's suggestion is the better option, but to answer your question of how to kill an already-started script by name, you can use pkill and subprocess.check_call:
from subprocess import check_call
import sys
script = sys.argv[1]
check_call(["pkill", "-9", "-f", script])
Just pass the name to kill:
padraic@home:~$ cat kill.py
from subprocess import check_call
import sys
script = sys.argv[1]
check_call(["pkill", "-9", "-f", script])
padraic@home:~$ cat test.py
from time import sleep
while True:
    sleep(1)
padraic@home:~$ python test.py &
[60] 23361
padraic@home:~$ python kill.py test.py
[60] Killed python test.py
Killed
That kills the process with a SIGKILL; if you want it terminated gracefully, remove the -9:
padraic@home:~$ cat kill.py
from subprocess import check_call
import sys
script = sys.argv[1]
check_call(["pkill", "-f", script])
padraic@home:~$ python test.py &
[61] 24062
padraic@home:~$ python kill.py test.py
[61] Terminated python test.py
Terminated
That will send a SIGTERM instead. See Termination-Signals.
Just put in the name of the program, NOT the path to your script. So it would be:
check_call(["pkill", "-f", "MotionDetector.py"])
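One caveat worth noting: pkill exits with a non-zero status when no process matched, so check_call raises CalledProcessError in that case. A minimal sketch of handling it (the script name below is illustrative and presumably not running):

```python
from subprocess import check_call, CalledProcessError

def kill_by_name(script):
    """Send SIGTERM to every process whose command line contains `script`.

    Returns True if at least one process matched, False otherwise.
    """
    try:
        check_call(["pkill", "-f", script])
        return True
    except CalledProcessError:
        # pkill exits non-zero when nothing matched
        return False

# Nothing should match this made-up name, so nothing is killed.
print(kill_by_name("no_such_script_hopefully_xyz.py"))
```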
Related
I'm trying to set up a master.py script that would call a set of individual Python scripts. For the sake of simplicity, let's consider only one very basic such script: counting.py, which just counts up with 1-second pauses:
# counting.py file
import time

for i in range(10000):
    print(f"Counting: {i}")
    time.sleep(1)
In master.py, I use subprocess.run() to call counting.py, which is located in the same folder. In the snippet below, sys.executable returns the path to the Python executable in the virtual environment. I also use the multiprocessing module to control timeouts: if counting.py runs longer than 60 seconds, the process must be terminated. The code in master.py is as follows:
import subprocess
import multiprocessing
import sys
from pathlib import Path

def run_file(filename):
    cmd = [sys.executable, str(Path('.').cwd() / f'{filename}.py')]
    try:
        result = subprocess.run(cmd, shell=True, text=True, stdout=subprocess.PIPE).stdout
        print("Subprocess output:\n", result)
    except Exception as e:
        print(f"Error at {filename} when calling the command:\n\t{cmd}")
        print(f"Full traceback:\n{e}")

if __name__ == '__main__':
    p = multiprocessing.Process(target=run_file, args=("counting",))
    p.start()
    # Wait for 60 seconds or until process finishes
    p.join(60)
    if p.is_alive():
        print("Timeout! Killing the process...")
        p.terminate()
        p.join()
The issue: Even though the code itself runs properly, I am unable to log any of the output while running master.py. Based on the documentation of the subprocess module, I had the impression that the shell and stdout arguments of subprocess.run() account for this exactly. I would like to see the same output as the one I get when only running counting.py, i.e.:
Counting: 0
Counting: 1
Counting: 2
...
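For what it's worth, on POSIX, passing a list together with shell=True makes only the first list element the command run by the shell, which can swallow the output you expect. A minimal sketch of the same capture without shell=True (an inline -c child stands in for counting.py):

```python
import subprocess
import sys

# With shell=False (the default), the argument list is executed directly
# and stdout can be captured as usual.
result = subprocess.run(
    [sys.executable, "-c", "print('Counting: 0')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # decode bytes to str
)
print("Subprocess output:\n", result.stdout)
```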
I am new to Python. I am trying to enter two commands in my script:
1) script filename.txt
2) ssh xyz@xyz.com
My script stops after the 1st command is executed. When I exit out of bash, the 2nd command is executed. I tried 2 different scripts; both have the same issue.
1) Script-1
import os
import subprocess
from subprocess import call
from datetime import datetime
call (["script","{}.txt".format(str(datetime.now()))])
echo "ssh xyz@xyz.com"
2) Script-2
call(["script", "{}.txt".format(str(datetime.now()))])

def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print proc_stdout

subprocess_cmd('ssh ssh xyz@xyz.com')
I have a script that gives the option to run a second script after completion. I am wondering if there is a good way for the second script to know if it was run on its own or as a subprocess. If it was called as a subprocess, pass args into the second script.
The end of the first script is below:
dlg = wx.MessageDialog(None, "Shall we format?", 'Format Files', wx.YES_NO | wx.ICON_QUESTION)
result = dlg.ShowModal()
if result == wx.ID_YES:
    call("Threading.py", shell=True)
else:
    pass
The second script is a standalone script that takes 3 files and formats them into one. The args would just set file names in the second script.
So I would retrieve the parent process pid with os.getppid() and then pass this to the subprocess as an argument using Popen:
(parent.py)
#!/usr/bin/env python
import sys
import os
from subprocess import Popen, PIPE
output = Popen(['./child.py', str( os.getppid() )], stdout=PIPE)
print output.stdout.read()
and
(child.py)
#!/usr/bin/env python
import sys
import os
parent_pid = sys.argv[1]
my_pid = str(os.getppid())
print "Parent is %s child is %s " % ( parent_pid, my_pid )
So when you call the child from the parent:
$ ./parent.py
Parent is 72297 child is 72346
At this point it is easy to make the comparison and check the pid.
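The check in child.py can be reduced to a small helper; a sketch (the function and argument names are illustrative):

```python
import os

def started_by_known_parent(argv):
    """Return True when the pid passed as the first argument matches
    this process's parent pid, i.e. we were spawned by that parent."""
    if len(argv) < 2:
        return False  # no pid was passed: assume a standalone run
    return argv[1] == str(os.getppid())

# A standalone run would pass no pid (or a non-matching one), so the
# helper reports False; a matching pid reports True.
print(started_by_known_parent(["child.py"]))
print(started_by_known_parent(["child.py", str(os.getppid())]))
```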
I need to write a piece of code that will check if another instance of a python script with the same name is running, kill it and all its children and do the job of the script. On my way to solution I have this code:
import os, sys
import time
import subprocess
from subprocess import PIPE, Popen
import signal

if sys.argv[1] == 'boss':
    print 'I am boss', \
        "pid:", os.getpid(), \
        "pgid:", os.getgid()
    # kill previous process with same name
    my_pid = os.getpid()
    pgrep = Popen(['pgrep', '-f', 'subp_test.py'], stdout=PIPE)
    prev_pids = pgrep.communicate()[0].split()
    print 'previous processes:', prev_pids
    for pid in prev_pids:
        if int(pid) != int(my_pid):
            print 'killing', pid
            os.kill(int(pid), signal.SIGKILL)
    # do the job
    subprocess.call('python subp_test.py 1', shell=True)
    subprocess.call('python subp_test.py 2', shell=True)
    subprocess.call('python some_other_script.py', shell=True)
else:
    p_num = sys.argv[1]
    for i in range(20):
        time.sleep(1)
        print 'child', p_num, \
            "pid:", os.getpid(), \
            "pgid:", os.getgid(), \
            ":", i
This will kill all processes whose command line contains the substring 'subp_test.py', but it will not kill some_other_script.py or other programs without 'subp_test.py' in them.
The calls that subp_test.py will execute are unpredictable, but as I understand it, they are supposed to sit under it in the process tree.
So how do I access all the children of subp_test.py in order to kill them when a new instance of subp_test.py begins to run?
Also, are there better approaches to implement this logic?
I use Python 2.6.5 on Ubuntu 10.04.
Run your program under a new session group and write the session group id to a file. When your program starts, run kill -SIGNAL -prevsessiongroup. This kills all processes, their children, and so on, unless one of them explicitly changes its session group. I have included URLs which contain snippets of code you can use.
https://docs.python.org/2/library/os.html#os.setsid
http://web.archive.org/web/20131017130434/http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
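A minimal sketch of the session-group idea with subprocess (POSIX-only; preexec_fn=os.setsid puts the child into a fresh session, and hence a fresh process group, before exec):

```python
import os
import signal
import subprocess
import sys

# Start a long-running child as the leader of its own session/group.
p = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    preexec_fn=os.setsid,
)
pgid = os.getpgid(p.pid)  # equals p.pid: the child leads the new group

# Later: signal the whole group, taking down the child and anything
# it spawned (unless a descendant changed its own session group).
os.killpg(pgid, signal.SIGTERM)
p.wait()
print(p.returncode)  # negative return code: killed by a signal
```

On Python 3.2+ the same effect is available via `start_new_session=True` instead of `preexec_fn`.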
You can use this shell command to kill every process whose command line mentions the python script:
kill -9 `ps -ef | grep your_python_script.py | grep -v grep | awk '{print $2}'`
Summary: I am ssh'ing to a remote server and executing a fork1.py script there, shown below. The trouble is that I want the processes to execute in the background, so that I can start multiple services.
I know we can use nohup, etc., but they are not working. Even when I use a & at the end, the process starts, but gets killed when the script terminates.
Here is the code:
import os
import sys
import commands
from var import key_loc
import subprocess
import pipes
import shlex

def check(status):
    if status != 0:
        print 'Error! '
        quit()
    else:
        print 'Success :) '
        file1 = open('/home/modiuser/status.txt', 'a')
        file1.write("Success :)\n")

if sys.argv[1] == "ES":
    os.chdir('/home/modiuser/elasticsearch-0.90.0/bin/')
    proc1 = subprocess.Popen(shlex.split("nohup ./elasticsearch -p /home/modiuser/es.pid"))
if sys.argv[1] == "REDIS":
    os.chdir('/home/modiuser/redis-2.6.13/src')
    proc2 = subprocess.Popen(shlex.split("./redis_ss -p /home/modiuser/redis.pid"))
if sys.argv[1] == "PARSER":
    proc3 = subprocess.Popen(shlex.split("nohup java -jar logstash-1.1.12-flatjar.jar agent -f parser.conf"))
    file1 = open('/home/modiuser/pid.txt', 'a')
    file1.write("PARSER-" + str(proc3.pid) + "\n")
    file1.write(str(proc3.poll()))
    file1.close()
if sys.argv[1] == "SHIPPER_TCP":
    proc4 = subprocess.Popen(shlex.split("nohup java -jar logstash-1.1.12-flatjar.jar agent -f shipper_TCP.conf"))
    file1 = open('/home/modiuser/pid.txt', 'a')
    file1.write("SHIPPER_TCP-" + str(proc4.pid) + "\n")
    file1.close()
Where am I going wrong?
Just try with:
import os

os.system('python program1.py &')  # this one runs in the background
os.system('python program2.py')    # this one runs in the foreground
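A subprocess-based sketch of the same pattern, which avoids the shell and lets you wait for (or terminate) the background child later; the inline -c children stand in for program1.py and program2.py:

```python
import subprocess
import sys

# Start one child without waiting for it ("background").
background = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.2)"]
)

# Run another child in the foreground, blocking until it finishes.
rc = subprocess.call([sys.executable, "-c", "print('foreground done')"])

# Reap the background child when we are ready.
background.wait()
print(background.returncode)
```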