I'm trying to set up a master.py script that calls a set of individual Python scripts. For the sake of simplicity, let's only consider one very basic such script: counting.py, which just counts up with 1-second pauses:
# counting.py file
import time

for i in range(10000):
    print(f"Counting: {i}")
    time.sleep(1)
In master.py, I use subprocess.run() to call counting.py, which is located in the same folder. In the snippet below, sys.executable returns the path to the Python executable in the virtual environment. I also use the multiprocessing module to control timeouts: if counting.py runs longer than 60 seconds, the process must be terminated. The code in master.py is as follows:
import subprocess
import multiprocessing
import sys
from pathlib import Path

def run_file(filename):
    cmd = [sys.executable, str(Path('.').cwd() / f'{filename}.py')]
    try:
        result = subprocess.run(cmd, shell=True, text=True, stdout=subprocess.PIPE).stdout
        print("Subprocess output:\n", result)
    except Exception as e:
        print(f"Error at {filename} when calling the command:\n\t{cmd}")
        print(f"Full traceback:\n{e}")

if __name__ == '__main__':
    p = multiprocessing.Process(target=run_file, args=("counting",))
    p.start()
    # Wait for 60 seconds or until the process finishes
    p.join(60)
    if p.is_alive():
        print("Timeout! Killing the process...")
        p.terminate()
        p.join()
The issue: even though the code itself runs without errors, I am unable to see any of the output while running master.py. Based on the documentation of the subprocess module, I had the impression that the shell and stdout arguments of subprocess.run() were exactly what accounts for this. I would like to see the same output as the one I get when running counting.py on its own, i.e.:
Counting: 0
Counting: 1
Counting: 2
...
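For reference, a minimal sketch of one possible fix (not from the original thread): drop shell=True, read the pipe line by line instead of collecting it all at the end, and pass -u so the child's prints are not buffered behind the pipe:

import subprocess
import sys
from pathlib import Path

def run_file(filename):
    # -u keeps the child's stdout unbuffered, so lines arrive as they are printed
    cmd = [sys.executable, '-u', str(Path.cwd() / f'{filename}.py')]
    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:
            print("Subprocess output:", line, end="")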
I want to capture the output of a subprocess in Python. My code:
import os
import sys
import subprocess
import time

cmd = './abc'
proc = subprocess.Popen(cmd)
time.sleep(12)
stdoutOrigin = sys.stdout
sys.stdout = open("log.txt", "w")
sys.stdout.close()
sys.stdout = stdoutOrigin
proc.terminate()
The problem is that it never comes out of ./abc and is always stuck there, so I need to kill the process. Normally I have to press CTRL+C to get out of it.
In this case, how can I capture the output, which comes every 30 seconds, and save it to a file? I need to capture it once.
You can redirect the output of a subprocess when starting it. Consider the following:
with open('proc.out', 'w') as proc_out:
    subprocess.run(cmd, stdout=proc_out)
The call to run is blocking, but all the output is written to the output file. Once your subprocess finishes, so does your Python script. You can still kill it prematurely, however.
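If you also want to watch the output live while keeping the log (a sketch, reusing cmd and the proc.out file from above):

import subprocess

cmd = './abc'
with open('proc.out', 'w') as proc_out:
    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:
            print(line, end='')   # echo each line to the console as it arrives
            proc_out.write(line)  # and keep a copy in the log file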
OK, I found the answer. Since using only the timeout would throw an error, I wrapped the call in a try/except block.
import os
import sys
import subprocess
import time

cmd = './abc'
try:
    with open('proc.out', 'w') as proc_out:
        subprocess.run(cmd, stdout=proc_out, timeout=30)
except subprocess.TimeoutExpired:
    pass
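Worth noting (an addition, not part of the original post): when the timeout expires, subprocess.run() kills the child and waits for it before raising TimeoutExpired, so whatever ./abc printed during its first 30 seconds is already in proc.out:

with open('proc.out') as f:
    print(f.read())  # the partial output captured before the kill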
I have a cmd file, "file.cmd", containing hundreds of command lines.
Example
pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"
I am trying to run these commands using a script so that I don't have to go to a file and click on it.
This is my code, and it results in no output:
import os

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)
You don't need to read the cmd file line by line. You can simply try the following:
import os
os.system('myfile.cmd')
or using the subprocess module:

import subprocess

# The pipes are needed for communicate() to actually capture anything
p = subprocess.Popen(['myfile.cmd'], shell=True, close_fds=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
Example:
myfile.cmd:

@ECHO OFF
ECHO Greetings From Python!
PAUSE

script.py:

import os
os.system('myfile.cmd')

The cmd will open with:

Greetings From Python!
Press any key to continue ...
You can debug the issue by checking the return code:

import os

return_code = os.system('myfile.cmd')
assert return_code == 0  # asserts that the return code is 0, indicating success!
Note: os.system works by calling system() in C and can only take up to 65533 arguments after a command (so it is a 16-bit issue). Giving one more argument will result in the return code 32512 (which implies the exit code 127).
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).
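Following that advice, a minimal subprocess equivalent of the os.system call above (a sketch):

import subprocess

result = subprocess.run('myfile.cmd', shell=True)
assert result.returncode == 0  # same success check as the os.system version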
Since it is a command file (.cmd) and only the shell can run it, the shell argument must be set to True. And since you are setting the shell argument to True, the command needs to be in string form, not a list.
Use the Popen method to spawn a new process and communicate() to wait on that process (you can time it out as well). If you wish to communicate with the child process, provide the pipes (see my example, but you don't have to!).
The code below is for Python 3.3 and beyond:
import subprocess

try:
    proc = subprocess.Popen('myfile.cmd', shell=True,
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    # Timing out the execution -- just if you want to, you don't have to!
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
For older Python versions:

import time
import subprocess

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
if proc.poll() is None:  # only kill it if it is actually still running
    proc.kill()
In both cases (both Python versions), if you don't need the timeout feature and don't need to interact with the child process, just use:
proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()
I'm trying to integrate ESA's sen2cor Python script into my workflow. To do this I create a subprocess with which I call the "L2A_Process.bat" file, which in turn calls the "L2A_Process.py" script.
I want to launch the sen2cor script with a timeout, since it gets stuck and freezes from time to time, so that I can automatically re-launch it if it fails.
To launch it and catch a timeout I successfully used the following script:
import os, subprocess
from signal import CTRL_BREAK_EVENT

timeout = 3600  # 1 hour
l1c_safe_path = "path/to/my/input/file.SAFE"  # product that I want to process
command = ["L2A_process.bat", l1c_safe_path]
p = subprocess.Popen(command, shell=False, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
try:
    p.wait(timeout)
except subprocess.TimeoutExpired:
    os.kill(p.pid, CTRL_BREAK_EVENT)
This is as far as I got. It results in the sen2cor script being paused, giving the following output:
Terminate batch job (Y/N)
I'd like to know how I can properly terminate my subprocess "p" with all of its own child subprocesses (i.e. "L2A_Process.py").
Some observations:
This script needs to run on Windows;
I've tried to kill the subprocess without the creationflags I've used in the example above: this results in the subprocess being killed, but the "L2A_Process.py" script detaches and keeps running (which is the core of my problem);
I cannot use a CTRL_C_EVENT since I want to re-launch the failed "L2A_Process.py" in a loop until it succeeds.
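For reference, two common ways to take a whole process tree down on Windows (a sketch, not tested against sen2cor; the function names are mine):

import subprocess
import psutil

def kill_tree_taskkill(pid):
    # Let Windows do it: /T kills the tree, /F forces termination
    subprocess.run(['taskkill', '/F', '/T', '/PID', str(pid)])

def kill_tree_psutil(pid):
    # Or walk the tree explicitly with psutil
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.kill()
    parent.kill()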
This code works for me to monitor Sen2cor status while converting L1C to L2A for Sentinel-2 data. The Sen2cor process is time- and CPU-consuming, so be patient; it took half an hour to create an L2A with DEM, DDV, etc. Hope it helps.
from subprocess import Popen, PIPE
import shlex
import os

pathtoprodS1C = "path_to_L1C_product"  # SAFE file
outdirS2A = "output_dir"  # where L2A files will be placed
pathtoL2Process = "path_to_L2A_Process"  # if not in PATH
pathtoGIPP = "path_to_your_GIPP/L2A_GIPP.xml"
procName = "./L2A_Process"

os.chdir(pathtoL2Process)
pcall = "{} {} --output_dir {} --tif --GIP_L2A {}".format(procName,
                                                          pathtoprodS1C,
                                                          outdirS2A,
                                                          pathtoGIPP)
args = shlex.split(pcall)
print(args)

try:
    p = Popen(args, stdout=PIPE)
    eut = p.stdout.readline()
    while eut:
        print(eut)
        eut = p.stdout.readline()
finally:
    print('Sen2Cor is Done')
    exit()
Okay, I'm officially out of ideas after running each and every sample I could find on Google up to the 19th page. I have a "provider" script. The goal of this Python script is to start up other services that keep running indefinitely even after the "provider" has stopped. Basically: start the process, then forget about it, but continue the script without stopping it...
My problem: python-daemon. I have actions (web-service calls to start/stop/get status from the started services). I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (a bash script that executes a Java process, a long-running service that will be stopped sometime later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here.
    # The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)
    # This part never runs. Even if I put a simple print statement at this
    # point, it never appears. Debugging in PyCharm shows that my script
    # returns with 0 on "with context".
    with open(pidfile, 'r') as pf:
        pid = pf.read()
    return pid
From here on, in my caller of this method, I prepare a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that will be used to stop this process in another request).
What happens? After "with context" my application exits with status 0: nothing is returned, no JSON response gets created, no pidfile gets created; only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process and need to know its PID in order to stop it later. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with a somewhat more promising result.

def start(self, command):
    import psutil  # or subprocess
    proc = psutil.Popen(command)
    return str(proc.pid)
Now, if I debug the application and wait some undefined time on the return statement, everything works great! The service is running, the pid is there, and I can stop it later on. Then I simply ran the provider without debugging: it returns the pid but the process is not running. It seems like Popen has no time to start the service because the whole provider stops faster.
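A minimal sketch that reproduces the race (the two-second pause is an arbitrary stand-in for the time spent in the debugger; command is the same as above):

import time
import psutil

proc = psutil.Popen(command)
time.sleep(2)             # give the shell script time to actually start the service
print(proc.is_running())  # True with the pause; without it, the provider exits
                          # before the service has come up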
Update:
Using os.fork():
@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(proc.pid)

def start(self):
    ...
    __start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. But after the main script exits, because I'm not fast enough:
ps auxf | grep
No instances are running...
Checking the pidfile: sure, it's there, it was created:
cat /application.pid
Empty, 0 bytes.
From the multiple partial tips I got, I finally managed to get it working...
def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this; not on my notebook at the moment
        # This was strange, as I needed to use the name of the shell script
        # twice: command = [argv[0], args...]. Upon using ksh as the command
        # I got a nice error...
        os.execv(command[0], command)
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
        return pid
That, together, solved the issue. The started process is not a child process of the running Python script and won't stop when the script terminates.
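One refinement worth noting (a sketch of the classic double fork, not part of the original solution; start_detached and its arguments are illustrative names): with a single fork the service remains a child of the provider and is left as a zombie if it exits while the provider is still running. Forking twice hands the service over to init, which reaps it:

import os

def start_detached(command, working_directory, pidfile):
    pid = os.fork()
    if pid == 0:
        os.setsid()  # new session, detached from the controlling terminal
        if os.fork() == 0:
            # Grandchild: record our own pid (execv keeps it), then become the target program
            os.chdir(working_directory)
            with open(pidfile, 'w') as pf:
                pf.write(str(os.getpid()))
            os.execv(command[0], command)
        os._exit(0)  # the middle child exits immediately
    os.waitpid(pid, 0)  # reap the middle child so no zombie is left behind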
Have you tried with os.fork()?
In a nutshell, os.fork() spawns a new process and returns the PID of that new process.
You could do something like this:
#!/usr/bin/env python

import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY

def child(command, directory):
    print "I'm the child process, will execute '%s' in '%s'" % (command, directory)
    # Change working directory
    os.chdir(directory)
    # Execute command
    cmd = subprocess.Popen(command
                           , shell=True
                           , stdout=subprocess.PIPE
                           , stderr=subprocess.PIPE
                           , stdin=subprocess.PIPE
                           )
    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print output
    # Exiting
    print 'Child process ending now'
    sys.exit(0)

def main():
    print "I'm the main process"
    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print 'A subprocess was created with PID: %s' % pid
        # Do stuff here ...
        time.sleep(5)
        print 'Main process ending now.'
        sys.exit(0)

if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related question: Regarding The os.fork() Function In Python
I'm trying to write a short script in Python which starts another Python script in a subprocess if it is not already started, and otherwise terminates the terminal & app (Linux).
So it looks like:
#!/usr/bin/python

from subprocess import Popen

text_file = open(".proc", "rb")
dat = text_file.read()
text_file.close()

def do(dat):
    text_file = open(".proc", "w")
    p = None
    if dat == "x":
        p = Popen('python StripCore.py', shell=True)
        text_file.write(str(p.pid))
    else:
        text_file.write("x")
        p = ...  # Assign process by pid / pid from int(dat) -- this is the part I'm missing
        p.terminate()
    text_file.close()

do(dat)
My problem is a lack of knowledge: how do I get hold of a process by the pid which the app reads from the ".proc" file?
The other problem is that the interpreter says that the string named dat is not equal to "x". What have I missed?
Using the awesome psutil library it's pretty simple:
import psutil

p = psutil.Process(pid)
p.terminate()  # or p.kill()
If you don't want to install a new library, you can use the os module:
import os
import signal

os.kill(pid, signal.SIGTERM)  # or signal.SIGKILL
See also the os.kill documentation.
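Applied to the question's ".proc" file, a minimal sketch:

import os
import signal

with open(".proc") as f:
    pid = int(f.read())

os.kill(pid, signal.SIGTERM)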
If you are interested in starting the command python StripCore.py if it is not running, and killing it otherwise, you can use psutil to do this reliably.
Something like:
import psutil
from subprocess import Popen

for process in psutil.process_iter():
    if process.cmdline() == ['python', 'StripCore.py']:
        print('Process found. Terminating it.')
        process.terminate()
        break
else:
    print('Process not found: starting it.')
    Popen(['python', 'StripCore.py'])
Sample run:
$python test_strip.py #test_strip.py contains the code above
Process not found: starting it.
$python test_strip.py
Process found. Terminating it.
$python test_strip.py
Process not found: starting it.
$killall python
$python test_strip.py
Process not found: starting it.
$python test_strip.py
Process found. Terminating it.
$python test_strip.py
Process not found: starting it.
Note: In previous psutil versions cmdline was an attribute instead of a method.
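If you need to support both APIs, a small compatibility shim (a sketch; get_cmdline is my name, not part of psutil):

import psutil

def get_cmdline(process):
    # Newer psutil exposes cmdline as a method, older versions as a plain attribute
    return process.cmdline() if callable(process.cmdline) else process.cmdline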
I wanted to do the same thing, but I wanted to do it in one file.
So the logic would be:
if a script with my name is running, kill it, then exit
if a script with my name is not running, do stuff
I modified the answer by Bakuriu and came up with this:
from os import getpid
from sys import argv, exit
import psutil  ## pip install psutil

myname = argv[0]
mypid = getpid()
for process in psutil.process_iter():
    if process.pid != mypid:
        for path in process.cmdline():
            if myname in path:
                print("process found")
                process.terminate()
                exit()

## your program starts here...
Running the script will do whatever the script does. Running another instance of the script will kill any existing instance of the script.
I use this to display a little PyGTK calendar widget which runs when I click the clock. If I click and the calendar is not up, the calendar displays. If the calendar is running and I click the clock, the calendar disappears.
So, this is not directly related, but this is the first question that appears when you try to find out how to terminate a process running from a specific folder using Python.
It also answers the question in a way (even though it is an old one with lots of answers).
While creating a faster way to scrape some government sites for data, I had an issue where, if any of the processes in the pool got stuck, they would be skipped but still take up memory on my computer. This is the solution I reached for killing them; if anyone knows a better way to do it, please let me know!
import pandas as pd
import wmi
from re import escape
import os

def kill_process(kill_path, execs):
    f = wmi.WMI()
    esc = escape(kill_path)
    temp = {'id': [], 'path': [], 'name': []}
    for process in f.Win32_Process():
        temp['id'].append(process.ProcessId)
        temp['path'].append(process.ExecutablePath)
        temp['name'].append(process.Name)
    temp = pd.DataFrame(temp)
    temp = temp.dropna(subset=['path']).reset_index().drop(columns=['index'])
    temp = temp.loc[temp['path'].str.contains(esc)].loc[temp.name.isin(execs)].reset_index().drop(columns=['index'])
    for t in temp['id']:
        os.system('taskkill /PID {} /f'.format(t))
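A hypothetical call (the folder path and executable names below are illustrative, not from the original post):

# Kill every chromedriver.exe / chrome.exe started from the scraper's driver folder
kill_process(r'C:\Users\me\scraper\drivers', ['chromedriver.exe', 'chrome.exe'])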