Run a Python script independently when a method is called - python

I am writing a Django application where I need to call a Python script, say foo.py, when a method bar is called. The script foo.py can take a long time to execute, as it iterates over millions of rows in a database. That is why I don't want to wait for its output; I want the script to be run independently by the OS. I have tried:
execfile
os.system
subprocess.Popen
subprocess.call
But they all wait for the file to produce an output. How can I achieve this? Is there a module that I am missing, or could I write an "observer script" that notices when the bar method is called and runs foo.py independently, letting the method finish execution instead of waiting?

You probably did something incorrect, because a plain subprocess.Popen doesn't wait for the child process to finish.
I just tried it with the following example:
bar.py:
import subprocess

# Launch foo.py and return immediately; do not wait for it to finish.
subprocess.Popen(['python', 'foo.py'])
print('123')
foo.py:
import time
time.sleep(50)
When I run bar.py, I immediately see the "123" output, and I also see "python" in the process list.
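If Popen still appears to block in your setup, the usual culprit is the child inheriting the parent's pipes, or the parent reading the child's output. A minimal fire-and-forget sketch (assuming Python 3 on a POSIX system, with foo.py as above):

import subprocess
import sys

# Detach the child: give it its own session and no shared standard
# streams, so the Django request can return immediately.
subprocess.Popen(
    [sys.executable, 'foo.py'],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)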

Related

Loop-execution of a set of Python scripts with arguments from the main script

I have a child Python script that takes an argument and takes approximately 8 minutes to run.
e.g. python.exe child.py "2018-01-01"
I need to execute this script many times from a main script. I am considering using subprocess.Popen.
import os, sys, time, subprocess
for date in ["2018-01-01", "2018-01-02", "2018-01-03", ..., "2018-12-31"]:
    p = subprocess.Popen(['python.exe', "child.py", date])
    time.sleep(600)
As Popen does not wait for the child script to finish, the loop would keep launching new children while earlier ones are still running. So I had to set a sleep time of 600 seconds (longer than the approximate run time of the child script) so that each run safely starts after the previous one finishes.
I wonder if there is a more efficient way of dealing with this situation.
If the scripts need to run synchronously, consider using subprocess, more specifically the run function (Python >= 3.5) or the call function (< 3.5), which is similar to run but only returns the script's return code. Both block the calling script until the child returns.
Your code would become:
import shlex
import subprocess
for date in ["2018-01-01", "2018-01-02", "2018-01-03", ..., "2018-12-31"]:
    command = 'python.exe child.py %s' % date
    args = shlex.split(command)
    res = subprocess.run(args)
If you need them to run asynchronously, consider using xargs. If you really need to do it in Python, use multiprocessing or multiprocessing.dummy to do it.
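For example, a minimal sketch of the multiprocessing.dummy approach, running at most four children at a time (the pool size of 4 is an arbitrary choice):

import subprocess
from multiprocessing.dummy import Pool  # thread-based pool, same API as multiprocessing.Pool

def run_child(date):
    # Each worker thread blocks on its own child process.
    return subprocess.run(['python.exe', 'child.py', date]).returncode

dates = ["2018-01-01", "2018-01-02", "2018-01-03"]
with Pool(4) as pool:
    return_codes = pool.map(run_child, dates)
print(return_codes)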
How about this: call wait() so the current subprocess finishes before launching another one.
for date in ["2018-01-01", "2018-01-02", "2018-01-03", ..., "2018-12-31"]:
    p = subprocess.Popen(['python.exe', "child.py", date])
    p.wait()
    rc = p.returncode
    print(rc)

Using subprocess.call to execute a Python file?

I am using subprocess.call in order to execute another Python file. Considering that the called script never terminates, as it runs inside an infinite loop, how can I make the original script continue its execution after the subprocess call?
Example:
I have script1.py, which does some calculations and then calls script2.py using subprocess.call(["python", "script2.py"]). Since script2.py runs an infinite loop, script1.py gets stuck on that call. Is there another way to run the file, other than using the subprocess module?
subprocess.call(["python", "script2.py"]) waits for the sub-process to finish.
Just use Popen instead:
proc = subprocess.Popen(["python", "script2.py"])
You can later do proc.poll() to see whether it is finished or not, or proc.wait() to wait for it to finish (as call does), or just forget about it and do other things instead.
BTW, you might want to ensure that the same Python interpreter is used, and that the OS can find it, by using sys.executable instead of just "python":
subprocess.Popen([sys.executable, "script2.py"])
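Putting those pieces together, a small sketch of non-blocking supervision (script2.py taken from the question):

import subprocess
import sys

proc = subprocess.Popen([sys.executable, "script2.py"])

# ... do other work here ...

if proc.poll() is None:        # None means the child is still running
    print("script2.py is still running")
else:
    print("script2.py exited with code", proc.returncode)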

Retrieving data from original python file to go to imported python file [duplicate]

This question already has answers here:
What is the best way to call a script from another script? [closed]
(16 answers)
Closed 7 years ago.
I want to run a Python script from another Python script. I want to pass variables like I would using the command line.
For example, I would run my first script that would iterate through a list of values (0,1,2,3) and pass those to the 2nd script script2.py 0 then script2.py 1, etc.
I found Stack Overflow question 1186789, which is a similar question, but ars's answer calls a function, whereas I want to run the whole script, not just a function, and balpha's answer calls the script but with no arguments. As a test, I changed this to something like:
execfile("script2.py 1")
But it is not accepting the variables properly. When I print out sys.argv in script2.py, it is the original command call to the first script: ['C:\\script1.py'].
I don't really want to change the original script (i.e. script2.py in my example) since I don't own it.
I figure there must be a way to do this; I am just confused how you do it.
Try using os.system:
os.system("script2.py 1")
execfile is different because it is designed to run a sequence of Python statements in the current execution context. That's why sys.argv didn't change for you.
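Note that this relies on the OS knowing how to execute a .py file directly (a shebang line and execute bit on Unix, a file association on Windows); invoking the interpreter explicitly is more portable:

import os

# More portable: run the script through the interpreter explicitly.
os.system("python script2.py 1")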
This is inherently the wrong thing to do. If you are running a Python script from another Python script, you should communicate through Python instead of through the OS:
import script2
In an ideal world, you would be able to call a function inside script2 directly:
for i in range(whatever):
    script2.some_function(i)
If necessary, you can hack sys.argv. There's a neat way of doing this using a context manager to ensure that you don't make any permanent changes.
import contextlib
import sys

@contextlib.contextmanager
def redirect_argv(num):
    sys._argv = sys.argv[:]
    sys.argv = [str(num)]
    yield
    sys.argv = sys._argv

with redirect_argv(1):
    print(sys.argv)
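Combined with exec (or execfile on Python 2), a sketch of actually running script2.py under the patched sys.argv:

# A sketch (Python 3): run script2.py with sys.argv temporarily replaced.
with redirect_argv(1):
    exec(open("script2.py").read())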
I think this is preferable to passing all your data to the OS and back; that's just silly.
Ideally, the Python script you want to run will be set up with code like this near the end:
def main(arg1, arg2, etc):
    # do whatever the script does
    ...

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2], sys.argv[3])
In other words, if the module is called from the command line, it parses the command line options and then calls another function, main(), to do the actual work. (The actual arguments will vary, and the parsing may be more involved.)
If you want to call such a script from another Python script, however, you can simply import it and call modulename.main() directly, rather than going through the operating system.
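For example, a sketch assuming script2.py exposes a main() that takes a single argument:

import script2

# Call the script's entry point directly, once per argument value.
for i in range(4):
    script2.main(str(i))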
os.system will work, but it is the roundabout (read "slow") way to do it, as you are starting a whole new Python interpreter process each time for no reason.
I think good practice may be something like this:
import subprocess

cmd = 'python script.py'
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
out, err = p.communicate()

# out is bytes in Python 3, so decode it before splitting into lines.
result = out.decode().split('\n')
for lin in result:
    if not lin.startswith('#'):
        print(lin)
According to the documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several older modules and functions:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
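A sketch of the deadlock-safe pattern described there:

import subprocess

# communicate() reads stdout and stderr concurrently, so neither
# pipe buffer can fill up and block the child.
p = subprocess.Popen(["python", "script2.py", "1"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()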
Read more in the subprocess module documentation:
http://docs.python.org/dev/library/subprocess.html#using-the-subprocess-module
import subprocess
subprocess.Popen("script2.py 1", shell=True)
With this, you can also redirect stdin, stdout, and stderr.
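For example, a sketch that sends the child's output to log files (the filenames are made up):

import subprocess

with open("out.log", "w") as out, open("err.log", "w") as err:
    subprocess.Popen(["python", "script2.py", "1"],
                     stdout=out, stderr=err)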
import subprocess
subprocess.call("python script2.py 1", shell=True)

Binding / piping output of run() on/into a function in Python 3 (Linux)

I am trying to use the output of an external program run using the run function.
This program regularly emits a row of data which I need to use in my script.
I found the subprocess library and used its run()/check_output() functions.
Example:
import subprocess

def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on the PATH and that it outputs a string at semi-random intervals.
I want the program to do its own things and run some_function(i) every time foo sends a new row to its output.
This boils down to two problems: piping the output into a for loop, and running it as a background subprocess.
Thank you
Update: I have managed to get foo's output into some_function using this:
import os

with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to the documentation, os.popen is to be deprecated, but I have yet to figure out how to pipe between processes in Python.
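For reference, a subprocess-based equivalent of that os.popen loop might look like this (a sketch, assuming Python 3.7+ for text=True):

import subprocess

# Iterate over foo's stdout line by line as it is produced.
proc = subprocess.Popen(['foo'], stdout=subprocess.PIPE,
                        text=True, bufsize=1)
for line in proc.stdout:
    some_function(line)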
Now I just need to figure out how to run this function in the background.
So, I have solved it.
The first step was to start the external script:
from subprocess import Popen, PIPE

proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next, I started a function that would read it, passing it the pipe:
def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

# Start the reader in a background thread so the main program can continue.
from threading import Thread
Thread(target=foo, args=(proc,)).start()
The limitations are:
If you wish to catch the script's errors, you would have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
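Putting the whole pattern together, a minimal self-contained sketch (the thread wrapper reflects an assumption about how the background start was intended to work; './cisla.sh' is taken from above):

from subprocess import Popen, PIPE
from threading import Thread

def reader(proc):
    # Consume the child's stdout line by line in the background.
    for line in proc.stdout:
        print('got:', line.strip())

proc = Popen(['./cisla.sh'], stdout=PIPE, text=True, bufsize=1)
Thread(target=reader, args=(proc,), daemon=True).start()
# The main program is free to do other work here.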
