This question already has answers here:
What is the best way to call a script from another script? [closed]
(16 answers)
Closed 8 years ago.
I want to run a Python script from another Python script. I want to pass variables like I would using the command line.
For example, I would run my first script, which would iterate through a list of values (0, 1, 2, 3) and pass those to the second script: script2.py 0, then script2.py 1, etc.
I found Stack Overflow question 1186789, which is a similar question, but ars's answer calls a function, whereas I want to run the whole script, not just a function, and balpha's answer calls the script but with no arguments. I changed this to something like the below as a test:
execfile("script2.py 1")
But it is not accepting the variables properly. When I print out sys.argv in script2.py, it shows the original command call to the first script: ['C:\script1.py'].
I don't really want to change the original script (i.e. script2.py in my example) since I don't own it.
I figure there must be a way to do this; I am just confused how you do it.
Try using os.system:
os.system("script2.py 1")
execfile is different because it is designed to run a sequence of Python statements in the current execution context. That's why sys.argv didn't change for you.
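For the questioner's loop, a minimal sketch might look like this (assuming python is on PATH and script2.py sits in the working directory; both names come from the question):
import os

for i in (0, 1, 2, 3):
    # each call starts a fresh interpreter, so script2.py sees its own sys.argv
    exit_code = os.system("python script2.py %d" % i)
    print("script2.py %d exited with %d" % (i, exit_code))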
This is inherently the wrong thing to do. If you are running a Python script from another Python script, you should communicate through Python instead of through the OS:
import script2
In an ideal world, you will be able to call a function inside script2 directly:
for i in range(whatever):
    script2.some_function(i)
If necessary, you can hack sys.argv. There's a neat way of doing this using a context manager to ensure that you don't make any permanent changes.
import contextlib
import sys

@contextlib.contextmanager
def redirect_argv(num):
    sys._argv = sys.argv[:]
    sys.argv = [str(num)]
    yield
    sys.argv = sys._argv

with redirect_argv(1):
    print(sys.argv)
I think this is preferable to passing all your data to the OS and back; that's just silly.
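As a hedged sketch, the context manager can be combined with the standard runpy module to run the whole script in-process under the patched argv (assuming script2.py is in the working directory; note that redirect_argv above replaces all of sys.argv, so a script reading sys.argv[1] would want sys.argv = ['script2.py', str(num)] instead):
import runpy

with redirect_argv(1):
    # executes script2.py as if it were run as __main__
    runpy.run_path("script2.py", run_name="__main__")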
Ideally, the Python script you want to run will be set up with code like this near the end:
def main(arg1, arg2, etc):
    # do whatever the script does

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2], sys.argv[3])
In other words, if the module is called from the command line, it parses the command line options and then calls another function, main(), to do the actual work. (The actual arguments will vary, and the parsing may be more involved.)
If you want to call such a script from another Python script, however, you can simply import it and call modulename.main() directly, rather than going through the operating system.
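A minimal sketch of that approach, assuming script2.py follows the main() convention above and that a single argument suffices:
import script2

for i in range(4):
    # pass the same strings the command line would have delivered
    script2.main(str(i))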
os.system will work, but it is the roundabout (read "slow") way to do it, as you are starting a whole new Python interpreter process each time for no reason.
I think good practice is something like this:
import subprocess

cmd = 'python script.py'
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True,
                     universal_newlines=True)
out, err = p.communicate()
for line in out.splitlines():
    if not line.startswith('#'):
        print(line)
According to the documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several older modules and functions:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
Read the subprocess module documentation:
http://docs.python.org/dev/library/subprocess.html#using-the-subprocess-module
import subprocess
subprocess.Popen("script2.py 1", shell=True)
With this, you can also redirect stdin, stdout, and stderr.
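For example, a sketch that captures both streams (assuming script2.py as in the question) and uses communicate() to avoid the pipe-buffer deadlocks mentioned above:
import subprocess

p = subprocess.Popen(["python", "script2.py", "1"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()   # reads both pipes to completion
print(out.decode())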
import subprocess
subprocess.call("python script2.py 1", shell=True)
I would like to process a file line by line. However, I need to sort it first, which I normally do by piping:
sort --key=1,2 data | ./script.py
What's the best way to call sort from within Python? Searching online, I see subprocess or the sh module might be possibilities. I don't want to read the file into memory and sort in Python, as the data is very big.
It's easy. Use subprocess.Popen to run sort and read its stdout to get your data.
import subprocess

myfile = 'data'
sort = subprocess.Popen(['sort', '--key=1,2', myfile],
                        stdout=subprocess.PIPE)
for line in sort.stdout:
    pass  # process each sorted line here
sort.wait()
assert sort.returncode == 0, 'sort failed'
I think this page will answer your question. The answer I prefer, from @Eli Courtwright, is (all quoted verbatim):
Here's a summary of the ways to call external programs and the advantages and disadvantages of each:
os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example,
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
http://docs.python.org/lib/os-process.html
stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything.
http://docs.python.org/lib/os-newstreams.html
The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say
print(Popen("echo Hello World", stdout=PIPE, shell=True).stdout.read())
instead of
print(os.popen("echo Hello World").read())
but it is nice to have all of the options there in one unified class instead of 4 different popen functions.
http://docs.python.org/lib/node528.html
The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = call("echo Hello World", shell=True)
http://docs.python.org/lib/node529.html
The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.
The subprocess module should probably be what you use.
I believe sort will read all the data into memory, so I'm not sure you will gain anything, but you can use shell=True in subprocess and use a pipeline:
>>> subprocess.check_output("ls", shell = True)
'1\na\na.cpp\nA.java\na.php\nerase_no_module.cpp\nerase_no_module.cpp~\nWeatherSTADFork.cpp\n'
>>> subprocess.check_output("ls | grep j", shell = True)
'A.java\n'
Warning
Invoking the system shell with shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
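If the shell=True pipeline worries you, a hedged alternative is to chain two Popen objects directly; this sketch reproduces the ls | grep j example without a shell:
import subprocess

ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "j"], stdin=ls.stdout,
                        stdout=subprocess.PIPE)
ls.stdout.close()        # let ls receive SIGPIPE if grep exits early
out, _ = grep.communicate()
print(out.decode())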
I have a Ruby script that gets executed by a Python script. From within the Python script, I want to access the return value of the Ruby function.
Imagine I have this Ruby script, test.rb:
class TestClass
  def self.test_function(some_var)
    if case1
      puts "This may take some time"
      # something is done here with some_var
      puts "Finished"
    else
      # just do something short with some_var
    end
    return some_var
  end
end
Now I want to get the return value of that function into my Python script; the printed output should go to stdout.
I tried the following (example 1):
from subprocess import call
answer = call(["ruby", "-r", "test.rb", "-e", "puts TestClass.test_function('some meaningful text')"])
However, this gives me the whole output on stdout and answer is just the exit code.
Therefore I tried this (example 2):
from subprocess import check_output
answer = check_output(["ruby", "-r", "test.rb", "-e", "puts TestClass.test_function('some meaningful text')"])
This gives me the return value of the function in the else case (see test.rb) almost immediately. However, if case1 is true, answer contains the whole output, but while running test.rb nothing gets printed.
Is there any way to get the return value of the ruby function and the statements printed to stdout? Ideally, the solution requires no additional modules to install. Furthermore, I can't change the ruby code.
Edit:
Also tried this, but this also gives no output on stdout while running the ruby script (example 3):
import subprocess
process = subprocess.Popen(["ruby", "-r", "test.rb", "-e", "puts TestClass.test_function('some meaningful text')"], stdout=subprocess.PIPE)
answer = process.communicate()
I also don't think this is a matter of flushing the output to stdout in the Ruby script; example 1 gives me the output immediately.
Another way of doing this, without calling the Ruby script as an external process, is to set up an XML-RPC (or JSON-RPC) server with the Ruby script and call the remote functions from a Python JSON-RPC (or XML-RPC) client. The value would be available inside the Python program, and even the syntax used would be just as if you were dealing with a normal Python function.
Setting up such a server to expose a couple of functions remotely is very easy in Python, and should be the same in Ruby, but I have never tried it.
Check out http://docs.python.org/library/subprocess.html#popen-constructor and look into the ruby means of flushing stdout.
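As a sketch of that idea (assuming the Ruby one-liner from the question prints the return value last, and that Ruby line-buffers or flushes its output), you can echo the child's stdout live while keeping it:
import subprocess

proc = subprocess.Popen(
    ["ruby", "-r", "test.rb", "-e",
     "puts TestClass.test_function('some meaningful text')"],
    stdout=subprocess.PIPE, universal_newlines=True, bufsize=1)

lines = []
for line in proc.stdout:
    print(line, end="")              # live echo to our own stdout
    lines.append(line.rstrip("\n"))
proc.wait()
answer = lines[-1] if lines else None  # assumed: return value printed last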
Hi, I am trying to execute a shell script from Python using the following command:
os.system("sh myscript.sh")
In my shell script I have written some print statements (SOPs); now how do I get that output into my Python script so that I can log it to a file?
I know I can do it using subprocess.Popen, but for some reason I cannot use it.
p = subprocess.Popen(
    'DMEARAntRunner "' + mount_path + '"',
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT
)
while 1:
    line = p.stdout.readline()[:-1]
    if not line:
        break
    write_to_log('INFO', line)
p.communicate()
If I understand your question correctly, you want something like this:
import subprocess
find_txt_command = ['find', '-maxdepth', '2', '-name', '*.txt']
with open('mylog.log', 'w') as logfile:
    subprocess.call(find_txt_command, stdout=logfile, shell=False)
You can use Popen instead of call if you need to; the syntax is very similar. Notice that the command is a list containing the program you want to run and its arguments. In general, you want to use Popen/call with shell=False: it prevents unexpected behavior that can be hard to debug, and it is more portable.
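If you do need to log line by line as the script runs, a hedged Popen variant might look like this (write_to_log is the questioner's own helper, assumed to exist):
import subprocess

p = subprocess.Popen(['sh', 'myscript.sh'],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                     universal_newlines=True)
for line in p.stdout:
    write_to_log('INFO', line.rstrip('\n'))  # questioner's logging helper
p.wait()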
Kindly check the official documentation for the subprocess module in Python. It is currently the recommended way, over os.system calls, to execute system commands and retrieve the results. The documentation gives examples very close to what you need.
I personally would advise you to leave the shell argument at its default value of False. In that case, the first argument isn't a string as you'd type into a terminal, but a list of "words", the first being the program, the ones after that being arguments. This means that there is no need to quote arguments, making your program more resilient to whitespace arguments and injection attacks.
This should do the trick:
p = subprocess.Popen(['DMEARAntRunner', mount_path],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
As always with executing shell commands the question remains whether it's the easiest/best way to solve a problem, but that's another discussion altogether.
I want to spawn (fork?) multiple Python scripts from my program (written in Python as well).
My problem is that I want to dedicate one terminal to each script, because I'll gather their output using pexpect.
I've tried using pexpect, os.execlp, and os.forkpty, but none of them does what I expect.
I want to spawn the child processes and forget about them (they will process some data, write the output to the terminal which I could read with pexpect and then exit).
Is there any library/best practice/etc. to accomplish this job?
P.S. Before you ask why I would write to STDOUT and read from it: I don't write to STDOUT; I read the output of tshark.
See the subprocess module
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
From Python 3.5 onwards you can do:
import subprocess
result = subprocess.run(['python', 'my_script.py', '--arg1', val1])
if result.returncode != 0:
    print('script returned error')
By default, the script's stdout and stderr are passed straight through to the parent's, so you see its output as it runs.
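If you want to capture the output instead of letting it pass through, a minimal sketch (reusing val1 from above; the stdout=subprocess.PIPE form works on 3.5, before capture_output existed):
import subprocess

result = subprocess.run(['python', 'my_script.py', '--arg1', val1],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode())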
I don't understand why you need expect for this. tshark should send its output to stdout, and only for some strange reason would it send it to stderr.
Therefore, what you want should be:
import subprocess
fp = subprocess.Popen(("/usr/bin/tshark", "option1", "option2"),
                      stdout=subprocess.PIPE).stdout
# now, whenever you are ready, read stuff from fp
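# for example, consume it line by line (a sketch; the pipe yields bytes):
for raw in fp:
    print(raw.decode().rstrip())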
Do you want to dedicate one terminal or one Python shell?
You already have some useful answers for Popen and subprocess; you could also use pexpect, since you're already planning on using it anyway.
#for multiple python shells
import pexpect
#make your commands however you want them, this is just one method
mycommand1 = "print('hello first python shell')"
mycommand2 = "print('this is my second shell')"
#add a "for" statement if you want
child1 = pexpect.spawn('python')
child1.sendline(mycommand1)
child2 = pexpect.spawn('python')
child2.sendline(mycommand2)
Make as many children/shells as you want and then use the child.before or child.after attributes to get your responses.
Of course you would want to add definitions or classes to be sent instead of "mycommand1", but this is just a simple example.
If you wanted to open a bunch of terminals in Linux, you would just need to replace 'python' in the pexpect.spawn line.
Note: I haven't tested the above code. I'm just replying from past experience with pexpect.
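For what it's worth, here is a hedged, self-contained sketch of that pattern (assuming python is on PATH; the '>>> ' prompt is what the interactive interpreter prints):
import pexpect

child = pexpect.spawn('python')
child.expect('>>> ')                       # wait for the first prompt
child.sendline("print('hello first python shell')")
child.expect('>>> ')                       # wait for the command to finish
print(child.before.decode())               # echoed command plus its output
child.sendline('exit()')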