This question already has answers here:
How do I execute a program or call a system command?
(65 answers)
Closed 9 years ago.
I want to perform various Linux commands/operations from a Python script. I will capture the output, verify/process it, and then continue executing more commands in the script, sometimes including remote execution.
I have tried both the os and subprocess modules. The caveat is that I am not able to combine them: a state change made through one module (such as a changed directory) is not reflected in calls made through the other; each call only sees what that particular module set up.
For example:
os.chdir(dirname)
os.system(cmd)
# p = subprocess.Popen(cmd)
Here the directory change from os.chdir is not applied to the subprocess call, so we have to stick with one module or the other. If I use subprocess, I have to build shell command strings for it.
Added: cwd= is a solution for subprocess.Popen, but then I would have to pass the cwd option to every future command if they should all run from that directory.
Is there a better way where we can use both of these modules together?
Or
Is there another, better module available for command execution?
Also, I would like to know the pros, cons, and caveats of both modules.
os.system always runs /bin/sh, which parses the command string. This can be a security risk if you have whitespace, $ etc. in the command arguments, or the user has a shell config file. To avoid all such risks, use subprocess with a list or tuple of strings as the command (shell=False) instead.
To emulate os.chdir in the command, use the cwd= argument in subprocess.
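A minimal sketch of that approach: instead of os.chdir() plus os.system(), pass the working directory per call via cwd=, so the Python process itself never changes directory (here "/" stands in for whatever directory you need, and the child command just prints its working directory):

```python
import subprocess
import sys

# Run a command "in" a given directory without chdir-ing ourselves;
# the child inherits cwd= as its working directory.
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    cwd="/",
)
print(out.strip())
```

Because the list form with shell=False is used, no shell parses the command, which also avoids the quoting risks described above.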
This question already has answers here:
Assign environment variables from bash script to current session from Python
(3 answers)
Closed 5 years ago.
I have a script called "script.sh," whose contents are:
#!/bin/sh
export A=5
I want to execute this script from within python (iPython actually) and read the variable 'A'.
import os
import subprocess
subprocess.call('./script.sh')
A=os.environ['A']
Unfortunately, this doesn't seem to work, giving me an error that A cannot be found. If I understand correctly, subprocess is actually running in a different shell than the one that os.environ queries. But then why can't I run something like:
subprocess.call('echo $A')
?
What should I change to make this work? In general, I just want to obtain the value of "A" from the script, preferably by executing the script through some form of shell (the actual script is quite long).
For some more info: the script will contain login credentials, so ideally I'd like a safe, minimalist way of accessing their values.
You need to source the script in the subshell (so it sets the variable in the same shell process), then echo the variable in that subshell:
a = subprocess.check_output('source ./script.sh; echo "$A"', shell=True)
Then you can read from the pipe to get the value of the variable.
The trick is to spawn a new shell and tell it to both interpret your script's code and print out its environment in a way that Python can read. A one-liner:
In [10]: subprocess.check_output(["bash", "-c", "source ./script.sh; env"])
Out[10]: '...\nA=5\n...'
What's happening: In general, environment variables are set at the beginning of a program, and any subprocesses can't modify their parent's environment; it's a sort of sandbox. But source is a bash builtin where bash says "instead of spawning script.sh as a new (sub-)subprocess which couldn't modify my environ, run the lines of code as myself (bash) and modify my environ accordingly for future commands". And env is tacked on so that bash prints the environment separated by newlines. check_output simply grabs that output and brings it back into Python.
(As a side note, that source command is what you use to update a shell to use a certain virtualenv: source my_project/bin/activate. Then the $PATH and other variables of your current shell are updated to use the virtualenv python and libraries for the rest of that session. You can't just say my_project/bin/activate since it would set them in a subshell, doing nothing :))
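Building on the env trick above, here is a hedged sketch of reading the exported variables back into Python as a dict. Since the real script.sh isn't available here, an equivalent inline `export A=5` stands in for sourcing it:

```python
import subprocess

# Run the script's commands in a shell, then dump the resulting
# environment; "export A=5" stands in for ". ./script.sh".
out = subprocess.check_output(["sh", "-c", "export A=5; env"])

# Parse NAME=value lines into a dict. Caveat: values that contain
# newlines would break this simple line-based parse.
env = dict(
    line.split("=", 1)
    for line in out.decode().splitlines()
    if "=" in line
)
print(env["A"])
```

For credentials this keeps the values inside the Python process rather than echoing a single variable, but the same caveat about untrusted script contents applies.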
This question already has answers here:
Running Bash commands in Python
(11 answers)
Closed 2 years ago.
I am trying to run both Python and bash commands in a bash script.
In the bash script, I want to execute some bash commands enclosed by a Python loop:
#!/bin/bash
python << END
for i in range(1000):
    # execute some bash command, such as echoing i
END
How can I do this?
Use subprocess, e.g.:
import subprocess
# ...
subprocess.call(["echo", str(i)])  # arguments must be strings, so convert i
There is another function like subprocess.call: subprocess.check_call. It is exactly like call, except that it raises an exception if the executed command returns a non-zero exit code. This is often the desirable behaviour in scripts and utilities.
subprocess.check_output behaves the same as check_call, but returns the standard output of the program.
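For instance, a quick sketch contrasting the two:

```python
import subprocess

# check_call raises CalledProcessError on a non-zero exit code;
# with exit code 0 it simply returns 0.
subprocess.check_call(["true"])

# check_output behaves the same, but also captures and returns stdout.
out = subprocess.check_output(["echo", "hello"])
print(out)
```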
If you do not need shell features (such as variable expansion, wildcards, ...), never use shell=True (shell=False is the default). If you use shell=True then shell escaping is your job with these functions and they're a security hole if passed unvalidated user input.
The same is true of os.system() -- it is a frequent source of security issues. Don't use it.
Look in to the subprocess module. There is the Popen method and some wrapper functions like call.
If you need to check the output (retrieve the result string):
output = subprocess.check_output(args ....)
If you want to wait for execution to end before proceeding:
exitcode = subprocess.call(args ....)
If you need more functionality like setting environment variables, use the underlying Popen constructor:
subprocess.Popen(args ...)
Remember that subprocess is the higher-level module; it should replace the legacy functions from the os module.
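For example, a sketch of Popen with a custom environment. Note that env= replaces the child's entire environment, so include everything the child needs; GREETING is a made-up variable for illustration:

```python
import subprocess
import sys

# Launch a child with an explicit environment and capture its stdout.
p = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.environ['GREETING'])"],
    env={"GREETING": "hello"},
    stdout=subprocess.PIPE,
)
out, _ = p.communicate()  # wait for exit and collect output
print(out.strip())
```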
I used this when running from my IDE (PyCharm).
import subprocess
subprocess.check_call('mybashcommand', shell=True)
I am trying to compile a C program using Python and want to give input using "<" operator but it's not working as expected.
If I compile the C program and run it by giving input though a file it works; for example
./a.out <inp.txt works
But if I try to do the same thing from a Python script, it does not work as expected.
For example:
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x"])
and
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x","<inp.txt"])
Both scripts ask for input through the terminal, but I think the second script should read from the file. Why do both programs behave the same?
To complement @Jonathan Leffler's and @alastair's helpful answers:
Assuming you control the string you're passing to the shell for execution, I see nothing wrong with using the shell for convenience. [1]
subprocess.call() has an optional Boolean shell parameter, which causes the command to be passed to the shell, enabling I/O redirection, referencing environment variables, ...:
subprocess.call("./x <inp.txt", shell = True)
Note how the entire command line is passed as a single string rather than an array of arguments.
[1]
Avoid use of the shell in the following cases:
If your Python code must run on platforms other than Unix-like ones, such as Windows.
If performance is paramount.
If you find yourself "outsourcing" tasks better handled on the Python side.
If you're concerned about lack of predictability of the shell environment (as #alastair is):
subprocess.call with shell = True always creates non-interactive non-login instances of /bin/sh - note that it is NOT the user's default shell that is used.
sh does NOT read initialization files for non-interactive non-login shells (neither system-wide nor user-specific ones).
Note that even on platforms where sh is bash in disguise, bash will act this way when invoked as sh.
Every shell instance created with subprocess.call with shell = True is its own world, and its environment is neither influenced by previous shell instances nor does it influence later ones.
However, the shell instances created do inherit the environment of the python process itself:
If you started your Python program from an interactive shell, then that shell's environment is inherited. Note that this only pertains to the current working directory and environment variables, and NOT to aliases, shell functions, and shell variables.
Generally, that's a feature, given that Python (CPython) itself is designed to be controllable via environment variables (for 2.x, see https://docs.python.org/2/using/cmdline.html#environment-variables; for 3.x, see https://docs.python.org/3/using/cmdline.html#environment-variables).
If needed, you can supply your own environment to the shell via the env parameter; note, however, that you'll have to supply the entire environment in that event, potentially including variables such as USER and HOME, if needed; simple example, defining $PATH explicitly:
subprocess.call('echo $PATH', shell=True,
                env={'PATH': '/sbin:/bin:/usr/bin'})
The shell does I/O redirection for a process. Based on what you're saying, the subprocess module does not do I/O redirection like that. To demonstrate, run:
subprocess.call(["sh","-c", "./x <inp.txt"])
That runs the shell and should redirect the I/O. With your code, your program ./x is being given an argument <inp.txt which it is ignoring.
NB: the alternative call to subprocess.call is purely for diagnostic purposes, not a recommended solution. The recommended solution involves reading the (Python 2) subprocess module documentation (or the Python 3 documentation for it) to find out how to do the redirection using the module.
import subprocess
i_file = open("inp.txt")
subprocess.call("./x", stdin=i_file)
i_file.close()
If your script is about to exit so you don't have to worry about wasted file descriptors, you can compress that to:
import subprocess
subprocess.call("./x", stdin=open("inp.txt"))
By default, the subprocess module does not pass the arguments to the shell. Why? Because running commands via the shell is dangerous; unless they're correctly quoted and escaped (which is complicated), it is often possible to convince programs that do this kind of thing to run unwanted and unexpected shell commands.
Using the shell for this would be wrong anyway. If you want to take input from a particular file, you can use subprocess.Popen, setting the stdin argument to a file object for inp.txt (or to its file descriptor, which you can get by calling fileno() on a Python file object).
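A sketch of that approach, with `cat` standing in for `./x` since the actual program isn't available here:

```python
import os
import subprocess
import tempfile

# Create a stand-in input file (in the question this is inp.txt).
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("42\n")

# Feed the file to the child's stdin -- no shell, no redirection syntax.
with open(f.name) as inp:
    out = subprocess.check_output(["cat"], stdin=inp)
print(out)

os.remove(f.name)  # clean up the temporary file
```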
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Calling an external command in Python
I want to run commands in another directory using python.
What are the various ways used for this and which is the most efficient one?
What I want to do is as follows,
cd dir1
execute some commands
return
cd dir2
execute some commands
Naturally, if you only want to run a (simple) command on the shell from Python, you do it via the system function of the os module. For instance:
import os
os.system('touch myfile')
If you want something more sophisticated that allows greater control over the execution of the command, use the subprocess module that others here have suggested.
For further information, follow these links:
Python official documentation on os.system()
Python official documentation on the subprocess module
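For the cd dir1 / cd dir2 pattern from the question specifically, a sketch using subprocess with cwd= (two throwaway temp directories stand in for dir1 and dir2, and the child command just prints its working directory):

```python
import os
import subprocess
import sys
import tempfile

# Throwaway directories standing in for dir1 and dir2.
dir1, dir2 = tempfile.mkdtemp(), tempfile.mkdtemp()

outs = []
for d in (dir1, dir2):
    out = subprocess.check_output(
        [sys.executable, "-c", "import os; print(os.getcwd())"],
        cwd=d,  # run this command "in" d, without chdir-ing ourselves
    )
    outs.append(out.decode().strip())
print(outs)
```

This avoids any actual cd: each command is simply started in the directory you name.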
If you want more control over the called shell command (i.e. access to stdin and/or stdout pipes, or starting it asynchronously), you can use the subprocess module:
import subprocess
p = subprocess.Popen('ls -al', shell=True, stdout=subprocess.PIPE)
stdout, stderr = p.communicate()  # stderr is None here, since it was not piped
See also subprocess module documentation.
os.system("/dir/to/executeble/COMMAND")
for example
os.system("/usr/bin/ping www.google.com")
if the ping program is located in /usr/bin.
Naturally you need to import the os module.
os.system does not capture the command's output; if you need the output, use
subprocess.check_output or a similar function.
You can use Python's subprocess module, which offers functions to execute commands, check output, receive error messages, etc.
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Python subprocess wildcard usage
Using the Python 2.6 subprocess module, I need to run a command on a src.rpm file that I am building with a previous subprocess call.
Unfortunately, I am working with spec files that are not consistent, so I only have a vague idea of what the filename of the src.rpm should look like (for instance, I know the name of the package and the extension in something named "{package}-{version}.src.rpm" but not the version).
I do know, however, that I will only have one src.rpm file in the directory that I am looking, so I can call mock with a command like
mock {options} *.src.rpm
and have it work in shell, but subprocess doesn't seem to want to accept the expansion. I've tried using (shell=True) as an argument to subprocess.call() but even if it worked I would rather avoid it.
How do I get something like
subprocess.call("mock *.src.rpm".split())
to run?
Use the glob package:
import subprocess
from glob import glob
subprocess.call(["mock"] + glob("*.src.rpm"))
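Since the question guarantees exactly one src.rpm, it may be worth asserting that before handing the list to mock; a sketch against a throwaway directory (with a fake package file standing in for the real build output):

```python
import glob
import os
import tempfile

# A throwaway directory with one fake src.rpm stands in for the
# real build directory.
tmp = tempfile.mkdtemp()
open(os.path.join(tmp, "pkg-1.0.src.rpm"), "w").close()

matches = glob.glob(os.path.join(tmp, "*.src.rpm"))
assert len(matches) == 1, "expected exactly one src.rpm"
cmd = ["mock"] + matches  # ready for subprocess.call(cmd)
print(cmd)
```

If the glob matches nothing, the list is empty and mock would silently run with no file argument, so failing fast here is cheap insurance.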
The wildcard * has to be interpreted by the shell. When you run subprocess.call, by default it doesn't invoke a shell, but you can pass shell=True as an argument. Note that with shell=True the command should be a single string, not a split list (with a list, only the first element is treated as the command):
subprocess.call("mock *.src.rpm", shell=True)