How to execute a command prompt command from Python

I tried something like this, but with no effect:
command = "cmd.exe"
proc = subprocess.Popen(command, stdin = subprocess.PIPE, stdout = subprocess.PIPE)
proc.stdin.write("dir c:\\")

How about simply:
import os
os.system('dir c:\\')

You probably want to try something like this:
command = "cmd.exe /C dir C:\\"
I don't think you can pipe into cmd.exe... If you are coming from a Unix background, well, cmd.exe has some ugly warts!
EDIT: According to Sven Marnach, you can pipe to cmd.exe. I tried the following in a Python shell:
>>> import subprocess
>>> proc = subprocess.Popen('cmd.exe', stdin = subprocess.PIPE, stdout = subprocess.PIPE)
>>> stdout, stderr = proc.communicate('dir c:\\')
>>> stdout
'Microsoft Windows [Version 6.1.7600]\r\nCopyright (c) 2009 Microsoft Corporatio
n. All rights reserved.\r\n\r\nC:\\Python25>More? '
As you can see, you still have a bit of work to do (only the first line is returned), but you might be able to get this to work...
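For completeness, here is a minimal sketch of one way to get past that More? prompt, assuming Python 3 (so the input must be bytes) and assuming that an explicit exit is enough to end the cmd.exe session so communicate() can collect everything:
import subprocess

proc = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# End the command with a newline and append "exit" so cmd.exe terminates.
stdout, stderr = proc.communicate(b'dir c:\\\r\nexit\r\n')
print(stdout.decode(errors='replace'))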

Try:
import os
os.popen("Your command here")

Using ' and " at the same time works great for me (Windows 10, Python 3):
import os
os.system('"some cmd command here"')
For example, to open my web browser I can use this:
os.system(r'"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"')
(Edit)
An easier way to open the browser is the webbrowser module (the URL below is just a placeholder):
import webbrowser
webbrowser.open('https://example.com')  # pass whatever URL you want to open

Try adding a call to proc.stdin.flush() after writing to the pipe and see if things start behaving more as you expect. Explicitly flushing the pipe means you don't need to worry about exactly how the buffering is set up.
Also, don't forget to include a "\n" at the end of your command or your child shell will sit there at the prompt waiting for completion of the command entry.
I wrote about using Popen to manipulate an external shell instance in more detail at: Running three commands in the same process with Python
As was the case in that question, this trick can be valuable if you need to maintain shell state across multiple out-of-process invocations on a Windows machine.
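As a sketch of that advice, assuming Python 3 (so bytes go down the pipe; pass text=True to Popen if you'd rather write str):
import subprocess

proc = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write(b'dir c:\\\n')  # the trailing newline completes the command entry
proc.stdin.flush()               # explicit flush, so buffering can't hold the command back
print(proc.stdout.readline().decode(errors='replace'))  # read the first line back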

Taking some inspiration from Daren Thomas's answer (and edit), try this:
proc = subprocess.Popen('dir C:\\', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = proc.communicate()
out will now contain the text output.
The key nugget here is that the subprocess module already provides you shell integration with shell=True, so you don't need to call cmd.exe directly.
As a reminder, if you're in Python 3, this is going to be bytes, so you may want to do out.decode() to convert to a string.

Why do you want to call cmd.exe? cmd.exe is a command-line shell. If you want to change directory, use os.chdir("C:\\"). Try not to call external commands if Python can provide the same functionality. In fact, most operating system commands are provided through the os module (and sys). I suggest you take a look at the os module documentation to see the various methods available.
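For instance, a quick sketch of shell commands with direct os equivalents (Windows paths assumed here, to match the question):
import os

os.chdir("C:\\")        # instead of: cd C:\
print(os.listdir("."))  # instead of: dir
print(os.getcwd())      # instead of: cd (print the current directory)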

It's very simple. You need just two lines of code, using only a built-in function, and it takes input and runs until you stop it. Also, leave that 'cmd' in quotes as it is. Here is the code:
import os
os.system('cmd')
Now just run this code and see the whole Windows command prompt in your Python project!

Here's a way to just execute a command line command and get its output using the subprocess module:
import subprocess
# You can put the parts of your command in the list below or just use a string directly.
command_to_execute = ["echo", "Test"]
run = subprocess.run(command_to_execute, capture_output=True)
print(run.stdout) # the output "Test"
print(run.stderr) # the error part of the output
Just don't forget the capture_output=True argument and you're fine. Also, you will get the output as a binary string (b"something" in Python), but you can easily convert it using run.stdout.decode().

In Python, you can use CMD commands using these lines :
import os
os.system("YOUR_COMMAND_HERE")
Just replace YOUR_COMMAND_HERE with the command you like.

From Python you can run it directly using the code below:
import subprocess
proc = subprocess.check_output(r'C:\Windows\System32\cmd.exe /k %windir%\System32\reg.exe ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 0 /f', stderr=subprocess.STDOUT, shell=True)
print(str(proc))
The command in the first parameter here just changes a User Account Control setting; you may customize it with your own command.

Related

Executing bash command within a python script called from crontab

I am writing a Python script that I want to call from crontab. The script calls the xrandr command and saves its output in a variable like so:
output = subprocess.run('xrandr', shell=True, stdout=subprocess.PIPE).stdout.decode('utf-8')
I want the output of xrandr to be saved in a string.
This all works fine if I execute it from a terminal, but if I run it using cron, the variable output stays empty.
The rest of the code is executed normally, so cron isn't the problem.
So how can I make this command execute properly?
Thank you for your suggestions.
If you want to store the output, you can use Popen with communicate(), like this:
from subprocess import Popen, PIPE
proc = Popen('xrandr', shell=True, stdout=PIPE)
text = proc.communicate()[0].decode('utf-8')
print(text)
Or stick with subprocess.run, which already collects the output for you; in that case there is no communicate() call at all:
import subprocess
result = subprocess.run('xrandr', shell=True, stdout=subprocess.PIPE)
print(result.stdout.decode('utf-8'))
In the cron environment the PATH variable is probably not set, so you should provide the absolute path to xrandr (you can find it with which xrandr).
E.g. if the path is /usr/bin/xrandr, try:
import subprocess
output = subprocess.run('/usr/bin/xrandr', shell=True, stdout=subprocess.PIPE).stdout.decode('utf-8')
print(output)
A better way, in my opinion, is to capture stderr as well and log the error when it occurs.
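A minimal sketch of that suggestion, capturing stderr alongside stdout and logging the error on a non-zero exit (the logging module and the absolute path are my own choices here):
import logging
import subprocess

result = subprocess.run(['/usr/bin/xrandr'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if result.returncode != 0:
    logging.error("xrandr failed: %s", result.stderr.decode('utf-8'))
output = result.stdout.decode('utf-8')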

How to run a command line in txt file from Jupyter Notebook (python 3.8) and get output? [duplicate]

How do I call an external command within Python as if I had typed it in a shell or command prompt?
Use the subprocess module in the standard library:
import subprocess
# for simple commands
subprocess.run(["ls", "-l"])
# for complex commands, with many args, use string + `shell=True`:
cmd_str = "ls -l /tmp | awk '{print $3,$9}' | grep root"
subprocess.run(cmd_str, shell=True)
The advantage of subprocess.run over os.system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc...).
Even the documentation for os.system recommends using subprocess instead:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
On Python 3.4 and earlier, use subprocess.call instead of .run:
subprocess.call(["ls", "-l"])
Here is a summary of ways to call external programs, including their advantages and disadvantages:
os.system passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, et cetera. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
os.popen will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the I/O slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass it as a list then you don't need to worry about escaping anything. Example:
print(os.popen("ls -l").read())
subprocess.Popen. This is intended as a replacement for os.popen, but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:
print(subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read())
instead of
print(os.popen("echo Hello World").read())
but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.
subprocess.call. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = subprocess.call("echo Hello World", shell=True)
subprocess.run. Python 3.5+ only. Similar to the above but even more flexible and returns a CompletedProcess object when the command finishes executing.
os.fork, os.exec, os.spawn are similar to their C language counterparts, but I don't recommend using them directly.
The subprocess module should probably be what you use.
Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted. For example, if a user is entering some/any part of the string. If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:
print(subprocess.Popen("echo %s" % user_input, shell=True, stdout=subprocess.PIPE).stdout.read())
and imagine that the user enters something like "my mama didnt love me && rm -rf /", which could erase the whole filesystem.
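If you cannot avoid untrusted input, a safer sketch is to skip the shell entirely and pass the input as a list element, where it is treated as data rather than code:
import subprocess

user_input = "my mama didnt love me && rm -rf /"
# No shell is involved, so the && has no special meaning here.
print(subprocess.run(["echo", user_input], stdout=subprocess.PIPE).stdout.decode())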
Typical implementation:
import subprocess
p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print(line.decode(), end='')
retval = p.wait()
You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().
Some hints on detaching the child process from the calling one (starting the child process in background).
Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.
The classical example from the subprocess module documentation is:
import subprocess
import sys
# Some code here
pid = subprocess.Popen([sys.executable, "longtask.py"]) # Call subprocess
# Some more code here
The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.
My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.
On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API.
If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Update 2015-10-27: eryksun notes in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).
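As a side note, Python 3.7+ exposes these Windows flags as constants on the subprocess module itself, so a sketch without the hand-defined value:
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=subprocess.DETACHED_PROCESS).pid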
On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:
pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
I have not checked the code on other platforms and do not know the reasons for this behaviour on FreeBSD. If anyone knows, please share your ideas. Googling about starting background processes in Python has not shed any light yet.
import os
os.system("your command")
Note that this is dangerous, since the command isn't sanitized. I leave it up to you to google for the relevant documentation on the 'os' and 'sys' modules. There are a bunch of functions (exec* and spawn*) that will do similar things.
I'd recommend using the subprocess module instead of os.system because it does shell escaping for you and is therefore much safer.
subprocess.call(['ping', 'localhost'])
import os
cmd = 'ls -al'
os.system(cmd)
If you want to return the results of the command, you can use os.popen. However, this is deprecated since version 2.6 in favor of the subprocess module, which other answers have covered well.
There are lots of different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of the libraries, I've listed and linked the documentation for each of them below.
Sources
subprocess: https://docs.python.org/3.5/library/subprocess.html
shlex: https://docs.python.org/3/library/shlex.html
os: https://docs.python.org/3.5/library/os.html
sh: https://amoffat.github.io/sh/
plumbum: https://plumbum.readthedocs.io/en/latest/
pexpect: https://pexpect.readthedocs.io/en/stable/
fabric: http://www.fabfile.org/
envoy: https://github.com/kennethreitz/envoy
commands: https://docs.python.org/2/library/commands.html
Those are all the libraries; hopefully this will help you make a decision on which one to use :)
subprocess
Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.
subprocess.run(["ls", "-l"]) # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE) # This will run the command and return any output
subprocess.run(shlex.split("ls -l")) # You can also use the shlex library to split the command
os
os is used for "operating system dependent functionality". It can also be used to call external commands with os.system and os.popen (note: there is also a subprocess.Popen). os will always run the shell and is a simple alternative for people who don't need to, or don't know how to, use subprocess.run.
os.system("ls -l") # Run command
os.popen("ls -l").read() # This will run the command and return any output
sh
sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.
sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function
plumbum
plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh. Plumbum is useful if you want to run a pipeline without the shell.
ls_cmd = plumbum.local["ls"]["-l"] # Get command
ls_cmd() # Run command
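That pipeline claim deserves a sketch: plumbum overloads | to build pipelines without any shell (the command names here are just examples):
from plumbum import local

# Build `ls -l | grep py` as a pipeline object; no shell is involved.
chain = local["ls"]["-l"] | local["grep"]["py"]
print(chain())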
pexpect
pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.
pexpect.run("ls -l") # Run command as normal
child = pexpect.spawn('scp foo user@example.com:.') # Spawns child application
child.expect('Password:') # When this is the output
child.sendline('mypassword')
fabric
fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands in a secure shell (SSH).
fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture=True) # Run command and receive output
envoy
envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.
r = envoy.run("ls -l") # Run command
r.std_out # Get output
commands
commands contains wrapper functions for os.popen, but it has been removed from Python 3 since subprocess is a better alternative.
With the standard library
Use the subprocess module (Python 3):
import subprocess
subprocess.run(['ls', '-l'])
It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.
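For example, a shell pipe like ls -l | grep py, done without shell=True, already takes two Popen objects wired together; a sketch:
import subprocess

ls = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
grep = subprocess.Popen(['grep', 'py'], stdin=ls.stdout, stdout=subprocess.PIPE)
ls.stdout.close()  # let ls receive SIGPIPE if grep exits first
print(grep.communicate()[0].decode())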
Note on Python version: If you are still using Python 2, subprocess.call works in a similar way.
ProTip: shlex.split can help you parse the command for run, call, and other subprocess functions in case you don't want (or you can't!) provide them in the form of lists:
import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))
With external dependencies
If you do not mind external dependencies, use plumbum:
from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())
It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.
Another popular library is sh:
from sh import ifconfig
print(ifconfig('wlan0'))
However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.
I always use fabric for doing these things. Here is a demo code:
from fabric.operations import local
result = local('ls', capture=True)
print "Content:/n%s" % (result, )
But this seems to be a good tool: sh (Python subprocess interface).
Look at an example:
from sh import vgdisplay
print(vgdisplay())
print(vgdisplay('-v'))
print(vgdisplay(v=True))
Check the "pexpect" Python library, too.
It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:
child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
If you need the output from the command you are calling,
then you can use subprocess.check_output (Python 2.7+).
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
Also note the shell parameter.
If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
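As a quick illustration of that last point, the shell's filename wildcard has a direct Python stand-in:
import glob

# Instead of shell=True plus `ls *.py`, expand the wildcard in Python:
print(glob.glob('*.py'))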
Update:
subprocess.run is the recommended approach as of Python 3.5 if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See this question for how.)
Here's some examples from the documentation.
Run a process:
>>> subprocess.run(["ls", "-l"]) # Doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)
Raise on failed run:
>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
Capture output:
>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')
Original answer:
I recommend trying Envoy. It's a wrapper for subprocess, which in turn aims to replace the older modules and functions. Envoy is subprocess for humans.
Example usage from the README:
>>> r = envoy.run('git config', data='data to pipe in', timeout=2)
>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''
Pipe stuff around too:
>>> r = envoy.run('uptime | pbcopy')
>>> r.command
'pbcopy'
>>> r.status_code
0
>>> r.history
[<Response 'uptime'>]
This is how I run my commands. This code has pretty much everything you need:
from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd , shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()
How to execute a program or call a system command from Python
Simple, use subprocess.run, which returns a CompletedProcess object:
>>> from subprocess import run
>>> from shlex import split
>>> completed_process = run(split('python --version'))
Python 3.8.8
>>> completed_process
CompletedProcess(args=['python', '--version'], returncode=0)
(run wants a list of lexically parsed shell arguments: this is what you'd type in a shell, split on spaces, but not where the spaces are quoted, so use a specialized function, split, to split up what you would literally type into your shell)
Why?
As of Python 3.5, the documentation recommends subprocess.run:
The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.
Here's an example of the simplest possible usage - and it does exactly as asked:
>>> from subprocess import run
>>> from shlex import split
>>> completed_process = run(split('python --version'))
Python 3.8.8
>>> completed_process
CompletedProcess(args=['python', '--version'], returncode=0)
run waits for the command to successfully finish, then returns a CompletedProcess object. It may instead raise TimeoutExpired (if you give it a timeout= argument) or CalledProcessError (if it fails and you pass check=True).
As you might infer from the above example, stdout and stderr both get piped to your own stdout and stderr by default.
We can inspect the returned object and see the command that was given and the returncode:
>>> completed_process.args
['python', '--version']
>>> completed_process.returncode
0
Capturing output
If you want to capture the output, you can pass subprocess.PIPE to the appropriate stderr or stdout:
>>> from subprocess import PIPE
>>> completed_process = run(split('python --version'), stdout=PIPE, stderr=PIPE)
>>> completed_process.stdout
b'Python 3.8.8\n'
>>> completed_process.stderr
b''
And those respective attributes return bytes.
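If you would rather get str back than bytes, text=True (available since Python 3.7; use universal_newlines=True on older versions) decodes for you. A small sketch continuing the same session:
>>> completed_process = run(split('python --version'), stdout=PIPE, text=True)
>>> completed_process.stdout
'Python 3.8.8\n'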
Pass a command list
One might easily move from manually providing a command string (like the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It's better to assume you don't trust the input.
>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = run(args, stdout=PIPE)
>>> cp.stdout
b'Hello there.\n This is indented.\n'
Note, only args should be passed positionally.
Full Signature
Here's the actual signature in the source and as shown by help(run):
def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
The popenargs and kwargs are given to the Popen constructor. input can be a string of bytes (or unicode, if you specify encoding or universal_newlines=True) that will be piped to the subprocess's stdin.
The documentation describes timeout= and check=True better than I could:
The timeout argument is passed to Popen.communicate(). If the timeout
expires, the child process will be killed and waited for. The
TimeoutExpired exception will be re-raised after the child process has
terminated.
If check is true, and the process exits with a non-zero exit code, a
CalledProcessError exception will be raised. Attributes of that
exception hold the arguments, the exit code, and stdout and stderr if
they were captured.
and this example for check=True is better than one I could come up with:
>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
Expanded Signature
Here's an expanded signature, as given in the documentation:
subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None,
shell=False, cwd=None, timeout=None, check=False, encoding=None,
errors=None)
Note that this indicates that only the args list should be passed positionally. So pass the remaining arguments as keyword arguments.
Popen
When should you use Popen instead? I would struggle to find a use-case based on the arguments alone. Direct usage of Popen would, however, give you access to its methods, including poll, send_signal, terminate, and wait.
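A sketch of what that direct access looks like, using the Unix sleep command as a stand-in for a long-running child:
import subprocess
import time

proc = subprocess.Popen(['sleep', '10'])
time.sleep(1)
if proc.poll() is None:  # still running?
    proc.terminate()     # politely ask it to stop
    proc.wait()          # reap the child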
Here's the Popen signature as given in the source. I think this is the most precise encapsulation of the information (as opposed to help(Popen)):
def __init__(self, args, bufsize=-1, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=True,
shell=False, cwd=None, env=None, universal_newlines=None,
startupinfo=None, creationflags=0,
restore_signals=True, start_new_session=False,
pass_fds=(), *, user=None, group=None, extra_groups=None,
encoding=None, errors=None, text=None, umask=-1, pipesize=-1):
But more informative is the Popen documentation:
subprocess.Popen(args, bufsize=-1, executable=None, stdin=None, stdout=None,
stderr=None, preexec_fn=None, close_fds=True, shell=False, cwd=None,
env=None, universal_newlines=None, startupinfo=None, creationflags=0,
restore_signals=True, start_new_session=False, pass_fds=(), *, group=None,
extra_groups=None, user=None, umask=-1, encoding=None, errors=None,
text=None)
Execute a child program in a new process. On POSIX, the class uses
os.execvp()-like behavior to execute the child program. On Windows,
the class uses the Windows CreateProcess() function. The arguments to
Popen are as follows.
Understanding the remaining documentation on Popen will be left as an exercise for the reader.
Use subprocess.
...or for a very simple command:
import os
os.system('cat testfile')
As of Python 3.7.0, released on June 27th, 2018 (https://docs.python.org/3/whatsnew/3.7.html), you can achieve your desired result in the most powerful, yet equally simple, way. This answer intends to show you the essential summary of the various options in a short manner. For in-depth answers, please see the other ones.
TL;DR in 2021
The big advantage of os.system(...) was its simplicity. subprocess is better and still easy to use, especially as of Python 3.5.
import subprocess
subprocess.run("ls -a", shell=True)
Note: This is the exact answer to your question - running a command like in a shell.
Preferred Way
If possible, remove the shell overhead and run the command directly (requires a list).
import subprocess
subprocess.run(["help"])
subprocess.run(["ls", "-a"])
Pass program arguments in a list, and don't add quote-escaping for arguments containing spaces.
Advanced Use Cases
Checking The Output
The following code speaks for itself:
import subprocess
result = subprocess.run(["ls", "-a"], capture_output=True, text=True)
if "stackoverflow-logo.png" in result.stdout:
print("You're a fan!")
else:
print("You're not a fan?")
result.stdout is all normal program output excluding errors. Read result.stderr to get them.
capture_output=True - turns capturing on. Otherwise result.stderr and result.stdout would be None. Available from Python 3.7.
text=True - a convenience argument added in Python 3.7 which converts the received binary data to Python strings you can easily work with.
Checking the returncode
Do
if result.returncode == 127: print("The program failed for some weird reason")
elif result.returncode == 0: print("The program succeeded")
else: print("The program failed unexpectedly")
If you just want to check if the program succeeded (returncode == 0) and otherwise throw an Exception, there is a more convenient function:
result.check_returncode()
But it's Python, so there's an even more convenient argument check which does the same thing automatically for you:
result = subprocess.run(..., check=True)
stderr should be inside stdout
You might want to have all program output inside stdout, even errors. To accomplish this, run
result = subprocess.run(..., stderr=subprocess.STDOUT)
result.stderr will then be None and result.stdout will contain everything.
Using shell=False with an argument string
shell=False expects a list of arguments. You might, however, split an argument string on your own using shlex.
import subprocess
import shlex
subprocess.run(shlex.split("ls -a"))
That's it.
Common Problems
Chances are high you just started using Python when you come across this question. Let's look at some common problems.
FileNotFoundError: [Errno 2] No such file or directory: 'ls -a': 'ls -a'
You're running a subprocess without shell=True. Either use a list (["ls", "-a"]) or set shell=True.
TypeError: [...] NoneType [...]
Check that you've set capture_output=True.
TypeError: a bytes-like object is required, not [...]
You always receive byte results from your program. If you want to work with it like a normal string, set text=True.
subprocess.CalledProcessError: Command '[...]' returned non-zero exit status 1.
Your command didn't run successfully. You could disable returncode checking or check your actual program's validity.
TypeError: __init__() got an unexpected keyword argument [...]
You're likely using a version of Python older than 3.7.0; update it to the most recent one available. Otherwise there are other answers in this Stack Overflow post showing you older alternative solutions.
os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.
Get more information here.
There is also Plumbum
>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
u'build.py\ndist\ndocs\nLICENSE\nplumbum\nREADME.rst\nsetup.py\ntests\ntodo.txt\n'
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad() # Notepad window pops up
u'' # Notepad window is closed by user, command returns
Use:
import os
cmd = 'ls -al'
os.system(cmd)
os - This module provides a portable way of using operating system-dependent functionality.
For more os functions, here is the documentation.
It can be this simple:
import os
cmd = "your command"
os.system(cmd)
There is another difference here which is not mentioned previously.
subprocess.Popen executes the <command> as a subprocess. In my case, I need to execute file <a> which needs to communicate with another program, <b>.
I tried subprocess, and execution was successful. However, <b> could not communicate with <a>.
Everything is normal when I run both from the terminal.
One more:
(NOTE: kwrite behaves different from other applications. If you try the below with Firefox, the results will not be the same.)
If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of the console.
Does anyone know how to run kwrite so that it is not a subprocess (i.e., in the system monitor it should appear at the leftmost edge of the tree)?
os.system does not allow you to store results, so if you want to store results in some list or something, a subprocess.call works.
subprocess.check_call is convenient if you don't want to test return values. It throws an exception on any error.
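A quick sketch of check_call with a deliberately failing command:
import subprocess

try:
    subprocess.check_call(['ls', '/nonexistent'])
except subprocess.CalledProcessError as e:
    print("command failed with exit code", e.returncode)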
I tend to use subprocess together with shlex (to handle escaping of quoted strings):
>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print(call_params)
['ls', '-l', '/your/path/with spaces/']
>>> subprocess.call(call_params)
I wrote a library for this, shell.py.
It's basically a wrapper for popen and shlex for now. It also supports piping commands, so you can chain commands more easily in Python. So you can do things like:
ex('echo hello shell.py') | "awk '{print $2}'"
Under Linux, in case you would like to call an external command that will execute independently (will keep running after the Python script terminates), you can use a simple queue as a task spooler or the at command.
An example with task spooler:
import os
os.system('ts <your-command>')
Notes about task spooler (ts):
You could set the number of concurrent processes to be run ("slots") with:
ts -S <number-of-slots>
Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.
In Windows you can just import the subprocess module and run external commands by calling subprocess.Popen(), subprocess.Popen().communicate() and subprocess.Popen().wait() as below:
# Python script to run a command line
import subprocess

def execute(cmd):
    """
    Purpose : To execute a command and return exit status
    Argument : cmd - command to execute
    Return : exit_code
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()
    rc = process.wait()
    if rc != 0:
        print("Error: failed to execute command:", cmd)
        print(error)
    return result

command = "tasklist | grep python"
print("This process detail: \n", execute(command))
Output:
This process detail:
python.exe 604 RDP-Tcp#0 4 5,660 K
Invoke is a Python (2.7 and 3.4+) task execution tool and library. It provides a clean, high-level API for running shell commands:
>>> from invoke import run
>>> cmd = "pip install -r requirements.txt"
>>> result = run(cmd, hide=True, warn=True)
>>> print(result.ok)
True
>>> print(result.stdout.splitlines()[-1])
Successfully installed invocations-0.13.0 pep8-1.5.7 spec-1.3.1
You can use Popen, and then you can check the process's status:
from subprocess import Popen
proc = Popen(['ls', '-l'])
if proc.poll() is None:
    proc.kill()
Check out subprocess.Popen.

Popen with Context Managers

I have been trying to write a function which would execute a command passed to it through a parameter, using Popen along with context managers. Unfortunately, I am unable to get it to work. Can someone please help?
import os
import sys
import subprocess
import inspect

def run_process(cmd_args):
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE) as proc:
        log.write(proc.stdout.read())

run_process("print('Hello')")
The output expected is "Hello". Can someone please point out where I am going wrong?
What you have done is right if you are running a bash command through the subprocess.
Inside the context manager ("with ...") you are reading the output from the terminal, storing it as bytes in the "output" variable, and trying to print the bytes as text after decoding.
Try returning the value from the context manager and then decoding it in the calling function:
import os
import sys
import subprocess
import inspect

def run_process(cmd_args):
    # Added shell=True to the parameters below.
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE, shell=True) as proc:
        return proc.stdout.read()  # returns the output as bytes
        # Optionally you can use the encoding='utf-8' argument
        # instead and just print(proc.stdout.read()).

print(run_process("echo Hello").decode('utf-8'))
I was having a similar issue while pipelining a process to another program. I did the decoding in the other program and, surprisingly, it worked. Hope it works for you as well.
def run_process(cmd_args):
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE) as p:
        output = p.stdout.read()
    return output
It worked for the same question.
Popen runs the command it receives as you would run something in your terminal (for example, CMD on Windows or bash on Linux). So it does not execute Python code, but shell code (Bash on Linux, for example). The Python binary has a flag, -c, that does what you need: it executes a Python command right away. So you have two options (see the sketch after this list):
either use echo Hello (works on Windows or Linux too; echo exists both in batch and in bash),
or use python -c "print('Hello')" instead of just the print command.
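A sketch of the second option, using the current interpreter as the child process (sys.executable avoids hard-coding the python path):
import subprocess
import sys

# Run a Python one-liner in a child process via -c:
subprocess.run([sys.executable, '-c', "print('Hello')"])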
Without making too many changes to your existing script, I have edited it below, with comments indicating what I did to get it to work. I hope this helps.
import os
import sys
import subprocess
import inspect

def run_process(cmd_args):
    # Added shell=True to the parameters below.
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE, shell=True) as proc:
        output = proc.stdout.read()  # Reads the output from the process in bytes.
        print(output.decode('utf-8'))  # Converts bytes to UTF-8 format for readability.
        # Optionally you can use the encoding='utf-8' argument
        # instead and just print(proc.stdout.read()).

run_process("echo Hello")  # Use 'echo' in the command string to display the message.
Note: Read the Security Considerations section before using shell=True.
https://docs.python.org/3/library/subprocess.html#security-considerations

How to call a series of bash commands in python and store output

I am trying to run the following bash script in Python and store the readlist output. The readlist that I want stored as a Python list is a list of all files in the current directory ending in *concat_001.fastq.
I know it may be easier to do this in python (i.e.
import os
readlist = [f for f in os.listdir(os.getcwd()) if f.endswith("concat_001.fastq")]
readlist = sorted(readlist)
However, this is problematic, as I need Python to sort the list in EXACTLY the same way as bash, and I was finding that bash and Python sort certain things in different orders (e.g. Python and bash deal with capitalised and uncapitalised things differently). But when I tried
readlist = np.asarray(sorted(flist, key=str.lower))
I still found that two files starting with ML_ and M_ were sorted in a different order by bash and Python. Hence I am trying to run my exact bash script through Python, and then to use the sorted list generated by bash in my subsequent Python code.
input_suffix="concat_001.fastq"
ender=`echo $input_suffix | sed "s/concat_001.fastq/\*concat_001.fastq/g" `
readlist="$(echo $ender)"
I have tried
proc = subprocess.call(command1, shell=True, stdout=subprocess.PIPE)
proc = subprocess.call(command2, shell=True, stdout=subprocess.PIPE)
proc = subprocess.Popen(command3, shell=True, stdout=subprocess.PIPE)
But I just get: <subprocess.Popen object at 0x7f31cfcd9190>
Also - I don't understand the difference between subprocess.call and subprocess.Popen. I have tried both.
Thanks,
Ruth
So your question is a little confusing and does not exactly explain what you want. However, I'll try to give some suggestions to help you update it, or in my effort, answer it.
I will assume the following: your Python script is passing 'input_suffix' on the command line, and you want your Python program to receive the contents of 'readlist' when the external script finishes.
To make our lives simpler, and to allow for more complicated things later, I would put your commands in the following bash script:
script.sh
#!/bin/bash
input_suffix=$1
ender=`echo $input_suffix | sed "s/concat_001.fastq/\*concat_001.fastq/g"`
readlist="$(echo $ender)"
echo $readlist
You would execute this as script.sh "concat_001.fastq", where $1 takes in the first argument passed on the command line.
To use Python to execute external scripts, as you quite rightly found, you can use subprocess (or, as noted by another response, os.system - although subprocess is recommended).
The docs tell you that subprocess.call:
"Wait for command to complete, then return the returncode attribute."
and that
"For more advanced use cases when these do not meet your needs, use the underlying Popen interface."
Given you want to pipe the output from the bash script to your Python script, let's use Popen as suggested by the docs. As I posted in the other Stack Overflow answer, it could look like the following:
import subprocess
from subprocess import Popen, PIPE

# Execute our script and pipe the output to stdout
process = subprocess.Popen(['./script.sh', 'concat_001.fastq'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)

# Obtain the standard out, and standard error
stdout, stderr = process.communicate()
and then:
>>> print(stdout.decode())
*concat_001.fastq

How to redirect python OS system call to a file?

I have no idea why the below code is not working. The file arch_list does not get created or anything written to it. The commands work fine when run in the terminal alone.
from yum.plugins import PluginYumExit, TYPE_CORE, TYPE_INTERACTIVE
import os

requires_api_version = '2.3'
plugin_type = (TYPE_CORE, TYPE_INTERACTIVE)

ip_vm = ['192.168.239.133']

def get_arch():
    global ip_vm
    os.system("uname -p > ~/arch_list")
    for i in ip_vm:
        cmd = "ssh thejdeep@" + i + " 'uname -p' >> ~/arch_list"
        print(cmd)
        os.system(cmd)

def init_hook(conduit):
    conduit.info(2, 'Hello World !')
    get_arch()
I don't think os.system() will return to stdout in that case. You may try using subprocess.call() with the appropriate parameters.
Edit: Actually, I think I remember seeing similar behaviour with ssh when running in a standard bash loop. You might try adding a -n to your ssh call. I think that is the solution I used years ago in bash.
I just ran your code and it works fine for me, writing to the local arch file. I suspect adding more than one host to your list is where you start having problems. What version of Python are you running? I'm on 2.7.6.
os.system() will not redirect stdout and stderr.
You can use the subprocess module's Popen to set stdout and stderr to a file descriptor or a pipe.
For example:
>>> import subprocess
>>> child1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
>>> print(child1.stdout.readlines())
You can replace subprocess.PIPE with any valid file descriptor you opened for writing, or you could pick out some lines and write them to the file yourself. It's your call.
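For example, a sketch of redirecting a child's output straight to a file, which is what the original code tried to do through the shell (the filename here is just the local arch_list from the question):
import subprocess

with open('arch_list', 'w') as f:
    subprocess.call(['uname', '-p'], stdout=f, stderr=subprocess.STDOUT)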
