python command substitution for linux commands

I am trying to use command substitution for building a linux command from a python script, but am not able to get the following simple example to work:
LS="/bin/ls -l"
FILENAME="inventory.txt"
cmd = "_LS _FILENAME "
ps= subprocess.Popen(cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
output = ps.communicate()[0]
print output
Thanks!
JB

Use string substitution:
cmd = '{} {}'.format(LS, FILENAME)
or (in Python 2.6, where explicit field numbers are required):
cmd = '{0} {1}'.format(LS, FILENAME)
import subprocess
import shlex

LS = "/bin/ls -l"
FILENAME = "inventory.txt"

cmd = '{} {}'.format(LS, FILENAME)
ps = subprocess.Popen(shlex.split(cmd),
                      stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT)
output, err = ps.communicate()
print(output)
Or, using the sh module:
import sh
FILENAME = 'inventory.txt'
print(sh.ls('-l', FILENAME, _err_to_out=True))

Related

Running a C executable inside a python program

I have written a C program that converts one file format to another. It takes one command-line argument: filestem.
I executed the program using: ./executable_file filestem > outputfile
and got my desired output inside outputfile.
Now I want to take that executable and run it within a Python script.
I am trying:
import subprocess
import sys
filestem = sys.argv[1];
subprocess.run(['/home/dev/executable_file', filestem , 'outputfile'])
But it is unable to create the outputfile. I think something should be added to handle the > redirection, but I am unable to figure out what. Please help.
subprocess.run has an optional stdout argument; you can give it a file handle, so in your case something like
import subprocess
import sys
filestem = sys.argv[1]
with open('outputfile', 'wb') as f:
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f)
should work. I don't have the ability to test it, so please run it and report whether it works as intended.
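One more safeguard worth adding to the sketch above: check=True makes a failing child visible instead of silently producing no output. A minimal sketch (`true` and `false` are stand-ins for the real executable):

```python
import subprocess

# check=True makes run() raise CalledProcessError on a non-zero exit,
# surfacing a failing child instead of silently producing an empty file.
# ("true" and "false" stand in for the real executable here.)
subprocess.run(["true"], check=True)   # exit status 0: no exception
try:
    subprocess.run(["false"], check=True)
except subprocess.CalledProcessError as e:
    print("child failed with exit code", e.returncode)
```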
You have several options:
NOTE - Tested in CentOS 7, using Python 2.7
1. Try pexpect:
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import pexpect
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
command_output, exitstatus = pexpect.run("/usr/bin/bash -c '{0}'".format(cmd), withexitstatus=True)
if exitstatus == 0:
print(command_output)
else:
print("Houston, we've had a problem.")
2. Run subprocess with shell=True (not recommended):
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys
import subprocess
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
result = subprocess.check_output(shlex.split(cmd), shell=True) # or subprocess.call(cmd, shell=True)
print(result)
It works, but python.org frowns upon this due to the risk of shell injection: see "Security Considerations" in the subprocess documentation.
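To see why the shell form is risky, compare it with the list form. A sketch with a hypothetical malicious filename (the dangerous call is left commented out):

```python
import shlex
import subprocess

# Hypothetical malicious input: with shell=True this would run `rm -rf /`.
filename = "inventory.txt; rm -rf /"
# subprocess.check_output("ls -l " + filename, shell=True)  # DANGEROUS

# The list form bypasses the shell, so the string is passed to ls
# verbatim as a single argument and cannot inject a second command.
# subprocess.check_output(["ls", "-l", filename])  # just "No such file"

# shlex.split builds such a list from a trusted command string:
print(shlex.split("ls -l 'my file.txt'"))  # ['ls', '-l', 'my file.txt']
```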
3. If you must use subprocess, run each command separately, piping the STDOUT of the previous command into the STDIN of the next:
p1 = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
p2 = subprocess.Popen(cmd2, stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
stdout_data, stderr_data = p2.communicate()
etc...
Good luck with your code!

Parse filename from Dumpcap output

I am trying to parse the filename from the output of running dumpcap in a Linux terminal, in order to automatically attach it to an email. These are the relevant functions from a larger script. proc1, stdout, and eventfile are initialized to "" and DUMPCAP is the command-line string dumpcap -a duration:300 -b duration:2147483647 -c 500 -i 1 -n -p -s 2 -w test -B 20
def startdump():
    global DUMPCAP
    global dumpdirectory
    global proc1
    global stdout
    global eventfile
    setDumpcapOptions()
    print("dumpcap.exe = " + DUMPCAP)
    os.chdir(dumpdirectory)
    #subprocess.CREATE_NEW_CONSOLE
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stdout=subprocess.PIPE)
    for line in proc1:
        if 'File: ' in line:
            parsedfile = line.split(':')
            eventfile = parsedfile[1]
    if dc_mode == "Dumpcap Only":
        proc1.communicate()
        mail_man(config_file)
    return proc1

def startevent():
    global EVENT
    global proc1
    global eventfile
    setEventOptions()
    print(EVENT)
    # subprocess.CREATE_NEW_CONSOLE
    proc2 = subprocess.Popen(EVENT, shell=True, stdout=subprocess.PIPE)
    if dc_mode == "Dumpcap+Event" or dc_mode == "Trigger" or dc_mode == "Event Only":
        proc2 = proc1.communicate()
        mail_man(config_file)
    return proc2
The problem I keep having is that I can't figure out how to parse the filename from the output of dumpcap. It keeps parsing "" from the output no matter what I do. I apologize if this seems unresearched; I am a month into learning Python and Linux on my own, and the documentation online is terse and confusing.
Should I create a function to parse the eventfile from dumpcap's output, or do it right there in the script? I'm truly at a loss here. I'm not sure how dumpcap stores its output either.
The output of dumpcap in the terminal is:
dumpcap.exe = dumpcap -a duration:300 -b duration:2147483647 -c 500 -i 1 -n -p -s 2 -w test -B 20
-i 1 - f icmp and host 156.24.31.29 - c 2
/bin/sh: -i: command not found
Capturing on 'eth0'
File: test_00001_20150714141827
Packets captured: 500
Packets received/dropped on interface 'eth0': 500/0 (pcap:0/dumpcap:0/flushed:0/ps_ifdrop:0) (100.0%)
[Errno 2] No such file or directory: ''
The line File: ... contains the randomly generated name of the pcap file saved by dumpcap. I am trying to parse that line from the terminal output to set everything after File: to a variable, but the conventional .split method doesn't seem to be working.
The other error it gives is that a Popen object cannot be indexed.
It looks like basically you need a regexp.
import re

rx = re.compile(r'^File: (\S+)$', re.MULTILINE)
m = rx.search(stdout_contents)
if m:
    file_name = m.group(1)
# else: file name not detected.
Update: a minimal complete example of reading pipe's stdout; hope this helps.
import subprocess

proc = subprocess.Popen("echo foo", shell=True, stdout=subprocess.PIPE)
result = proc.communicate()
print result[0]  # must be `foo\n`
Dumpcap outputs to its stderr as opposed to stdout. So I've managed to redirect the stderr to a txt file which I can then parse.
def startdump():
    global DUMPCAP, dumpdirectory, proc1
    global eventfile, dc_capfile
    DUMPCAP = ''
    print("========================[ MAIN DUMPCAP MONITORING ]===========================")
    setDumpcapOptions()
    os.chdir(dumpdirectory)
    if platform == "Linux":
        DUMPCAP = "dumpcap " + DUMPCAP
    elif platform == "Windows":
        DUMPCAP = "dumpcap.exe " + DUMPCAP
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stderr=subprocess.PIPE)
    #procPID = proc1.pid
    if dc_mode == "Dumpcap Only":
        time.sleep(5)
        with open("proc1stderr.txt", 'w+') as proc1stderr:
            proc1stderr.write(str(proc1.stderr))
            for line in proc1.stderr:
                print("%s" % line)
                if "File:" in line:
                    print(line)
                    raweventfile = line.split('File: ')[1]
                    eventfile = raweventfile.strip('\[]').rstrip('\n')
        mail_man()
        proc1.communicate()

Linux Command for Perl in Python

I am trying to run this Linux command for Perl (v5.10.1) from Python (v2.6):
perl tilt.pl *.pdb > final.txt
What do I do so that I can apply the Perl script to every PDB file? Examples would be best.
My current script is this:
import shlex, subprocess
arg_str = "perl tilt.pl frames > final.txt"
arg = shlex.split(arg_str)
print(arg)

import os
framespdb = os.listdir("prac_frames")
for frames in framespdb:
    subprocess.Popen(arg, stdout=True)
You can use shell=True
BTW: I think you are trying to put the variable frames in place of the literal text frames in the command, so I use %s and line % frames to do this.
import os
import subprocess

line = "perl tilt.pl %s > final.txt"
framespdb = os.listdir("prac_frames")
for frames in framespdb:
    cmd = line % frames
    print(cmd)
    subprocess.Popen(cmd, shell=True)
EDIT: if you need the results in different files, you can again use %s to add a unique filename to the command - for example:
import os
import subprocess

line = "perl tilt.pl %s > final-%s.txt"  # second `%s`
framespdb = os.listdir("prac_frames")
for frames in framespdb:
    cmd = line % (frames, frames)  # second `frames`
    print(cmd)
    subprocess.Popen(cmd, shell=True)
EDIT: normally in the shell you can either show the result on screen or redirect it to a file (> file.txt). To show the text on screen and save it to a file at the same time, you need the shell command tee and the Python function subprocess.check_output():
import os
import subprocess

line = "perl tilt.pl %s | tee final-%s.txt"
framespdb = os.listdir("prac_frames")
for frames in framespdb:
    cmd = line % (frames, frames)
    print(cmd)
    output = subprocess.check_output(cmd, shell=True)
    print 'output:', output
You can make a subprocess call behave like the shell by overriding the shell parameter:
subprocess.call(arg_str, shell=True)
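If you would rather avoid shell=True entirely, the > redirection can be reproduced by opening the output file in Python and passing it as stdout. A sketch (`ls -l` stands in for the perl command here):

```python
import subprocess

# The ">" belongs to the shell; without shell=True you reproduce it by
# opening the target file yourself and handing it to the child as stdout.
# ("ls -l" stands in for "perl tilt.pl frames" here.)
with open("final.txt", "w") as out:
    subprocess.call(["ls", "-l"], stdout=out)
```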

Wrapping bash scripts in python

I just found this great wget wrapper and I'd like to rewrite it as a Python script using the subprocess module. However, it turns out to be quite tricky, giving me all sorts of errors.
download()
{
local url=$1
echo -n " "
wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
echo -ne "\b\b\b\b"
echo " DONE"
}
Then it can be called like this:
file="patch-2.6.37.gz"
echo -n "Downloading $file:"
download "http://www.kernel.org/pub/linux/kernel/v2.6/$file"
Any ideas?
Source: http://fitnr.com/showing-file-download-progress-using-wget.html
I think you're not far off. Mainly I'm wondering, why bother with running pipes into grep and sed and awk when you can do all that internally in Python?
#! /usr/bin/env python
import re
import subprocess

TARGET_FILE = "linux-2.6.0.tar.xz"
TARGET_LINK = "http://www.kernel.org/pub/linux/kernel/v2.6/%s" % TARGET_FILE

wgetExecutable = '/usr/bin/wget'
wgetParameters = ['--progress=dot', TARGET_LINK]

wgetPopen = subprocess.Popen([wgetExecutable] + wgetParameters,
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(wgetPopen.stdout.readline, b''):
    match = re.search(r'\d+%', line)
    if match:
        print '\b\b\b\b' + match.group(0),

wgetPopen.stdout.close()
wgetPopen.wait()
If you are rewriting the script in Python, you could replace wget with urllib.urlretrieve() in this case:
#!/usr/bin/env python
import os
import posixpath
import sys
import urllib
import urlparse

def url2filename(url):
    """Return basename corresponding to url.
    >>> url2filename('http://example.com/path/to/file?opt=1')
    'file'
    """
    urlpath = urlparse.urlsplit(url).path  # pylint: disable=E1103
    basename = posixpath.basename(urllib.unquote(urlpath))
    if os.path.basename(basename) != basename:
        raise ValueError  # refuse 'dir%5Cbasename.ext' on Windows
    return basename

def reporthook(blocknum, blocksize, totalsize):
    """Report download progress on stderr."""
    readsofar = blocknum * blocksize
    if totalsize > 0:
        percent = readsofar * 1e2 / totalsize
        s = "\r%5.1f%% %*d / %d" % (
            percent, len(str(totalsize)), readsofar, totalsize)
        sys.stderr.write(s)
        if readsofar >= totalsize:  # near the end
            sys.stderr.write("\n")
    else:  # total size is unknown
        sys.stderr.write("read %d\n" % (readsofar,))

url = sys.argv[1]
filename = sys.argv[2] if len(sys.argv) > 2 else url2filename(url)
urllib.urlretrieve(url, filename, reporthook)
Example:
$ python download-file.py http://example.com/path/to/file
It downloads the URL to a file. If the filename is not given, it uses the basename from the URL.
You could also run wget if you need it:
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE, STDOUT

def urlretrieve(url, filename=None, width=4):
    destination = ["-O", filename] if filename is not None else []
    p = Popen(["wget"] + destination + ["--progress=dot", url],
              stdout=PIPE, stderr=STDOUT, bufsize=1)  # line-buffered (our side)
    for line in iter(p.stdout.readline, b''):
        if b'%' in line:  # grep "%"
            line = line.replace(b'.', b'')  # sed -u -e "s,\.,,g"
            percents = line.split(None, 2)[1].decode()  # awk $2
            sys.stderr.write("\b"*width + percents.rjust(width))
    p.communicate()  # close stdout, wait for child's exit
    print("\b"*width + "DONE")

url = sys.argv[1]
filename = sys.argv[2] if len(sys.argv) > 2 else None
urlretrieve(url, filename)
I have not noticed any buffering issues with this code.
I've done something like this before, and I'd love to share my code with you. :)
#!/usr/bin/python2.7
# encoding=utf-8
import sys
import os
import datetime

SHEBANG = "#!/bin/bash\n\n"

def get_cmd(editor='vim', initial_cmd=""):
    from subprocess import call
    from tempfile import NamedTemporaryFile
    # Create the initial temporary file.
    with NamedTemporaryFile(delete=False) as tf:
        tfName = tf.name
        tf.write(initial_cmd)
    # Fire up the editor.
    if call([editor, tfName], shell=False) != 0:
        return None  # Editor died or was killed.
    # Get the modified content.
    fd = open(tfName)
    res = fd.read()
    fd.close()
    os.remove(tfName)
    return res

def main():
    initial_cmd = "wget " + sys.argv[1]
    cmd = get_cmd(editor='vim', initial_cmd=initial_cmd)
    if len(sys.argv) > 1 and sys.argv[1] == 's':
        # keep the download information.
        t = datetime.datetime.now()
        filename = "swget_%02d%02d%02d%02d%02d" %\
            (t.month, t.day, t.hour, t.minute, t.second)
        with open(filename, 'w') as f:
            f.write(SHEBANG)
            f.write(cmd)
        os.chmod(filename, 0777)
    os.system(cmd)

main()
# Run this script with the optional argument 's'.
# Copy the command to the editor, then save and quit; it will
# begin to download. If you used the argument 's',
# this script will create another executable script that you
# can use to resume your interrupted download (if the server supports it).
So basically, you just need to modify initial_cmd's value; in your case, it's:
wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
This script first creates a temp file, then puts the shell commands in it and gives it execute permissions, and finally runs the temp file.
vim download.py
#!/usr/bin/env python
import subprocess
import os
sh_cmd = r"""
download()
{
local url=$1
echo -n " "
wget --progress=dot $url 2>&1 |
grep --line-buffered "%" |
sed -u -e "s,\.,,g" |
awk '{printf("\b\b\b\b%4s", $2)}'
echo -ne "\b\b\b\b"
echo " DONE"
}
download "http://www.kernel.org/pub/linux/kernel/v2.6/$file"
"""
cmd = 'sh'
p = subprocess.Popen(cmd,
                     shell=True,
                     stdin=subprocess.PIPE,
                     env=os.environ)
p.communicate(input=sh_cmd)

# or:
# p = subprocess.Popen(cmd,
#                      shell=True,
#                      stdin=subprocess.PIPE,
#                      env={'file': 'xx'})
# p.communicate(input=sh_cmd)

# or:
# p = subprocess.Popen(cmd, shell=True,
#                      stdin=subprocess.PIPE,
#                      stdout=subprocess.PIPE,
#                      stderr=subprocess.PIPE,
#                      env=os.environ)
# stdout, stderr = p.communicate(input=sh_cmd)
Then you can call it like:
file="xxx" python download.py
In very simple words, assuming you have a script.sh file, you can execute it and print its return value, if any:
import subprocess
process = subprocess.Popen('/path/to/script.sh', shell=True, stdout=subprocess.PIPE)
process.wait()
print process.returncode

Store output of subprocess.Popen call in a string [duplicate]

This question already has answers here:
Running shell command and capturing the output
(21 answers)
I'm trying to make a system call in Python and store the output to a string that I can manipulate in the Python program.
#!/usr/bin/python
import subprocess
p2 = subprocess.Popen("ntpq -p")
I've tried a few things including some of the suggestions here:
Retrieving the output of subprocess.call()
but without any luck.
In Python 2.7 or Python 3
Instead of making a Popen object directly, you can use the subprocess.check_output() function to store output of a command in a string:
from subprocess import check_output
out = check_output(["ntpq", "-p"])
In Python 2.4-2.6
Use the communicate method.
import subprocess
p = subprocess.Popen(["ntpq", "-p"], stdout=subprocess.PIPE)
out, err = p.communicate()
out is what you want.
Important note about the other answers
Note how I passed in the command. The "ntpq -p" example brings up another matter. Since Popen does not invoke the shell, you would use a list of the command and options—["ntpq", "-p"].
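To illustrate the failure mode, a minimal sketch (any common command shows it): without shell=True, a single string is treated as the program name, not a command line.

```python
import subprocess

# Without shell=True, a single string is taken as the program name, so
# Popen searches PATH for an executable literally called "echo hello":
try:
    subprocess.Popen("echo hello")
except OSError as e:  # FileNotFoundError on Python 3
    print("lookup failed:", e)
```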
This worked for me for redirecting stdout (stderr can be handled similarly):
from subprocess import Popen, PIPE
pipe = Popen(path, stdout=PIPE)
text = pipe.communicate()[0]
If it doesn't work for you, please specify exactly the problem you're having.
Python 2: http://docs.python.org/2/library/subprocess.html#subprocess.Popen
from subprocess import PIPE, Popen
command = "ntpq -p"
process = Popen(command, stdout=PIPE, stderr=None, shell=True)
output = process.communicate()[0]
print output
In the Popen constructor, if shell is True, you should pass the command as a string rather than as a sequence. Otherwise, just split the command into a list:
command = ["ntpq", "-p"]
process = Popen(command, stdout=PIPE, stderr=None)
If you also need to read the standard error, you should set stderr to PIPE or STDOUT in the Popen initialization:
command = "ntpq -p"
process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
output, error = process.communicate()
NOTE: Starting from Python 2.7, you could/should take advantage of subprocess.check_output (https://docs.python.org/2/library/subprocess.html#subprocess.check_output).
Python 3: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
from subprocess import PIPE, Popen
command = "ntpq -p"
with Popen(command, stdout=PIPE, stderr=None, shell=True) as process:
    output = process.communicate()[0].decode("utf-8")
    print(output)
NOTE: If you're targeting only versions of Python higher or equal than 3.5, then you could/should take advantage of subprocess.run (https://docs.python.org/3/library/subprocess.html#subprocess.run).
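On 3.5/3.6 specifically (before capture_output existed), run() still captures output if you pass PIPE explicitly; a minimal sketch:

```python
import subprocess

# On Python 3.5/3.6, pass stdout=subprocess.PIPE explicitly;
# the capture_output= shortcut only arrived in 3.7.
proc = subprocess.run(["echo", "hello"], stdout=subprocess.PIPE)
print(proc.stdout.decode("utf-8"))  # hello
```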
In Python 3.7+ you can use the new capture_output= keyword argument for subprocess.run:
import subprocess
p = subprocess.run(["echo", "hello world!"], capture_output=True, text=True)
assert p.stdout == 'hello world!\n'
Assuming that pwd is just an example, this is how you can do it:
import subprocess
p = subprocess.Popen("pwd", stdout=subprocess.PIPE)
result = p.communicate()[0]
print result
See the subprocess documentation for another example and more information.
For Python 2.7+ the idiomatic answer is to use subprocess.check_output().
You should also note the handling of arguments when invoking a subprocess, as it can be a little confusing...
If args is just a single command with no arguments of its own (or you have shell=True set), it can be a string. Otherwise it must be a list.
for example... to invoke the ls command, this is fine:
from subprocess import check_call
check_call('ls')
so is this:
from subprocess import check_call
check_call(['ls',])
however, if you want to pass some args to the shell command, you can't do this:
from subprocess import check_call
check_call('ls -al')
instead, you must pass it as a list:
from subprocess import check_call
check_call(['ls', '-al'])
the shlex.split() function can sometimes be useful to split a string into shell-like syntax before creating a subprocesses...
like this:
from subprocess import check_call
import shlex
check_call(shlex.split('ls -al'))
This works perfectly for me:
import subprocess

try:
    # prints results and merges stdout and stderr
    result = subprocess.check_output("echo %USERNAME%", stderr=subprocess.STDOUT, shell=True)
    print result
    # causes an error and merges stdout and stderr
    result = subprocess.check_output("copy testfds", stderr=subprocess.STDOUT, shell=True)
except subprocess.CalledProcessError, ex:  # error code <> 0
    print "--------error------"
    print ex.cmd
    print ex.message
    print ex.returncode
    print ex.output  # contains stdout and stderr together
This was perfect for me.
You will get the return code, stdout and stderr in a tuple.
from subprocess import Popen, PIPE

def console(cmd):
    p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    out, err = p.communicate()
    return (p.returncode, out, err)
For Example:
result = console('ls -l')
print 'returncode: %s' % result[0]
print 'output: %s' % result[1]
print 'error: %s' % result[2]
The accepted answer is still good, just a few remarks on newer features. Since Python 3.6, you can handle encoding directly in check_output; see the documentation. This returns a string object now:
import subprocess
out = subprocess.check_output(["ls", "-l"], encoding="utf-8")
In Python 3.7, a parameter capture_output was added to subprocess.run(), which does some of the Popen/PIPE handling for us; see the Python docs:
import subprocess
p2 = subprocess.run(["ls", "-l"], capture_output=True, encoding="utf-8")
p2.stdout
I wrote a little function based on the other answers here:
import subprocess

def pexec(*args):
    return subprocess.Popen(args, stdout=subprocess.PIPE).communicate()[0].rstrip()
Usage:
changeset = pexec('hg','id','--id')
branch = pexec('hg','id','--branch')
revnum = pexec('hg','id','--num')
print('%s : %s (%s)' % (revnum, changeset, branch))
import os
output = os.popen('pwd').read()
Note that read() returns the whole output as a single string (with a trailing newline), not a list.
import subprocess
output = str(subprocess.Popen("ntpq -p", shell=True, stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT).communicate()[0])
This is a one-line solution.
The following captures stdout and stderr of the process in a single variable. It is Python 2 and 3 compatible:
from subprocess import check_output, CalledProcessError, STDOUT

command = ["ls", "-l"]
try:
    output = check_output(command, stderr=STDOUT).decode()
    success = True
except CalledProcessError as e:
    output = e.output.decode()
    success = False
If your command is a string rather than an array, prefix this with:
import shlex
command = shlex.split(command)
Use the check_output method of the subprocess module:
import subprocess
address = '192.168.x.x'
res = subprocess.check_output(['ping', address, '-c', '3'])
Finally, parse the string:
for line in res.splitlines():
    print(line)  # process each line as needed
Hope it helps, happy coding!
For Python 3.5 I put together a function based on a previous answer. The logging may be removed, though it's nice to have:
import datetime
import shlex
from subprocess import check_output, CalledProcessError, STDOUT

def cmdline(command):
    log("cmdline:{}".format(command))
    cmdArr = shlex.split(command)
    try:
        output = check_output(cmdArr, stderr=STDOUT).decode()
        log("Success:{}".format(output))
    except CalledProcessError as e:
        output = e.output.decode()
        log("Fail:{}".format(output))
    except Exception as e:
        output = str(e)
        log("Fail:{}".format(e))
    return str(output)

def log(msg):
    msg = str(msg)
    d_date = datetime.datetime.now()
    now = str(d_date.strftime("%Y-%m-%d %H:%M:%S"))
    print(now + " " + msg)
    if "LOG_FILE" in globals():
        with open(LOG_FILE, "a") as myfile:
            myfile.write(now + " " + msg + "\n")
