I have written a C program that converts one file format to another. It takes one command line argument: filestem.
I run the executable like this: ./executable_file filestem > outputfile
and my desired output ends up inside outputfile.
Now I want to run that executable from within a Python script.
I am trying this:
import subprocess
import sys
filestem = sys.argv[1]
subprocess.run(['/home/dev/executable_file', filestem, 'outputfile'])
But it does not create outputfile. I think something is needed to handle the > redirection, but I can't figure out what. Please help.
subprocess.run has an optional stdout argument; you can pass it a file handle, so in your case something like
import subprocess
import sys
filestem = sys.argv[1]
with open('outputfile','wb') as f:
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f)
should work. I don't have the ability to test it, so please run it and report back whether it works as intended.
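If you also want the script to fail loudly when the converter exits with a non-zero status, subprocess.run accepts check=True; a small variation on the snippet above, using the same assumed path:
import subprocess
import sys

filestem = sys.argv[1]
with open('outputfile', 'wb') as f:
    # check=True raises CalledProcessError if the executable exits with a non-zero status
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f, check=True)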
You have several options:
NOTE - Tested in CentOS 7, using Python 2.7
1. Try pexpect:
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import pexpect
import sys

filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
command_output, exitstatus = pexpect.run("/usr/bin/bash -c '{0}'".format(cmd), withexitstatus=True)
if exitstatus == 0:
    print(command_output)
else:
    print("Houston, we've had a problem.")
2. Run subprocess with shell=True (not recommended):
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys
import subprocess
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
result = subprocess.check_output(cmd, shell=True)  # or subprocess.call(cmd, shell=True)
print(result)
It works, but python.org frowns upon this, due to the chance of a shell injection: see "Security Considerations" in the subprocess documentation.
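To illustrate the risk: if the argument ever came from an untrusted source, the shell would happily run whatever extra commands were spliced in. A minimal sketch of the safer list-of-arguments form for the same ls -lh >> outputfile example (no shell involved):
import subprocess

filestem = "some_directory"  # hypothetical value; in the real script it comes from sys.argv[1]
# A malicious value like "; rm -rf ~" would be passed to ls as a literal argument here,
# instead of being interpreted by a shell.
with open("outputfile", "ab") as f:
    subprocess.call(["ls", "-lh", filestem], stdout=f)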
3. If you must use subprocess, run each command separately and pipe the STDOUT of the previous command into the STDIN of the next command:
from subprocess import Popen, PIPE

p1 = Popen(cmd1, stdout=PIPE)
p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # lets p1 receive SIGPIPE if p2 exits first
stdout_data, stderr_data = p2.communicate()
etc...
Good luck with your code!
I have a cmd file "file.cmd" containing hundreds of lines of commands.
Example
pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"
I am trying to run these commands using a script so that I don't have to go to a file and click on it.
This is my code, and it results in no output:
import os

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)
You don't need to read the cmd file line by line. You can simply try the following:
import os
os.system('myfile.cmd')
or using the subprocess module:
import subprocess
p = subprocess.Popen(['myfile.cmd'], shell=True, close_fds=True)
stdout, stderr = p.communicate()  # both are None here, since no PIPEs were requested
Example:
myfile.cmd:
@ECHO OFF
ECHO Greetings From Python!
PAUSE
script.py:
import os
os.system('myfile.cmd')
The cmd will open with:
Greetings From Python!
Press any key to continue ...
You can debug the issue by checking the return code:
import os
return_code=os.system('myfile.cmd')
assert return_code == 0 #asserts that the return code is 0 indicating success!
Note: os.system works by calling system() in C, which can only take up to 65533 arguments after a command (a 16-bit limitation). Giving one more argument will result in the return code 32512 (which corresponds to the exit code 127, since the exit code is stored in the high byte: 127 << 8 == 32512).
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).
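For comparison, a minimal subprocess equivalent of the os.system call above (assuming Python 3.5+ for subprocess.run):
import subprocess

# Behaves like os.system('myfile.cmd'), but raises CalledProcessError on a non-zero exit code
subprocess.run('myfile.cmd', shell=True, check=True)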
Since it is a command file (.cmd) and only the shell can run it, the shell argument must be set to True. And since you are setting shell to True, the command needs to be a string, not a list.
Use the Popen method to spawn a new process and communicate() to wait on that process (you can also give it a timeout). If you wish to communicate with the child process, provide the PIPEs (see my example below, but you don't have to!).
The code below is for Python 3.3 and beyond:
import subprocess

try:
    proc = subprocess.Popen('myfile.cmd', shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    outs, errs = proc.communicate(timeout=15)  # timing out the execution, just if you want, you don't have to!
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
For older Python versions:
import subprocess
import time

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
proc.kill()
In both cases (Python versions), if you don't need the timeout feature and you don't need to interact with the child process, then just use:
proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()
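On Python 3.5 and later, the same wait-with-timeout pattern can be written more compactly with subprocess.run, which waits for the command and kills it if the timeout expires; a sketch under the same assumptions:
import subprocess

try:
    subprocess.run('myfile.cmd', shell=True, timeout=15)
except subprocess.TimeoutExpired:
    print('myfile.cmd took longer than 15 seconds')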
I want to give cmd an automated input command. Here is my code:
import os
import subprocess
from subprocess import Popen, PIPE
p = subprocess.call("cmd",shell=True)
p = Popen('cmd', stdin=PIPE) # NOTE: no shell=True here
p.communicate(os.linesep.join(["apktool d aalpha.apk"]))
This opens cmd for me in the project directory, i.e. E:\myproject. I have apktool in that project directory, and I am trying to run it automatically by providing the apktool command, so that I just open my Python file and it executes apktool.
Are you looking for something like this:
import subprocess

commandA = 'start <path\file.png>'
p = subprocess.Popen(commandA, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
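If the goal is to run the apktool command itself rather than to drive an interactive cmd window, a minimal sketch (assuming apktool, or apktool.bat, is on PATH and aalpha.apk is in E:\myproject):
import subprocess

# Run apktool directly in the project directory; shell=True lets cmd resolve apktool.bat
subprocess.run('apktool d aalpha.apk', shell=True, cwd=r'E:\myproject')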
I'm trying to execute a Perl script from within a Python script. My code is as below:
command = "/path/to/perl/script/" + "script.pl"
input = "< " + "/path/to/file1/" + sys.argv[1] + " >"
output = "/path/to/file2/" + sys.argv[1]
subprocess.Popen(["perl", command, "/path/to/file1/", input, output])
When I execute the Python script, it returns:
No info key.
All paths leading to the Perl script, as well as to the files, are correct.
My Perl script is normally executed with the command:
perl script.pl /path/to/file1/ < input > output
Any advice on this is much appreciated.
The analog of the shell command:
#!/usr/bin/env python
from subprocess import check_call
check_call("perl script.pl /path/to/file1/ < input > output", shell=True)
is:
#!/usr/bin/env python
from subprocess import check_call
with open('input', 'rb', 0) as input_file, \
open('output', 'wb', 0) as output_file:
check_call(["perl", "script.pl", "/path/to/file1/"],
stdin=input_file, stdout=output_file)
To avoid the verbose code, you could use plumbum to emulate a shell pipeline:
#!/usr/bin/env python
from plumbum.cmd import perl  # $ pip install plumbum
((perl["script.pl", "/path/to/file1"] < "input") > "output")()
Note: only the code example with shell=True runs a shell. The 2nd and 3rd examples do not use a shell.
I have a Python script that needs to interact with the user via the command line, while logging whatever is output.
I currently have this:
# lots of code
popen = subprocess.Popen(
args,
shell=True,
stdin=sys.stdin,
stdout=sys.stdout,
stderr=sys.stdout,
executable='/bin/bash')
popen.communicate()
# more code
This executes a shell command (e.g. adduser newuser02) just as it would when typing it into a terminal, including interactive behavior. This is good.
Now, I want to log, from within the Python script, everything that appears on the screen. But I can't seem to make that part work.
I've tried various ways of using subprocess.PIPE, but this usually messes up the interactivity, like not outputting prompt strings.
I've also tried various ways to directly change the behavior of sys.stdout, but as subprocess writes to sys.stdout.fileno() directly, this was all to no avail.
Popen might not be very suitable for interactive programs due to buffering issues and due to the fact that some programs write/read directly from a terminal e.g., to retrieve a password. See Q: Why not just use a pipe (popen())?.
If you want to emulate script utility then you could use pty.spawn(), see the code example in Duplicating terminal output from a Python subprocess or in log syntax errors and uncaught exceptions for a python subprocess and print them to the terminal:
#!/usr/bin/env python
import os
import pty
import sys
with open('log', 'ab') as file:
    def read(fd):
        data = os.read(fd, 1024)
        file.write(data)
        file.flush()
        return data

    pty.spawn([sys.executable, "test.py"], read)
Or you could use pexpect for more flexibility:
import sys
import pexpect # $ pip install pexpect
with open('log', 'ab') as fout:
    p = pexpect.spawn("python test.py")
    p.logfile = fout  # or .logfile_read
    p.interact()
If your child process doesn't buffer its output (or it doesn't interfere with the interactivity) and it prints its output to its stdout or stderr then you could try subprocess:
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE, STDOUT
with open('log', 'ab') as file:
    p = Popen([sys.executable, '-u', 'test.py'],
              stdout=PIPE, stderr=STDOUT,
              close_fds=True,
              bufsize=0)
    for c in iter(lambda: p.stdout.read(1), ''):
        for f in [sys.stdout, file]:
            f.write(c)
            f.flush()
    p.stdout.close()
    rc = p.wait()
To read both stdout/stderr separately, you could use teed_call() from Python subprocess get children's output to file and terminal?
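For completeness, here is a minimal sketch of reading stdout and stderr separately with one thread per pipe, under the same assumptions as above (Python 3 and a child script called test.py):
import sys
import threading
from subprocess import Popen, PIPE

def tee(pipe, *sinks):
    # Copy everything from the child's pipe to each sink (screen and log file)
    for chunk in iter(lambda: pipe.read(1), b''):
        for sink in sinks:
            sink.write(chunk)
            sink.flush()
    pipe.close()

with open('log', 'ab') as logfile:
    p = Popen([sys.executable, '-u', 'test.py'], stdout=PIPE, stderr=PIPE, bufsize=0)
    threads = [threading.Thread(target=tee, args=(p.stdout, sys.stdout.buffer, logfile)),
               threading.Thread(target=tee, args=(p.stderr, sys.stderr.buffer, logfile))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    p.wait()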
This should work
import subprocess
f = open('file.txt','w')
cmd = ['echo','hello','world']
subprocess.call(cmd, stdout=f)
I want to make a python script that will convert a TEX file to PDF and then open the output file with my document viewer.
I first tried the following:
import subprocess
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
This way, the conversion from TEX to PDF works all right, but, as it takes some time, the second command (open file with Document Viewer) is executed before the output file is created.
So, I tried do make the program wait some seconds before executing the second command. Here's what I've done:
import subprocess
import time
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
time.sleep(10)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
But, when I do so, the output PDF file is not created. I can't understand why. The only change was the time.sleep command. Why does it affect the Popen process?
Could anyone give me some help?
EDIT:
I've followed the advice from Faust and Paulo Bu and in both cases the result is the same.
When I run this command...
subprocess.call('xelatex --output-directory=Alunos/ Alunos/{}_pratica.tex'.format(aluno), shell=True)
... or this...
p = subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
...the Xelatex program is run but doesn't make the conversion.
Strangely, when I run the command directly in the shell...
$ xelatex --output-directory=Alunos/ Alunos/name_pratica.tex
... the conversion works perfectly.
Here's what I get when I run the subprocess.call() command:
$ python my_file.py
Enter name:
name
This is XeTeX, Version 3.1415926-2.4-0.9998 (TeX Live 2012/Debian)
restricted \write18 enabled.
entering extended mode
(./Alunos/name_pratica.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, loaded.
)
*
When I write the command directly in the shell, the output is the same, but it is automatically followed by the conversion.
Does anyone know why it happens this way?
PS: sorry for the bad formatting. I don't know how to post the shell output properly.
If you need to wait for the termination of the program and you are not interested in its output, you should use subprocess.call:
import subprocess
subprocess.call(['xelatex', '--output-directory=Alunos/', 'Alunos/{}_pratica.tex'.format(aluno)])
subprocess.call(['gnome-open', 'Alunos/{}_pratica.pdf'.format(aluno)])
EDIT:
Also it is generally a good thing to use English when you have to name variables or functions.
If the xelatex command works in a shell but fails when you call it from Python, then xelatex might be blocked on output in your Python code: you do not read the pipes despite setting stdout/stderr to PIPE. On my machine the pipe buffer is 64KB, so if the xelatex output is smaller than that it should not block.
You could redirect the output to os.devnull instead:
import os
import webbrowser
from subprocess import STDOUT, check_call
try:
from subprocess import DEVNULL # py3k
except ImportError:
DEVNULL = open(os.devnull, 'w+b')
basename = aluno + '_pratica'
output_dir = 'Alunos'
root = os.path.join(output_dir, basename)
check_call(['xelatex', '--output-directory', output_dir, root+'.tex'],
stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
webbrowser.open(root+'.pdf')
check_call is used to wait for xelatex and raise an exception on error.
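If you'd rather handle a failed conversion yourself instead of letting the exception propagate, a small variation using the same names as above:
from subprocess import CalledProcessError

try:
    check_call(['xelatex', '--output-directory', output_dir, root + '.tex'],
               stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
except CalledProcessError as e:
    print('xelatex failed with exit status', e.returncode)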