How to execute a command line in a Python loop?

I am trying to determine the best way to execute something on the command line using Python. I have accomplished this with subprocess.Popen() on individual files. However, I am trying to determine the best way to do this many times with numerous different files. I am not sure whether I should create a batch file and execute that from the command line, or whether I am simply missing something in my code. Novice coder here, so I apologize in advance. The script below returns a return code of 1 when I use a loop, but 0 when not in a loop. What is the best approach for the task at hand?
import subprocess
import sys

def check_output(command, console):
    if console == True:
        process = subprocess.Popen(command)
    else:
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                                   stderr=subprocess.STDOUT, universal_newlines=True)
    output, error = process.communicate()
    returncode = process.poll()
    return returncode, output, error

for file in fileList.split(";"):
    ...code to create command string...
    returncode, output, error = check_output(command, False)
    if returncode != 0:
        print("Process failed")
        sys.exit()
EDIT: An example command string looks like this:
C:\Path\to\executable.exe -i C:\path\to\input.ext -o C:\path\to\output.ext
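For reference, here is a minimal sketch of this loop using subprocess.run (Python 3.5+); the command list below is hypothetical, built from the example paths above, and the "-o" output name is a placeholder:

import subprocess
import sys

for file in fileList.split(";"):
    # Hypothetical command built from the example above; adjust paths as needed.
    command = [r"C:\Path\to\executable.exe", "-i", file, "-o", file + ".out"]
    result = subprocess.run(command, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, universal_newlines=True)
    if result.returncode != 0:
        print("Process failed:", result.stdout)
        sys.exit(1)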

Try using the commands module (only available before Python 3):
>>> import commands
>>> commands.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')
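On Python 3, the same helper lives in the subprocess module, so the snippet ports directly:
>>> import subprocess
>>> subprocess.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')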
Your code might look like this:
import commands
import sys

def runCommand(command):
    ret, output = commands.getstatusoutput(command)
    if ret != 0:
        sys.stderr.writelines("Error: " + output)
    return ret

for file in fileList.split(';'):
    commandStr = ""
    # Create command string
    if runCommand(commandStr):
        print("Command '%s' failed" % commandStr)
        sys.exit(1)
You are not entirely clear about the problem you are trying to solve. If I had to guess why your command is failing in the loop, it's probably the way you handle the console=False case.

If you are merely running commands one after another, then it is probably easiest to set Python aside and put your commands into a bash script. I assume you merely want to check for errors and abort if one of the commands fails.
#!/bin/bash
function abortOnError(){
    "$@"
    local rc=$?
    if [ $rc -ne 0 ]; then
        echo "The command $1 failed with error code $rc"
        exit 1
    fi
}
abortOnError ls /randomstringthatdoesnotexist
echo "Hello World" # This will never print, because we aborted
Update: the OP updated his question with sample data that indicates he is on Windows.
You can get bash for Windows through Cygwin or various other packages, but it may make more sense to use PowerShell if you are on Windows. Unfortunately, I do not have a Windows box, but there should be a similar mechanism for error checking. Here is a reference for PowerShell error handling.

You might consider using subprocess.call:
from subprocess import call

for file_name in file_list:
    call_args = 'command ' + file_name
    call_args = call_args.split()  # because call takes a list of strings
    call(call_args)
It also returns 0 on success and the command's nonzero exit code on failure.
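For example, a sketch that stops at the first failure (file_list and the command name are placeholders):

from subprocess import call
import sys

for file_name in file_list:
    returncode = call(['command', file_name])  # 'command' is a placeholder
    if returncode != 0:
        print("Command failed on %s" % file_name)
        sys.exit(returncode)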

What your code is trying to accomplish is to run a command on a file and exit the script if there's an error. subprocess.check_output accomplishes this: if the subprocess exits with a nonzero code, it raises a subprocess.CalledProcessError. Depending on whether you want to explicitly handle errors, your code would look something like this:
for file in fileList.split(";"):
    ...code to create command string...
    subprocess.check_output(command, shell=True)
Which will execute the command and print the shell error message if there is one, or
for file in fileList.split(";"):
    ...code to create command string...
    try:
        subprocess.check_output(command, shell=True)
    except subprocess.CalledProcessError:
        ...handle errors...
        sys.exit(1)
Which will print the shell error code and exit, as in your script.
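If you want the error details, note that the raised subprocess.CalledProcessError carries the exit code and (for check_output) the captured output; a sketch of an except block using them:

import subprocess
import sys

try:
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    # e.returncode is the exit status; e.output holds whatever the command printed.
    print("Command failed with code %d: %s" % (e.returncode, e.output))
    sys.exit(1)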

Related

Subprocess.run vs. running the command in cmd directly

I am getting a weird "Access denied - \" error/warning when I run the following script:
import os
import subprocess
directory = r'S:\ome\directory'
subprocess.run(['find', '/i', '"error"', '*.txt', '>Errors.log'], shell=True, cwd=directory)
I have also checked:
print(os.access(directory, os.R_OK)) # prints True
print(os.access(directory, os.W_OK)) # prints True
and they both print True.
The error message is printed while the subprocess command is running, but the process is not killed and nothing is raised. As a result, wrapping it in a try-except, even without specifying the exception, does not catch anything. When the process finishes, the file (Errors.log) is created but contains the wrong results.
Running the exact same command (find /i "error" *.txt >Errors.log) from a cmd opened in the specified directory produces the correct results.
So in what way are the two approaches different?
Approach 1 (from Python):
subprocess.run(['find', '/i', '"error"', '*.txt', '>Errors.log'], shell=True, cwd=r'S:\ome\directory')
Approach 2 (from cmd):
S:\ome\directory>find /i "error" *.txt >Errors.log
I am still not sure what exactly the problem is but changing:
subprocess.run(['find', '/i', '"error"', '*.txt', '>Errors.log'], shell=True, cwd=r'S:\ome\directory')
to:
subprocess.run('find /i "error" *.txt >Errors.log', shell=True, cwd=directory)
does the trick.
As it appears, manually stitching the command together works. If anybody has more info on the matter, I would be very grateful.
From the Popen constructor documentation (Popen is what subprocess.run calls):
On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings.
The problem is that CreateProcess does not support redirection.
See: How do I redirect output to a file with CreateProcess?
Try passing an output file handle as the stdout argument:
import shlex
import subprocess

with open("Errors.log", "w") as output_fh:
    # command = ['find', '/i', '"fatal"', '*.txt']
    command = shlex.split(r'find /i \"fatal\" *.txt')
    try:
        subprocess.run(command, shell=True, stdout=output_fh)
    except subprocess.CalledProcessError:
        pass
Perhaps this is the only way to implement your task, because subprocess.run does not perform redirections (> or >>) passed in its argument list; redirection is a shell feature, not something CreateProcess understands.

python subprocess.Popen stdin.write

I'm new to Python and would like to open a Windows cmd prompt, start a process, leave the process running, and then issue commands to that same running process.
The commands will change, so I can't just include them in the cmdline variable below. Also, the process takes 10-15 seconds to start, so I don't want to waste time waiting for the process to start and run commands each time. I just want to start the process once and run quick commands as needed in the same process.
I was hoping to use subprocess.Popen to make this work, though I am open to better methods. Note that my process to run is not cmd; I'm just using it as an example.
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi")  # would like this to be written to the cmd prompt
print cmd.stdout.readline()  # would like to see 'hi' readback
cmd.stdin.write("echo hi again")  # would like this to be written to the cmd prompt
print cmd.stdout.readline()  # would like to see 'hi again' readback
The results aren't what I expect. It seems as though the stdin.write commands aren't actually getting in, and the readline freezes up with nothing to read.
I have tried popen.communicate() instead of write/readline, but it kills the process. I have tried setting bufsize in the Popen line, but that didn't make much difference.
Your comments suggest that you are confusing command-line arguments with input via stdin. Namely, the fact that the system-console.exe program accepts a script=filename parameter does not imply that you can send it the same string as a command via stdin; e.g., the python executable accepts -c "print(1)" as command-line arguments, but it is a SyntaxError if you pass the same string as a command to the Python shell.
Therefore, the first step is to use the correct syntax. Suppose system-console.exe accepts a filename by itself:
#!/usr/bin/env python3
import time
from subprocess import Popen, PIPE

with Popen(r'C:\full\path\to\system-console.exe -cli -',
           stdin=PIPE, bufsize=1, universal_newlines=True) as shell:
    for _ in range(10):
        print('capture.tcl', file=shell.stdin, flush=True)
        time.sleep(5)
Note: if you've redirected more than one stream (e.g., stdin and stdout), then you should read/write both streams concurrently (e.g., using multiple threads); otherwise it is very easy to deadlock your program.
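A minimal sketch of that pattern, using cmd /k only as a stand-in child process: a daemon thread drains stdout while the main thread writes to stdin:

import subprocess
import threading

proc = subprocess.Popen(['cmd', '/k'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)

def drain():
    # Keep reading so the child's stdout pipe never fills and blocks it.
    for line in proc.stdout:
        print(line, end='')

threading.Thread(target=drain, daemon=True).start()
proc.stdin.write('echo hi\n')
proc.stdin.flush()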
Related:
Q: Why not just use a pipe (popen())? -- mandatory reading for Unix environment but it might also be applicable for some programs on Windows
subprocess readline hangs waiting for EOF -- code example on how to pass multiple inputs, read multiple outputs using subprocess, pexpect modules.
The second and following steps may have to deal with buffering issues on the side of the child process (out of your hands on Windows), with whether system-console allows its stdin/stdout to be redirected or works with the console directly, and with character-encoding issues (how the various commands in the pipeline encode text).
Here is some code that I tested and that works on Windows 10, Quartus Prime 15.1 and Python 3.5:
import subprocess

class altera_system_console:
    def __init__(self):
        sc_path = r'C:\altera_lite\15.1\quartus\sopc_builder\bin\system-console.exe --cli --disable_readline'
        self.console = subprocess.Popen(sc_path, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def read_output(self):
        # Read one byte at a time until the console prints its '% ' prompt.
        rtn = ""
        loop = True
        i = 0
        match = '% '
        while loop:
            out = self.console.stdout.read1(1)
            if bytes(match[i], 'utf-8') == out:
                i = i + 1
                if i == len(match):
                    loop = False
            else:
                rtn = rtn + out.decode('utf-8')
        return rtn

    def cmd(self, cmd_string):
        # Send one command line and flush so the child actually receives it.
        self.console.stdin.write(bytes(cmd_string + '\n', 'utf-8'))
        self.console.stdin.flush()

c = altera_system_console()
print(c.read_output())
c.cmd('set jtag_master [lindex [get_service_paths master] 0]')
print(c.read_output())
c.cmd('open_service master $jtag_master')
print(c.read_output())
c.cmd('master_write_8 $jtag_master 0x00 0xFF')
print(c.read_output())
You need to use iter if you want to see the output in real time:
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi\n")  # would like this to be written to the cmd prompt
for line in iter(cmd.stdout.readline, ""):
    print line
cmd.stdin.write("echo hi again\n")  # would like this to be written to the cmd prompt
Not sure exactly what you are trying to do, but if you want to send certain input when you see certain output, then I would recommend using pexpect.
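For illustration, a pexpect sketch (POSIX only; the child process, prompt pattern, and command here are placeholders):

import pexpect

child = pexpect.spawn('python', ['-i'])   # start an interactive child
child.expect('>>> ')                      # wait until it prints its prompt
child.sendline('print(1 + 1)')            # send a command
child.expect('>>> ')                      # wait for the next prompt
print(child.before.decode())              # output produced before the prompt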

Python call makefile compiling and capture make's output

A job in Jenkins calls my Python script, in which I want to call make to compile my UT code, and call sys.exit(1) if a compiling error like "make: *** [ ] Error 1" occurs.
I also need to print output in real time.
Here's my Python script:
import subprocess
import sys

make_process = subprocess.Popen("make clean all;", shell=True, stdout=subprocess.PIPE, stderr=sys.stdout.fileno())
while True:
    line = make_process.stdout.readline()
    if not line:
        break
    print line,  # output to console in time
    sys.stdout.flush()
But how do I capture the make error and let this Python script fail?
Note:
make_process.wait() returns 0 even when a make error happens.
This answer doesn't work for me:
Running shell command from Python and capturing the output
Update:
It turned out to be a makefile issue; see the comments on the accepted answer. But for this Python question, pentadecagon gave the right answer.
You can check the return value of the make process with
make_process.poll()
This returns None if the process is still running, or the exit code once it has finished. If you just want the output to end up on the console, there is no need to read it manually; the output goes to the console anyway, and you can check for errors like this:
import subprocess

make_process = subprocess.Popen("make clean all", shell=True, stderr=subprocess.STDOUT)
if make_process.wait() != 0:
    something_went_wrong()
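On Python 3.5+, subprocess.run with check=True expresses the same check more directly; a sketch:

import subprocess
import sys

try:
    subprocess.run("make clean all", shell=True, check=True)
except subprocess.CalledProcessError as e:
    print("make failed with exit code", e.returncode)
    sys.exit(1)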

How can I silence all of the output from a particular Python command?

Autodesk Maya 2012 provides "mayapy" - a modded build of Python filled with the packages necessary to load Maya files and act as a headless 3D editor for batch work. I'm calling it from a bash script. If that script opens a scene file with cmds.file(filepath, open=True), it spews pages of warnings, errors, and other info I don't want. I want to turn all of that off only while the cmds.file command is running.
I've tried redirecting from inside of the Python commands I'm sending into mayapy inside the shell script, but that doesn't work. I can silence everything by redirecting stdout/err to /dev/null in the call to the bash script. Is there any way to silence it in the call to the shell, but still allow my passed-in command inside the script to print out information?
test.sh:
#!/bin/bash
/usr/autodesk/maya/bin/mayapy -c "
cmds.file('filepath', open=True);
print 'hello'
"
calling it:
$ ./test.sh # spews info, then prints 'hello'
$ ./test.sh > /dev/null 2>&1 # completely silent
Basically, I think the best way to solve this is to implement a wrapper that executes test.sh and sanitizes the output to the shell. To sanitize the output, I would simply prepend some string marking the text that is good for output. My inspiration for the wrapper came from this answer: https://stackoverflow.com/a/4760274/2030274
The contents are as follows:
import subprocess

def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        retcode = p.poll()  # returns None while the subprocess is running
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break

for line in runProcess(['./test.sh']):
    if line.startswith('GARYFIXLER:'):
        print line,
Now you could imagine test.sh being something along the lines of
#!/bin/bash
/usr/autodesk/maya/bin/mayapy -c "
cmds.file('filepath', open=True);
print 'GARYFIXLER:hello'
"
and this will only print the hello line. Since we are wrapping the Python call in a subprocess, all output typically displayed in the shell should get captured, and you can intercept the lines that you don't want.
Of course, to call test.sh from a Python script, you need to make sure you have the correct permissions.
I knew I was just getting twisted around with pipes. Maya is indeed sending all batch output to stderr. This frees stdout entirely once you properly pipe stderr away. Here's an all-bash one-liner that works:
# load file in batch; divert Maya's output to /dev/null
# then print listing of things in file with cmds.ls()
/usr/autodesk/maya/bin/mayapy -c "import maya.standalone;maya.standalone.initialize(name='python');cmds.file('mayafile.ma', open=True);print cmds.ls()" 2>/dev/null
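If you ever drive this from Python instead of bash, the same diversion can be done with subprocess.DEVNULL (Python 3.3+); a sketch with the mayapy command shortened for brevity:

import subprocess

# Discard Maya's batch chatter on stderr; stdout stays free for our own prints.
subprocess.run(['/usr/autodesk/maya/bin/mayapy', '-c', "print('hello')"],
               stderr=subprocess.DEVNULL)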

Getting stdout to display called script containing input

I was looking to implement a Python script that calls another script and captures its stdout. The called script will contain some input and output messages, e.g.:
print ("Line 1 of Text")
variable = raw_input("Input 1 :")
print "Line 2 of Text Input: ", vairable
The section of the code I'm running is
import subprocess
cmd='testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
The problem is that stdout does not print until after the script has finished executing. This leaves a blank prompt waiting for the user input. Is there a way to get stdout to print while the called script is still running?
Thanks,
There are two problems here.
Firstly, Python is buffering output to stdout and you need to prevent this. You could insert a call to sys.stdout.flush() in testfile.py as Ilia Frenkel has suggested, or you could use python -u to execute testfile.py with unbuffered I/O. (See the other Stack Overflow question that Ilia linked to.)
Secondly, you need a way of asynchronously reading data from the sub-process and then, when it is ready for input, printing the data you've read so that the prompt for the user appears. For this, it would be very helpful to have an asynchronous version of the subprocess module.
I downloaded the asynchronous subprocess and re-wrote your script to use it, along with using python -u to get unbuffered I/O:
import async_subprocess as subprocess
cmd = ['python', '-u', 'testfile.py']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
so = p.asyncread()
print so,
(so, se) = p.communicate()
print so
When I run this script using python -u I get the following results:
$ python -u script.py
Line 1 of Text
Input 1:
and the script pauses, waiting for input. This is the desired result.
If I then type something (e.g. "Hullo") I get the following:
$ python -u script.py
Line 1 of Text
Input 1:Hullo
Line 2 of Text Input: Hullo
You don't really need to capture its stdout; just have the child program print out its stuff and quit, instead of feeding the output into your parent program and printing it there. If you need variable output, just use a function instead.
But anyways, that's not what you asked.
I actually got this from another Stack Overflow question:
import subprocess, sys

cmd = 'testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)
    if out == '' and p.poll() is not None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
First, it opens your process; then it continually reads the output from p and prints it to the screen using sys.stdout.write. The part that makes this all work is sys.stdout.flush(), which continually "flushes out" the program's output.
