Is there a way in PowerShell to redirect the print output from a Python script to one place and the return value to another?
For instance, I'm trying
$ python myPythonScript.py args >> logFile
I get the print statement output in my log file (though it looks awful; cleaning it will be the next job).
However, I do not get the return value from the Python script in the output, nor do I catch it if I use *>>.
Any ideas what's happening?
Python Example:
import sys

def main(args):
    print "This goes to file"
    #parse flags...
    #do something here...
    MyVar = "Please " + "Work"
    return MyVar #doesn't go anywhere

if __name__ == '__main__':
    main(sys.argv)
The return value of a program (or exit code) can only be an integer, not a string.
First, ensure you return an int, or do exit(n) where n is an int.
You might want to also fix:
if __name__ == '__main__':
    sys.exit(main(sys.argv))
You can then access the exit code of your program (script) in PowerShell with $LASTEXITCODE (the cmd.exe equivalent is echo %errorlevel%).
If you really want a string to be passed back to PowerShell, you should:
print your logs to stderr instead of stdout
print the result (e.g. a filename) to stdout at the end of execution
redirect stderr to your logfile and capture stdout, either through a PowerShell pipe | or by executing the Python script inside $() so the result can be used on the command line
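A minimal sketch of that split, reusing the "Please Work" value from the question's example (a real script would end with sys.exit(main())):

```python
import sys

def main():
    # diagnostics go to stderr, so a `2>> logFile` redirection captures them
    sys.stderr.write("This goes to file\n")
    result = "Please " + "Work"
    # the value the caller needs goes to stdout, where the shell can capture it
    sys.stdout.write(result + "\n")
    return 0
```

In PowerShell, something like $value = python myPythonScript.py args 2>> logFile would then leave "Please Work" in $value while the log line lands in logFile.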
Related
I have a shell script TestNode.sh. This script has content like this:
port_up=$(python TestPorts.py)
python TestRPMs.py
Now, I want to capture the value returned by these scripts.
TestPorts.py
def CheckPorts():
    if PortWorking(8080):
        print "8080 working"
        return "8080"
    elif PortWorking(9090):
        print "9090 working"
        return "9090"
But as I checked the available answers, they are not working for me. The print output ends up in the variable port_up, but I want the print to go to the console and port_up to get the value from the return statement. Is there a way to achieve this?
Note: I don't wish to use sys.exit(). Is it possible to achieve the same without this?
but I wanted that print should print on the console and the variable port_up should get the value from return statement.
Then don't capture the output. Instead do:
python TestPorts.py
port_up=$? # exit status of the last command
python TestRPMs.py
You could do:
def CheckPorts():
    if PortWorking(8080):
        sys.stderr.write("8080 working")
        print 8080
But then I am not very happy to print "output" to stderr either.
Alternatively, you could skip printing that "8080 working" message in python script and print it from the shell script.
def CheckPorts():
    if PortWorking(8080):
        return "8080"
and:
port_up=$(python TestPorts.py)
echo "$port_up working"
python TestRPMs.py
To return an exit code from a Python script you can use sys.exit(); exit() may also work. In the Bash (and similar) shell, the exit code of the previous command can be found in $?.
However, the Linux shell exit codes are 8 bit unsigned integers, i.e. in the range 0-255, as mentioned in this answer. So your strategy isn't going to work.
Perhaps you can print "8080 working" to stderr or a logfile and print "8080" to stdout so you can capture it with $().
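A sketch of that split, with a made-up PortWorking stand-in (assume only port 8080 answers) so it runs on its own:

```python
import sys

def PortWorking(port):
    # stand-in for the real check: pretend only 8080 answers
    return port == 8080

def CheckPorts():
    for port in (8080, 9090):
        if PortWorking(port):
            sys.stderr.write("%d working\n" % port)  # console message
            sys.stdout.write("%d\n" % port)          # value for $( )
            return port
    return None
```

With this, port_up=$(python TestPorts.py) captures only "8080", while "8080 working" still reaches the terminal via stderr.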
I'm trying to run a command from a Python script, and I want to store the output the command produces and then check it for a substring. However, it seems the output is not being stored in my variable, because it still prints to the screen.
So far I have this...
myfile = 'filename.txt'
result = subprocess.Popen(['myprogram.exe', '-f' + myfile], stdout=subprocess.PIPE).communicate()[0]
if result.find("error executing") != -1:
print "error!"
else:
print "success!"
I'm rather new to Python. Can anyone shed some light on why, when I run this script, myprogram.exe DOES execute but its output is still sent to the screen? If I print the result variable, it DOES contain some of myprogram.exe's output, but I need the lines that show the error too.
You're only redirecting stdout. Looks like your program outputs errors to stderr (as expected), add stderr=subprocess.PIPE to the Popen call.
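For illustration, here is a self-contained sketch where a small python -c child stands in for myprogram.exe and both streams are merged into the pipe:

```python
import subprocess
import sys

# a tiny child that writes to stderr, standing in for myprogram.exe
child_code = 'import sys; sys.stderr.write("error executing command\\n")'
proc = subprocess.Popen([sys.executable, '-c', child_code],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
result = proc.communicate()[0]
# with stderr folded into stdout, the error text is searchable
found_error = b"error executing" in result
```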
I am trying to determine the best way to execute something on the command line using Python. I have accomplished this with subprocess.Popen() on individual files. However, I am trying to determine the best way to do this many times with numerous different files. I am not sure if I should create a batch file and then execute that on the command line, or if I am simply missing something in my code. Novice coder here, so I apologize in advance. The script below returns a returncode of 1 when I use a loop, but a 0 when not in a loop. What is the best approach for the task at hand?
def check_output(command, console):
    if console == True:
        process = subprocess.Popen(command)
    else:
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
    output, error = process.communicate()
    returncode = process.poll()
    return returncode, output, error
for file in fileList.split(";"):
    ...code to create command string...
    returncode, output, error = check_output(command, False)
    if returncode != 0:
        print("Process failed")
        sys.exit()
EDIT: An example command string looks like this:
C:\Path\to\executable.exe -i C:\path\to\input.ext -o C:\path\to\output.ext
Try using the commands module (only available before Python 3):
>>> import commands
>>> commands.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')
Your code might look like this
import sys
import commands

def runCommand( command ):
    ret,output = commands.getstatusoutput( command )
    if ret != 0:
        sys.stderr.writelines( "Error: "+output )
    return ret

for file in fileList.split(';'):
    commandStr = ""
    # Create command string
    if runCommand( commandStr ):
        print("Command '%s' failed" % commandStr)
        sys.exit(1)
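Note that the commands module was removed in Python 3; subprocess.getstatusoutput is the closest modern equivalent:

```python
import subprocess

# Python 3 replacement for commands.getstatusoutput; runs the command
# through the shell and folds stderr into the returned output
status, output = subprocess.getstatusoutput('echo hello')
```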
You are not entirely clear about the problem you are trying to solve. If I had to guess why your command is failing in the loop, it's probably the way you handle the console=False case.
If you are merely running commands one after another, then it is probably easiest to cast aside Python and stick your commands into a bash script. I assume you merely want to check errors and abort if one of the commands fails.
#!/bin/bash
function abortOnError(){
    "$@"
    local rc=$?
    if [ $rc -ne 0 ]; then
        echo "The command $1 failed with error code $rc"
        exit 1
    fi
}
abortOnError ls /randomstringthatdoesnotexist
echo "Hello World" # This will never print, because we aborted
Update: OP updated his question with sample data that indicate he is on Windows.
You can get bash for Windows through cygwin or various other packages, but it may make more sense to use PowerShell if you are on Windows. Unfortunately, I do not have a Windows box, but there should be a similar mechanism for error checking. Here is a reference for PowerShell error handling.
You might consider using subprocess.call
from subprocess import call

for file_name in file_list:
    call_args = 'command ' + file_name
    call_args = call_args.split() # because call takes a list of strings
    call(call_args)
It returns the command's exit code: 0 for success, nonzero for failure.
What your code is trying to accomplish is to run a command on a file, and exit the script if there's an error. subprocess.check_output accomplishes this - if the subprocess exits with an error code it raises a Python error. Depending on whether you want to explicitly handle errors, your code would look something like this:
for file in fileList.split(";"):
    ...code to create command string...
    subprocess.check_output(command, shell=True)
Which will execute the command and print the shell error message if there is one, or
for file in fileList.split(";"):
    ...code to create command string...
    try:
        subprocess.check_output(command, shell=True)
    except subprocess.CalledProcessError:
        ...handle errors...
        sys.exit(1)
Which will print the shell error code and exit, as in your script.
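A self-contained sketch of the try/except pattern, with a deliberately failing python -c child standing in for the real command:

```python
import subprocess
import sys

# a stand-in command that always fails with exit status 3
failing = [sys.executable, '-c', 'import sys; sys.exit(3)']
try:
    subprocess.check_output(failing)
    failed = False
except subprocess.CalledProcessError as exc:
    failed = True
    exit_status = exc.returncode  # the child's nonzero exit status
```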
I want to execute a python script from a bash script, and I want to store the output of the python script in a variable.
In my python script, I print some stuff to screen and at the end I return a string with:
sys.exit(myString)
In my bash script, I did the following:
outputString=`python myPythonScript arg1 arg2 arg3 `
But then when I check the value of outputString with echo $outputString I get everything that the Python script had printed to screen, but not the return value myString!
How should I do this?
EDIT: I need the string because that tells me where a file created by the Python script is located. I want to do something like:
fileLocation=`python myPythonScript1 arg1 arg2 arg1`
python myPythonScript2 $fileLocation
sys.exit(myString) doesn't mean "return this string". If you pass a string to sys.exit, sys.exit will consider that string to be an error message, and it will write that string to stderr. The closest concept to a return value for an entire program is its exit status, which must be an integer.
If you want to capture output written to stderr, you can do something like
python yourscript 2> return_file
You could do something like this in your bash script:
output=$(your_command_here 2>&1)
This is not guaranteed to capture only the value passed to sys.exit, though. Anything else written to stderr will also be captured, which might include logging output or stack traces.
example:
test.py
print "something"
exit('ohoh')
t.sh
va=$(python test.py 2>&1)
mkdir $va
bash t.sh
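That behaviour (sys.exit writing the string to stderr and exiting with status 1) can also be checked from Python itself; here a python -c one-liner stands in for test.py:

```python
import subprocess
import sys

# a child that "returns" a string via sys.exit, like test.py above
prog = 'import sys; print("something"); sys.exit("ohoh")'
child = subprocess.Popen([sys.executable, '-c', prog],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = child.communicate()
exit_code = child.returncode  # 1, because sys.exit was given a string
```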
edit
Not sure why but in that case, I would write a main script and two other scripts... Mixing python and bash is pointless unless you really need to.
import sys
import script1
import script2

if __name__ == '__main__':
    filename = script1.run(sys.argv)
    script2.run(filename)
sys.exit() should be passed an integer, not a string:
sys.exit(1)
The value 1 is in $?.
$ cat e.py
import sys
sys.exit(1)
$ python e.py
$ echo $?
1
Edit:
If you want to write to stderr, use sys.stderr.
Do not use sys.exit like this. When called with a string argument, the exit code of your process will be 1, signaling an error condition. The string is printed to standard error to indicate what the error might be. sys.exit is not to be used to provide a "return value" for your script.
Instead, you should simply print the "return value" to standard output using a print statement, then call sys.exit(0), and capture the output in the shell.
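Since the asker wants the script to report a file location, here is a sketch of that pattern (the path and the one-liner script are made up), driven through subprocess so it is self-contained:

```python
import subprocess
import sys

# the recommended pattern: print the value on stdout, then exit 0
prog = 'import sys; sys.stdout.write("/tmp/result.txt\\n"); sys.exit(0)'
child = subprocess.Popen([sys.executable, '-c', prog], stdout=subprocess.PIPE)
file_location = child.communicate()[0].strip()
```

In the shell this is exactly fileLocation=`python myPythonScript1 arg1 arg2 arg1`, with an exit status of 0.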
read it in the docs.
If you pass anything other than an int or None to sys.exit, it will be printed to stderr.
To get just stderr while discarding stdout do:
output=$(python foo.py 2>&1 >/dev/null)
In addition to what Tichodroma said, you might end up using this syntax:
outputString=$(python myPythonScript arg1 arg2 arg3)
Python documentation for sys.exit([arg]) says:
The optional argument arg can be an integer giving the exit status (defaulting to zero), or another type of object. If it is an integer, zero is considered “successful termination” and any nonzero value is considered “abnormal termination” by shells and the like. Most systems require it to be in the range 0-127, and produce undefined results otherwise.
Moreover to retrieve the return value of the last executed program you could use the $? bash predefined variable.
Anyway, if you put a string as arg in sys.exit(), it is printed at the end of your program's output on a separate line (on stderr), so you can retrieve it with just a little bit of parsing. As an example consider this:
outputString=`python myPythonScript arg1 arg2 arg3 2>&1 | tail -1`
Currently, I am using the following command to do this
$ python scriptName.py <filePath
This command uses "<" to feed the file to the script on stdin,
and it works fine; I can use sys.stdin.read to get the file data.
But what if I want to pass the file data as a string?
I don't want to pass a file path via the "<" operator.
Is there any way to pass a string as stdin to a Python script?
Thanks,
Kamal
The way I read your question, you currently have some file abc.txt with content
Input to my program
And you execute it this way:
python scriptName.py <abc.txt
Now you no longer want to go by way of this file, and instead type the input as part of the command, while still reading from stdin. Working on the Windows command line you may do it like this:
echo Input to my program | python scriptName.py
while on Linux/Mac you'd better quote it to avoid shell expansion:
echo "Input to my program" | python scriptName.py
This only works for single-line input on Windows (AFAIK), while on Linux (and probably Mac) you can use echo's -e switch to insert newlines:
echo -e "first line\nsecond line" | python scriptName.py
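On the Python side, the receiving script just calls sys.stdin.read(); a sketch that simulates the pipe with a python -c stand-in for scriptName.py:

```python
import subprocess
import sys

# stand-in for scriptName.py: read all of stdin and echo it back tagged
reader = 'import sys; sys.stdout.write("got: " + sys.stdin.read())'
child = subprocess.Popen([sys.executable, '-c', reader],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out = child.communicate(b"Input to my program\n")[0]
```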
There is raw_input, which you can use to make the program prompt for input, and you can send in a string. And yes, it is mentioned in the first few pages of the tutorial at http://www.python.org.
>>> x = raw_input()
Something # you type
>>> x
'Something'
And sending the input via < the shell redirection operation is the property of shell and not python.
I could be wrong, but the way I read the OP's question, I think he may currently be calling an OS command to run a shell script inside his Python script, using the < operator to pass a file's contents into that shell script, with the < and the filename hard-coded.
What he really desires to do is a more dynamic approach where he can pass a string defined in Python to this shell script.
If this is the case, the method I would suggest is this:
import subprocess

script_child = subprocess.Popen(['/path/to/script/myScript.sh'],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
stdout, stderr = script_child.communicate("String to pass to the script.")
print "Stdout: ", stdout
print "Stderr: ", stderr
Alternatively, you can pass arguments to the script in the initial Popen like so:
script_child = subprocess.Popen(['/path/to/script/myScript.sh', '-v', 'value', '-fs'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
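The same pattern is easy to check with a throwaway child that uppercases its stdin (a python -c one-liner standing in for myScript.sh):

```python
import subprocess
import sys

# a child that uppercases its stdin, standing in for myScript.sh
upper = 'import sys; sys.stdout.write(sys.stdin.read().upper())'
script_child = subprocess.Popen([sys.executable, '-c', upper],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, stderr = script_child.communicate(b"String to pass to the script.")
```

Note that stderr comes back as None here, since only stdin and stdout were piped.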