print statements only appearing before or after a subprocess call - python

This question is related to Python: why print statements and subprocess.call() output are out of sync?, but the solutions there are not working for my particular situation. I am trying to automate a test process in Python that first prints the name of a test suite, then has a for loop that iterates through a list, printing the name of each test, followed by a subprocess call to execute the test, and ending with a print statement saying the test is finished. Lastly, I print a statement saying that the test suite has ended.
i.e.
=========BEGIN TEST SUITE: =========
---------start test: ----------
subprocess call to execute test
---------end test: ------------
repeat for tests
=========END TEST SUITE: ==========
My current code works fine, but when I redirect its output to a file, all the print statements end up at the bottom. When I use the solutions from the other question, I get the complete opposite: all the print statements come first, and then the tests execute. I tried sys.stdout.flush(), as well as turning off buffering, but both give me the same result. How can I get everything printed in exactly the order it executes in the code when I redirect output or write to a file?

The way I generally solve problems like this is to capture the output inside the Python program and print it when I want it printed, using Popen:
>>> import subprocess
>>> def do_ls():
...     print "I'm about to ls"
...     ls = subprocess.Popen('ls', stdout=subprocess.PIPE)
...     output = ls.communicate()[0]  # wait for the process to finish, capture stdout
...     print 'ls output:'
...     print output
...     print 'done.'
I can show you how to do this more specifically for your tests if you post some code.
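In the meantime, applied to the layout described in the question it might look roughly like the sketch below (the suite name, test list, and ./run_test.sh command are placeholders):
import subprocess

suite = "MySuite"                            # placeholder suite name
tests = ["test_login", "test_logout"]        # placeholder test list

print "=========BEGIN TEST SUITE: %s=========" % suite
for test in tests:
    print "---------start test: %s----------" % test
    # Capture the test's output so it is printed right here, in order.
    proc = subprocess.Popen(["./run_test.sh", test], stdout=subprocess.PIPE)
    output = proc.communicate()[0]           # wait for the test to finish
    print output
    print "---------end test: %s------------" % test
print "=========END TEST SUITE: %s==========" % suite
Because the parent prints the child's output itself, everything goes through one stdout, so the order is preserved even when the whole thing is redirected to a file.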

Related

Pytest print to stdout despite output capturing

I am aware that PyTest captures the output (stdout, stderr, ...) for the tests it executes, and that is an awesome feature I want to keep. However, there is some content that I wish to print to the console immediately from within my conftest.py file, as general information for the person watching the test execution from their terminal. Using a print statement there does not work, as the output of the conftest.py file seems to also be captured, and is only shown if an error happens while executing that file.
Is there a way for me to explicitly bypass this "PyTest output capturing" for a single print statement?
On Unix you could open the controlling terminal and print to it directly:
import os

# os.ctermid() names the controlling terminal (usually /dev/tty)
termfd = open(os.ctermid(), "w")
termfd.write("hello\n")
termfd.close()
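Alternatively, inside a fixture or test, pytest's capsys fixture provides a disabled() context manager that temporarily switches capturing off; a minimal conftest.py sketch:
# conftest.py -- minimal sketch using pytest's capsys fixture
import pytest

@pytest.fixture(autouse=True)
def announce(capsys):
    with capsys.disabled():
        print("about to run a test")  # goes straight to the terminal
    yield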

Get previous console output as string in script

I have a script that prints output to the console, e.g. (not the actual code, just an example):
print("Hello World")
I want to be able to catch this output as a string and store it as a variable:
print("Hello World")
# function to catch previous line output and store it as a variable
I'm assuming from the wording of your question that the first print call runs in a different script. In that case you can run that script using the subprocess module and capture its output like this:
from subprocess import run
result = run(['script.py'], capture_output=True)
previous_output = result.stdout   # note: bytes, unless you also pass text=True
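If script.py is not marked executable or is not on PATH, it may be more reliable to launch it through the interpreter itself; a sketch along the same lines (the script name is just an example):
import sys
from subprocess import run

# Run the script with the same interpreter that is running this code.
result = run([sys.executable, 'script.py'], capture_output=True, text=True)
previous_output = result.stdout   # everything the script printed, as one string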
You can just do this:
a = "Hello World !"
print(a)
It's easier than trying to capture the output after the string has already been printed, but if you insist, @Blupper already answered your question.
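If the print call lives in the same script, the standard library's contextlib.redirect_stdout can also capture it into a string; a minimal sketch:
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    print("Hello World")       # written into buf instead of the console
captured = buf.getvalue()      # "Hello World\n"
print(captured.strip())        # use it like any other string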

getting python script to print to terminal without returning as part of stdout

I'm trying to write a Python script that returns a value which I can then pass to a bash script. The thing is that I want a single value returned to bash, but I want a few things printed to the terminal along the way.
Here is an example script. Let's call it return5.py:
#! /usr/bin/env python
import sys
print "hi"
sys.stdout.write(str(5))
What I want is for it to behave this way when I run it from the command line:
~:five=`./return5.py`
hi
~:echo $five
5
but what I get is:
~:five=`./return5.py`
~:echo $five
hi 5
In other words, I don't know how to have a Python script print things along the way and still have only the specific value I want end up in the shell variable.
Not sure why @yorodm suggests not to use stderr. That's the best option I can think of in this case.
Notice that print will add a newline automatically, but when you use sys.stderr.write, you need to include one yourself with a "\n".
#! /usr/bin/env python
import sys
sys.stderr.write("This is an important message,")
sys.stderr.write(" but I dont want it to be considered")
sys.stderr.write(" part of the output. \n")
sys.stderr.write("It will be printed to the screen.\n")
# The following will be output.
print 5
Using this script looks like this:
bash$ five=`./return5.py`
This is an important message, but I dont want it to be considered part of the output.
It will be printed to the screen.
bash$ echo $five
5
This works because the terminal is really showing you three streams of information: stdout, stdin and stderr. The `cmd` syntax says "capture the stdout from this process", but it doesn't affect what happens to stderr. This was designed exactly for the purpose you're using it for -- communicating information about errors, warnings or what's going on inside the process.
You may not have realized that stdin is also displayed in the terminal, because it's just what shows up when you type. But it wouldn't have to be that way. You could imagine typing into the terminal and having nothing show up. In fact, this is exactly what happens when you type in a password. You're still sending data to stdin, but the terminal is not displaying it.
From my comment:
#!/usr/bin/env python
#foo.py
import sys
print "hi"
sys.exit(5)
Then the output:
[~] ./foo.py
hi
[~] FIVE=$?
[~] echo $FIVE
5
You can also swap the roles: print your messages to stdout and capture the value from stderr in bash. Unfortunately that is rather odd behaviour, as stderr is intended for programs to communicate error messages, so I strongly advise against it.
OTOH, you can always process your script's output in bash.
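For example, tee can mirror everything to the terminal while the shell keeps only the last line as the value; a sketch (note it also echoes the value line itself):
five=$(./return5.py | tee /dev/tty | tail -n 1)
echo "$five"    # 5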

Print the output of os.system to the screen and take actions based upon it

I've got a Python script that uses os.system to run shell commands. The output from those commands is echoed to the screen; I like this and need to keep it. I would also like my script to be able to take action based on the contents of the output from the system call. How can I do this?
In my specific case, I'm calling os.system("svn update"). I need the output to go to the screen and (in case of conflicts, for example), the user needs to be able to interact with svn. I would like to be able to have the script take action based on the output - to trigger a build if it sees that a build script was updated, for example.
I'd prefer not to handle the I/O myself (that would seem unnecessarily complex) and I'd rather not send the output to a temporary file that I have to clean up later (though I will if I must).
Edit:
Here's my test script:
#!/usr/bin/python -t
import subprocess
output = subprocess.check_output(["echo","one"])
print "python:", output
output = subprocess.check_output(["echo", "two"], shell=True)
print "python:", output
output = subprocess.check_output("echo three", shell=True)
print "python:", output
and here's its output:
$ ./pytest
python: one
python:
python: three
(There's an extra blank line at the end that the code block doesn't show.) I expect something more like:
$ ./pytest
one
python: one
two
python:
three
python: three
To run a process, I would look into subprocess.check_output. In this case, something like:
output = subprocess.check_output(['svn','update'])
print output
This only works on Python 2.7 or newer, though. If you want a version that works with older versions of Python:
p = subprocess.Popen(['svn', 'update'], stdout=subprocess.PIPE)
output, stderr = p.communicate()   # stderr is None here because it was not piped
print output
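Note that check_output captures the output instead of letting it reach the screen. If the output should appear on the terminal as it happens and still be inspected afterwards, one option is to read the pipe line by line and echo it yourself; a sketch (the build-script name is made up):
import subprocess
import sys

p = subprocess.Popen(['svn', 'update'], stdout=subprocess.PIPE)
captured = []
for line in iter(p.stdout.readline, ''):  # read until EOF
    sys.stdout.write(line)                # echo to the screen as it arrives
    captured.append(line)                 # keep a copy for the script
p.stdout.close()
p.wait()

output = ''.join(captured)
if 'build.xml' in output:                 # hypothetical trigger file
    print "build script was updated, starting a build"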
