I am aware that pytest captures the output (stdout, stderr, ...) of the tests it executes, and that is an awesome feature I want to keep. However, there is some content that I wish to print to the console immediately from within my conftest.py file, as general information for the person watching the test execution from their terminal. Using a print statement there does not work, as the output of the conftest.py file seems to be captured as well, and is only shown if an error happens while executing that file.
Is there a way for me to explicitly bypass this pytest output capturing for a single print statement?
On Unix you could open the controlling terminal and print to it directly:

import os

with open(os.ctermid(), "w") as term:
    term.write("hello\n")
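If it helps, here is a minimal sketch of wrapping that idea in a helper you could call from conftest.py. The `terminal_print` name and the optional `path` parameter are my own inventions for illustration; by default it targets the controlling terminal, which only exists on Unix and only when a terminal is attached:

```python
import os

def terminal_print(message, path=None):
    # Write directly to the controlling terminal, bypassing pytest's
    # capture of sys.stdout/sys.stderr. An explicit path can be passed
    # instead, e.g. when no terminal is attached.
    target = path if path is not None else os.ctermid()
    with open(target, "w") as term:
        term.write(message + "\n")
```

You could then call something like `terminal_print("collecting tests...")` from a hook in conftest.py.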
My goal is to have a file with the results from my unit tests, including any exceptions that arose.
I am executing my Python test script using pytest, which generates an XML file with the exact details I want. The problem is that this file is generated right after the execution of the test finishes, not during its execution as I want.
Here is the command I execute my test with:
python -m pytest test_signout.py --junitxml ./results/TestsOut.xml -vs
I have tried redirecting stdout to a file, successfully, but of course this doesn't contain any details related to exceptions, such as assertion errors, that I want to get.
I have tried the same method for stderr, but the file stays empty.
Here is what I am trying to do:
from contextlib import contextmanager
import sys

@contextmanager
def redirected_stdout(filename):
    original_stderr = sys.stderr
    sys.stderr = open(filename, 'at')
    try:
        yield
    finally:
        sys.stderr = original_stderr
The way I am using this is by adding the following line at the start of each of my test methods and indenting the rest of the method's code inside it:
with redirected_stdout("./results/TestsOut.log"):
Is there a way I could get both stderr and stdout logged during runtime or is there another way to achieve what I want?
After I generate this file I am planning as the last step of my test to read it and pass it to the test management solution I use.
UPDATE:
I have changed stderr to stdout and added file=sys.stdout to all print statements located in except clauses, and now this data gets printed to the file I want. I don't get the whole output, but at least I get the exceptions. If there is a better way to do this, please let me know.
Is there any way to get a Python script to write logs to two different locations, a file and the terminal? It should only print logs to the terminal when I run it manually; otherwise, when it's run by something else, it should only write logs to a log file. Is this possible at all?
You can use the isatty() method, which returns True if your program is connected to a tty and False otherwise. So in your case you could write something like this:

import sys

if sys.stdin.isatty():
    # Logs put here will be displayed on the terminal when you invoke the script via the CLI
    print("Logging to the terminal")
else:
    with open("/testFile.txt", "w") as f:
        f.write("Logging in a file")
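The same idea extends to the standard `logging` module: always attach a `FileHandler`, and attach a `StreamHandler` only when a terminal is present. A sketch, where the logger name and the `script.log` path are my own choices:

```python
import logging
import sys

logger = logging.getLogger("myscript")
logger.setLevel(logging.INFO)

# Always log to a file.
logger.addHandler(logging.FileHandler("script.log"))

# Additionally log to the terminal, but only when run interactively.
if sys.stdin.isatty():
    logger.addHandler(logging.StreamHandler())

logger.info("starting up")
```

Run manually, the message appears in both places; run from cron or a service, it lands only in the file.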
I'm trying to write a Python script that returns a value which I can then pass to a bash script. The thing is that I want a single value returned in bash, but I want a few things printed to the terminal along the way.
Here is an example script. Let's call it return5.py:
#! /usr/bin/env python
import sys

print("hi")
sys.stdout.write(str(5))
What I want is for it to behave this way when I run it from the command line:
~:five=`./return5.py`
hi
~:echo $five
5
but what I get is:
~:five=`./return5.py`
~:echo $five
hi 5
In other words, I don't know how to have a Python script print messages to the terminal along the way while only the final value ends up in the variable.
Not sure why @yorodm suggests not to use stderr. That's the best option I can think of in this case.
Notice that print will add a newline automatically, but when you use sys.stderr.write, you need to include one yourself with a "\n".
#! /usr/bin/env python
import sys
sys.stderr.write("This is an important message,")
sys.stderr.write(" but I dont want it to be considered")
sys.stderr.write(" part of the output. \n")
sys.stderr.write("It will be printed to the screen.\n")
# The following will be output.
print(5)
Using this script looks like this:
bash$ five=`./return5.py`
This is an important message, but I dont want it to be considered part of the output.
It will be printed to the screen.
bash$ echo $five
5
This works because the terminal is really showing you three streams of information: stdout, stdin and stderr. The `cmd` syntax says "capture the stdout from this process", but it doesn't affect what happens to stderr. stderr was designed exactly for the purpose you're using it for: communicating information about errors, warnings, or what's going on inside the process.
You may not have realized that stdin is also displayed in the terminal, because it's just what shows up when you type. But it wouldn't have to be that way. You could imagine typing into the terminal and having nothing show up. In fact, this is exactly what happens when you type in a password. You're still sending data to stdin, but the terminal is not displaying it.
From my comment:
#!/usr/bin/env python
# foo.py
import sys

print("hi")
sys.exit(5)
Then the output:
[~] ./foo.py
hi
[~] FIVE=$?
[~] echo $FIVE
5
You can use stdout to output your messages and stderr to capture the values in bash. Unfortunately this is somewhat weird behaviour, as stderr is intended for programs to communicate error messages, so I strongly advise you against it.
OTOH you can always process your script output in bash
I've got a Python script that uses os.system to run shell commands. The output from those commands is echoed to the screen; I like this and need to keep it. I would also like my script to be able to take action based on the contents of the output from the system call. How can I do this?
In my specific case, I'm calling os.system("svn update"). I need the output to go to the screen and (in case of conflicts, for example), the user needs to be able to interact with svn. I would like to be able to have the script take action based on the output - to trigger a build if it sees that a build script was updated, for example.
I'd prefer not to handle the I/O myself (that would seem unnecessarily complex) and I'd rather not send the output to a temporary file that I have to clean up later (though I will if I must).
Edit:
Here's my test script:
#!/usr/bin/python -t
import subprocess
output = subprocess.check_output(["echo","one"])
print "python:", output
output = subprocess.check_output(["echo", "two"], shell=True)
print "python:", output
output = subprocess.check_output("echo three", shell=True)
print "python:", output
and here's its output:
$ ./pytest
python: one
python:
python: three
(There's an extra blank line at the end that the code block doesn't show.) I expect something more like:
$ ./pytest
one
python: one
two
python:
three
python: three
To run a process, I would look into subprocess.check_output. In this case, something like:
import subprocess

output = subprocess.check_output(['svn', 'update'])
print(output)

This only works on Python 2.7 or newer, though. If you want a version that works with older versions of Python:

p = subprocess.Popen(['svn', 'update'], stdout=subprocess.PIPE)
output, stderr = p.communicate()
print(output)
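To get both behaviours at once, echoing to the screen as the command runs while keeping the text for inspection, one option is to read the pipe line by line. A sketch, using `echo` in place of `svn update` (note that because stdout is a pipe rather than a terminal here, programs that detect a tty may behave differently, and interactive prompts may not work the same way):

```python
import subprocess

proc = subprocess.Popen(["echo", "build.sh updated"],
                        stdout=subprocess.PIPE, text=True)
captured = []
for line in proc.stdout:
    print(line, end="")    # echo each line to the screen as it arrives...
    captured.append(line)  # ...while keeping a copy for later inspection
proc.wait()
output = "".join(captured)

if "build.sh" in output:
    pass  # e.g. trigger a build here
```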
This question is related to Python: why print statements and subprocess.call() output are out of sync?, but the solutions there are not working for my particular situation. I am trying to build an automated test process in Python that first prints the name of a test suite, then has a for loop which iterates through a list, printing the name of each test, followed by a subprocess call to execute the test, ended with a print statement that the test is finished. Lastly, I print a statement marking the end of the test suite.
i.e.
=========BEGIN TEST SUITE: =========
---------start test: ----------
subprocess call to execute test
---------end test: ------------
repeat for tests
=========END TEST SUITE: ==========
My current code works fine, but when I redirect its output to a file, it puts all the print statements at the bottom. When using the solutions from the other question, it does the complete opposite for me and prints all the print statements first, then executes the tests. I tried using sys.stdout.flush(), as well as turning off buffering, but both give me the same result. How can I get everything printed in exactly the order it executes in the code when I redirect the output to a file?
The way I generally solve problems like this is to capture the output in the Python program and print it when I want it to print, using a Popen:
>>> def do_ls():
...     print("I'm about to ls")
...     ls = subprocess.Popen('ls', stdout=subprocess.PIPE)
...     output = ls.communicate()[0]  # wait for the process to finish, capture stdout
...     print('ls output:')
...     print(output)
...     print('done.')
I can show you how to do this more specifically for your tests if you post some code
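Applied to the suite layout described above, that approach looks roughly like this. The suite name, test names, and the child command (a trivial `python -c` stand-in for a real test) are placeholders:

```python
import subprocess
import sys

def run_suite(suite, tests):
    print(f"=========BEGIN TEST SUITE: {suite}=========")
    for test in tests:
        print(f"---------start test: {test}----------")
        sys.stdout.flush()  # make sure the banner is written out before the child runs
        # Capture the child's output, then replay it, so ordering is
        # preserved even when stdout is redirected to a file.
        result = subprocess.run(
            [sys.executable, "-c", f"print('executing {test}')"],
            capture_output=True, text=True)
        print(result.stdout, end="")
        print(f"---------end test: {test}------------")
    print(f"=========END TEST SUITE: {suite}==========")
```

Because the child's output is captured and printed by the parent, everything goes through the same stream in order, whether that stream is a terminal or a file.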