I'm using pytest to run tests on my python scripts.
In one case I need to compare text output of a script which contains both spaces and tabs.
By default, pytest doesn't seem to use special characters to represent spaces and tabs, which makes reading comparison failures very difficult.
Every time I have to copy the error into my editor to find the difference.
Is it possible to make pytest represent invisible characters more distinctly?
Here is what I mean:
The first line contains 4 spaces, the second line has a tab, and the last line is empty.
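For illustration, a minimal sketch of the kind of failing test I mean (the strings here are made up stand-ins for the real script output):

def test_whitespace_output():
    expected = "    \n\t\n"   # four spaces, then a tab line, then an empty line
    actual = "\t\n    \n"     # same characters in a different order
    # with the default report it is hard to see which side has tabs vs spaces
    assert actual == expected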
You can use the following hook in your conftest.py file to define a custom assertion failure output:
TAB_REPLACEMENT = "[tab]"
SPACE_REPLACEMENT = "[space]"
def pytest_assertrepr_compare(config, op, left, right):
    left = left.replace(" ", SPACE_REPLACEMENT).replace("\t", TAB_REPLACEMENT)
    right = right.replace(" ", SPACE_REPLACEMENT).replace("\t", TAB_REPLACEMENT)
    return [f"{left} {op} {right} failed!"]
Test example:
def test_1():
    assert "    " == "\t"
Output:
=== test session starts ===
platform win32 -- Python 3.10.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
collected 1 item
test.py F [100%]
=== FAILURES ===
___ test_1 ___
    def test_1():
>       assert "    " == "\t"
E       assert [space][space][space][space] == [tab] failed!

test.py:2: AssertionError
=== short test summary info ===
FAILED test.py::test_1 - assert [space][space][space][space] == [tab] failed!
=== 1 failed in 0.20s ===
So you can replace [tab] and [space] with any symbol you want.
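If bracketed tags feel too noisy, here is a sketch of the same hook using visible Unicode glyphs instead (assuming your terminal font renders them); the isinstance check keeps the hook from interfering with non-string comparisons:

# conftest.py
TAB_GLYPH = "→"
SPACE_GLYPH = "·"

def pytest_assertrepr_compare(config, op, left, right):
    # only touch string-to-string comparisons; returning None keeps
    # pytest's default representation for everything else
    if not (isinstance(left, str) and isinstance(right, str)):
        return None
    left = left.replace(" ", SPACE_GLYPH).replace("\t", TAB_GLYPH)
    right = right.replace(" ", SPACE_GLYPH).replace("\t", TAB_GLYPH)
    return [f"{left} {op} {right} failed!"]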
I am trying to capture the return value of a pytest test. I am running these tests programmatically, and I want to return relevant information when a test fails.
I thought I could perhaps return the value of kernel as follows such that I can print that information later when listing failed tests:
def test_eval(test_input, expected):
    kernel = os.system("uname -r")
    assert eval(test_input) == expected, kernel
This doesn't work. When I later loop through the TestReports that are generated, there is no way to access any return information. The only information available in the TestReport is the name of the test and a True/False outcome.
For example one of the test reports looks as follows:
<TestReport 'test_simulation.py::test_host_has_correct_kernel_version[simulation-host]' when='call' outcome='failed'>
Is there a way to return a value after the assert fails, back to the TestReport? I have tried doing this with PyTest plugins but have been unsuccessful.
Here is the code I am using to run the tests programmatically. You can see where I am trying to access the return value.
import pytest
from util import bcolors

class Plugin:
    def __init__(self):
        self.passed_tests = set()
        self.skipped_tests = set()
        self.failed_tests = set()
        self.unknown_tests = set()

    def pytest_runtest_logreport(self, report):
        print(report)
        if report.passed:
            self.passed_tests.add(report)
        elif report.skipped:
            self.skipped_tests.add(report)
        elif report.failed:
            self.failed_tests.add(report)
        else:
            self.unknown_tests.add(report)

if __name__ == "__main__":
    plugin = Plugin()
    pytest.main(["-s", "-p", "no:terminal"], plugins=[plugin])

    for passed in plugin.passed_tests:
        result = passed.nodeid
        print(bcolors.OKGREEN + "[OK]\t" + bcolors.ENDC + result)
    for skipped in plugin.skipped_tests:
        result = skipped.nodeid
        print(bcolors.OKBLUE + "[SKIPPED]\t" + bcolors.ENDC + result)
    for failed in plugin.failed_tests:
        result = failed.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
    for unknown in plugin.unknown_tests:
        result = unknown.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
The goal is to be able to print out "extra context information" when listing the FAILED tests, so that the information needed to debug why a test is failing is immediately available.
You can extract failure details from the raised AssertionError in a custom pytest_exception_interact hookimpl. Example:
# conftest.py

def pytest_exception_interact(node, call, report):
    # the assertion message should be parsed here,
    # because pytest rewrites assert statements in bytecode
    message = call.excinfo.value.args[0]
    lines = message.split()
    kernel = lines[0]
    report.sections.append((
        'Kernels reported in assert failures:',
        f'{report.nodeid} reported {kernel}'
    ))
Running a test module
import subprocess

def test_bacon():
    assert True

def test_eggs():
    kernel = subprocess.run(
        ["uname", "-r"],
        stdout=subprocess.PIPE,
        text=True
    ).stdout
    assert 0 == 1, kernel
yields:
test_spam.py::test_bacon PASSED [ 50%]
test_spam.py::test_eggs FAILED [100%]
=================================== FAILURES ===================================
__________________________________ test_eggs ___________________________________
def test_eggs():
kernel = subprocess.run(
["uname", "-r"],
stdout=subprocess.PIPE,
text=True
).stdout
> assert 0 == 1, kernel
E AssertionError: 5.5.15-200.fc31.x86_64
E
E assert 0 == 1
E +0
E -1
test_spam.py:12: AssertionError
--------------------- Kernels reported in assert failures: ---------------------
test_spam.py::test_eggs reported 5.5.15-200.fc31.x86_64
=========================== short test summary info ============================
FAILED test_spam.py::test_eggs - AssertionError: 5.5.15-200.fc31.x86_64
========================= 1 failed, 1 passed in 0.05s ==========================
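If you are also collecting reports programmatically, as in the question's runner script, the extra section added by the hook travels with each TestReport, so the summary loop can print it. A sketch, assuming the plugin object from the question:

for failed in plugin.failed_tests:
    print("[FAIL]\t" + failed.nodeid)
    # report.sections is a list of (title, content) tuples;
    # the hookimpl above appended one entry per failing assert
    for title, content in failed.sections:
        print("\t" + title + " " + content)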
When a test is xfailed, the printed reason reports the test file, test class and test case, while a skipped test reports only the test file and the line where skip is called.
Here is a test example:
#!/usr/bin/env pytest
import pytest

@pytest.mark.xfail(reason="Reason of failure")
def test_1():
    pytest.fail("This will fail here")

@pytest.mark.skip(reason="Reason of skipping")
def test_2():
    pytest.fail("This will fail here")
This is the actual result:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED [1] test_file.py:9: Reason of skipping
XFAIL test_file.py::test_1
Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
But I would expect to get something like:
pytest test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
XFAIL test_file.py::test_1: Reason of failure
SKIPPED test_file.py::test_2: Reason of skipping
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
You have two possible ways to achieve this. The quick and dirty way: just redefine _pytest.skipping.show_xfailed in your test_file.py:
import _pytest

def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)

# show_xfailed_bkp = _pytest.skipping.show_xfailed
_pytest.skipping.show_xfailed = custom_show_xfailed

# ... your tests
The (not so) clean way: create a conftest.py file in the same directory as your test_file.py, and add a hook:
import pytest
import _pytest

def custom_show_xfailed(terminalreporter, lines):
    xfailed = terminalreporter.stats.get("xfailed")
    if xfailed:
        for rep in xfailed:
            pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
            reason = rep.wasxfail
            s = "XFAIL %s" % (pos,)
            if reason:
                s += ": " + str(reason)
            lines.append(s)

@pytest.hookimpl(tryfirst=True)
def pytest_terminal_summary(terminalreporter):
    tr = terminalreporter
    if not tr.reportchars:
        return
    lines = []
    for char in tr.reportchars:
        if char == "x":
            custom_show_xfailed(terminalreporter, lines)
        elif char == "X":
            _pytest.skipping.show_xpassed(terminalreporter, lines)
        elif char in "fF":
            _pytest.skipping.show_simple(terminalreporter, lines, 'failed', "FAIL %s")
        elif char in "sS":
            _pytest.skipping.show_skipped(terminalreporter, lines)
        elif char == "E":
            _pytest.skipping.show_simple(terminalreporter, lines, 'error', "ERROR %s")
        elif char == 'p':
            _pytest.skipping.show_simple(terminalreporter, lines, 'passed', "PASSED %s")
    if lines:
        tr._tw.sep("=", "short test summary info")
        for line in lines:
            tr._tw.line(line)
    tr.reportchars = []  # to avoid further output
The second method is overkill, because you have to redefine the whole pytest_terminal_summary.
Thanks to this answer I've found the following solution that works perfectly for me.
I've created a conftest.py file in the root of my test suite with the following content:
import _pytest.skipping as s

def show_xfailed(tr, lines):
    for rep in tr.stats.get("xfailed", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.wasxfail
        line = "XFAIL\t%s" % pos
        if reason:
            line += ": " + str(reason)
        lines.append(line)

s.REPORTCHAR_ACTIONS["x"] = show_xfailed

def show_skipped(tr, lines):
    for rep in tr.stats.get("skipped", []):
        pos = tr.config.cwd_relative_nodeid(rep.nodeid)
        reason = rep.longrepr[-1]
        if reason.startswith("Skipped: "):
            reason = reason[9:]
        verbose_word = s._get_report_str(tr.config, report=rep)
        lines.append("%s\t%s: %s" % (verbose_word, pos, reason))

s.REPORTCHAR_ACTIONS["s"] = show_skipped
s.REPORTCHAR_ACTIONS["S"] = show_skipped
And now I'm getting the following output:
./test_file.py -rsx
============================= test session starts =============================
platform linux -- Python 3.5.2, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: /home/ashot/questions
collected 2 items
test_file.py xs [100%]
=========================== short test summary info ===========================
SKIPPED test_file.py::test_2: Reason of skipping
XFAIL test_file.py::test_1: Reason of failure
==================== 1 skipped, 1 xfailed in 0.05 seconds =====================
I want to get a list of all tests (e.g. in the form of a py.test TestReport) at the end of all tests.
I know that pytest_runtest_makereport does something similar, but only for a single test. Instead, I want to implement a hook or something in conftest.py to process the whole list of tests before the py.test application terminates.
Is there a way to do this?
Here is an example which can help you. Structure of files:
/example
    __init__.py          # empty file
    /test_pack_1
        __init__.py      # empty file
        conftest.py      # pytest hooks
        test_my.py       # a few tests for demonstration
There are 2 tests in test_my.py:
def test_one():
    assert 1 == 1
    print('1==1')

def test_two():
    assert 1 == 2
    print('1!=2')
Example of conftest.py:
import pytest
from _pytest.runner import TestReport
from _pytest.terminal import TerminalReporter

@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(terminalreporter):  # type: (TerminalReporter) -> generator
    yield
    # you can do anything here - I just print report info
    print('*' * 8 + 'HERE CUSTOM LOGIC' + '*' * 8)
    for failed in terminalreporter.stats.get('failed', []):  # type: TestReport
        print('failed! node_id:%s, duration: %s, details: %s' % (failed.nodeid,
                                                                 failed.duration,
                                                                 str(failed.longrepr)))
    for passed in terminalreporter.stats.get('passed', []):  # type: TestReport
        print('passed! node_id:%s, duration: %s, details: %s' % (passed.nodeid,
                                                                 passed.duration,
                                                                 str(passed.longrepr)))
The documentation says that pytest_terminal_summary also takes an exitstatus argument.
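For reference, a minimal sketch of a hookimpl that also accepts exitstatus (0 means all tests passed, 1 means some failed):

# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(terminalreporter, exitstatus):
    yield
    # runs after the normal summary; exitstatus is the session exit code
    print('session finished with exit status %s' % exitstatus)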
Run tests without any additional options: py.test ./example. Example of output:
example/test_pack_1/test_my.py .F
********HERE CUSTOM LOGIC********
failed! node_id:test_pack_1/test_my.py::test_two, duration: 0.000385999679565, details: def test_two():
> assert 1 == 2
E assert 1 == 2
example/test_pack_1/test_my.py:7: AssertionError
passed! node_id:test_pack_1/test_my.py::test_one, duration: 0.00019907951355, details: None
=================================== FAILURES ===================================
___________________________________ test_two ___________________________________
def test_two():
> assert 1 == 2
E assert 1 == 2
example/test_pack_1/test_my.py:7: AssertionError
====================== 1 failed, 1 passed in 0.01 seconds ======================
Hope this helps.
Note: make sure that stale .pyc files are removed before running the tests.
I'm encountering a quite strange issue with a comparison.
I want to decode Python script arguments, store them and analyse them. In the following command, the -r option is used to determine which type of report to create.
Python script invocation:
%run decodage_parametres_script.py -r junit,html
The script's option parsing fills a dictionary; the result is:
current_options :
{'-cli-no-summary': False, '-cli-silent': False, '-r': ['junit', 'html']}
Then I want to test the -r options, here is the code:
for i in current_options['-r']:
    # for each reporter required with the -r option:
    # - check that a file path has been configured (otherwise set default)
    # - create the file and initialize fields
    print("trace i", i)
    print("trace current_options['-r'] = ", current_options['-r'])
    print("trace current_options['-r'][0] = ", current_options['-r'][0])
    if current_options['-r'][i] == 'junit':
        # request for a xml report file
        print("request xml export")
        try:
            xml_file_path = current_option['--reporter-junit-export']
            print("xml file path = ", xml_file_path)
        except:
            # missing file configuration
            print("xml option - missing file path information")
            timestamp = get_timestamp()
            xml_file_path = 'default_report' + '_' + timestamp + '.xml'
            print("xml file path = ", xml_file_path)
        if xml_file_path is not None:
            touch(xml_file_path)
            print("xml file path = ", xml_file_path)
        else:
            print('ERROR: Empty --reporter-junit-export path')
            sys.exit(0)
    else:
        print("no xml file required")
I want to try the default report generation, but I never even hit the print("request xml export") line. Here is the console output:
trace i junit
trace current_options['-r'] = ['junit', 'html']
trace current_options['-r'][0] = junit
As I guess it could be a type issue, I tried the following tests:
In [557]: for i in current_options['-r']:
...: print(i, type(i))
...:
junit <class 'str'>
html <class 'str'>
In [558]: toto = 'junit'
In [559]: type(toto)
Out[559]: str
In [560]: toto
Out[560]: 'junit'
In [561]: toto == current_options['-r'][0]
Out[561]: True
So my condition if current_options['-r'][i] == 'junit': should end up being True, but that's not the case.
Am I missing something trivial? :(
Can someone help me, please?
You are iterating over a list of strings:
for i in current_options['-r']:
in your case i will be:
junit on first iteration
html on next iteration
so from the interpreter's perspective your if condition looks like:
if current_options['-r']['junit'] == 'junit':
instead of the expected:
if current_options['-r'][0] == 'junit':
Solution 1:
You need to iterate over range(len(current_options['-r'])) so that i is an integer index.
Solution 2:
Change your comparison
from
if current_options['-r'][i] == 'junit':
to
if i == 'junit':
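Putting solution 2 together, a sketch of the corrected loop (assuming current_options is filled as shown in the question; only the comparison changes):

for reporter in current_options['-r']:
    # reporter is already the string itself ('junit' or 'html'),
    # so compare it directly instead of using it as an index
    if reporter == 'junit':
        print("request xml export")
    else:
        print("no xml file required")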
I have two files, one with a list of keywords/strings:
blue fox
the
lazy dog
orange
of
file
Another, with text:
The blue fox jumped
over the lazy dog
this file has nothing important
lines repeat
this line does not match
I want to take the list of strings in the first file and find the lines from the second file that match any of the strings from the first. So I wrote a Pig script with a Python UDF:
register match.py using jython as match;
A = LOAD 'words.txt' AS (word:chararray);
B = LOAD 'text.txt' AS (line:chararray);
C = GROUP A ALL;
D = FOREACH B generate match.match(C.$1,line);
dump D;
# match.py
@outputSchema("str:chararray")
def match(wordlist, line):
    linestr = str(line)
    for word in wordlist:
        wordstr = str(word)
        if re.search(wordstr, linestr):
            return line
This ends in an error:
"2014-04-01 06:22:34,775 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias D. Backend error : Error executing function"
Detailed Error log:
Backend error message
---------------------
org.apache.pig.backend.executionengine.ExecException: ERROR 0: Error executing function
at org.apache.pig.scripting.jython.JythonFunction.exec(JythonFunction.java:120)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:337)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:434)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:340)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:372)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:297)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:283)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
at o
Pig Stack Trace
---------------
ERROR 1066: Unable to open iterator for alias D. Backend error : Error executing function
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias D. Backend error : Error executing function
at org.apache.pig.PigServer.openIterator(PigServer.java:828)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:696)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:320)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:538)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Error executing function
at org.apache.pig.scripting.jython.JythonFunction.exec(JythonFunction.java:120)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:337)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:434)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:340)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:372)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:297)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:283)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
================================================================================
I suspect the "re" module isn't available to Jython on my CDH4.x cluster. I did not spend much time on the Python UDF and solved it by writing a Java UDF instead. Pardon my Java since I am a n00b; it may not be the most efficient or prettiest Java code (and there are some bugs in there, I am sure):
package pigext;

import java.util.regex.Pattern;
import java.util.regex.Matcher;
import java.io.IOException;
import java.util.*;
import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.DataType;

public class matchList extends EvalFunc<String> {
    public String exec(Tuple input) throws IOException {
        try {
            String line = (String) input.get(0);
            DataBag bag = (DataBag) input.get(1);
            Iterator it = bag.iterator();
            String output = "";
            while (it.hasNext()) {
                Tuple t = (Tuple) it.next();
                if (t != null && t.size() > 0 && t.get(0) != null && line != null) {
                    String cmd = t.get(0).toString();
                    if (line.toLowerCase().matches(cmd.toLowerCase())) {
                        return (line + "," + cmd);
                    }
                }
            }
            return output;
        } catch (Exception e) {
            throw new IOException("Failed to process row", e);
        }
    }
}
The way to use it is to have a file with the regexes you want to search for, one per line, plus your target text file. So a regex file "wordstest.txt" such as:
.*?this +blah.*?
And, your text file,text.txt, is:
this blah starts with blah
this blah has way too many spaces
that won't match
thisblahshouldnotmatch
thisblah should not match either
the line here is this blah
line here has this blah in the middle
line here has this blah with extra spaces
only has blah
only has this
The pig script would be:
REGISTER pigext.jar;
A = LOAD 'wordstest.txt' AS (cmd:chararray);
B = LOAD 'text.txt' AS (line:chararray);
C = GROUP A ALL;
D = FOREACH B generate pigext.matchList(line,C.$1);
dump D;
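For what it's worth, the matching logic itself can be sanity-checked locally in plain Python before deploying either UDF; a sketch assuming the wordstest.txt and text.txt files from the example are in the current directory:

import re

# read one regex per line, skipping blank lines
with open('wordstest.txt') as f:
    patterns = [p.strip() for p in f if p.strip()]

# print every line of the text file that matches any of the patterns
with open('text.txt') as f:
    for line in f:
        line = line.rstrip('\n')
        if any(re.search(p, line, re.IGNORECASE) for p in patterns):
            print(line)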