I'm trying to get the executed code out of each pytest test case (in order to build a debugger). This is possible with the coverage module API; I would use the following code:
cov.start()
assert nrm.mySqrt(x) == 3  # this is a test case using pytest
cov.stop()
cov.json_report()
cov.erase()  # after this line I repeat the steps above
But that isn't practical, and if I use the coverage run -m pytest fileName command it won't give me a report for each case. I'm trying to write a plugin to handle a special case: whenever the executed line includes an assert, I want to get a JSON report and then erase the data. Will I be able to do that, or should I find a way to embed the code above in each test case?
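For reference, here is a minimal sketch of the per-test pattern I have in mind, wrapped in an autouse fixture (the fixture name and report file naming are placeholders, and json_report needs coverage 5.0+):
# conftest.py - a rough sketch, not a finished plugin
import coverage
import pytest

@pytest.fixture(autouse=True)
def per_test_coverage(request):
    cov = coverage.Coverage()
    cov.start()
    yield
    cov.stop()
    # write one JSON report per test case, named after the test, then reset the data
    cov.json_report(outfile='coverage_{0}.json'.format(request.node.name))
    cov.erase()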
I am using Pycharm to run my pytest unit tests. I am testing a REST API, so I often have to validate blocks of JSON. When a test fails, I'll see something like this:
FAILED
test_document_api.py:0 (test_create_documents)
{'items': [{'i...ages': 1, ...} != {'items': [{'...ages': 1, ...}
Expected :{'items': [{'...ages': 1, ...}
Actual :{'items': [{'i...ages': 1, ...}
<Click to see difference>
When I click on the "Click to see difference" link, most of the difference is replaced with ellipses, which is useless since it doesn't show me what is actually different. I get this behavior for any difference larger than a single string or number.
I assume Pycharm and/or pytest tries to elide uninformative parts of differences for large outputs. However, it's being too aggressive here and eliding everything.
How do I get Pycharm and/or pytest to show me the entire difference?
I've tried adding -vvv to pytest's Additional Arguments, but that has no effect.
Since the original post I verified that I see the same behavior when I run unit tests from the command line. So this is an issue with pytest and not Pycharm.
After looking at the answers I've got so far I guess what I'm really asking is "in pytest is it possible to set maxDiff=None without changing the source code of your tests?" The impression I've gotten from reading about pytest is that the -vv switch is what controls this setting, but this does not appear to be the case.
If you look closely into the PyCharm sources, you'll see that, out of the whole pytest output, PyCharm uses a single line to parse the data displayed in the Click to see difference dialog: the AssertionError: <message> line:
def test_spam():
>       assert v1 == v2
E       AssertionError: assert {'foo': 'bar'} == {'foo': 'baz'}
E         Differing items:
E         {'foo': 'bar'} != {'foo': 'baz'}
E         Use -v to get the full diff
If you want to see the full diff line without truncation, you need to customize this line in the output. For a single test, this can be done by adding a custom message to the assert statement:
def test_eggs():
    assert a == b, '{0} != {1}'.format(a, b)
If you want to apply this behaviour to all tests, define a custom pytest_assertrepr_compare hook in the conftest.py file:
# conftest.py
def pytest_assertrepr_compare(config, op, left, right):
    if op in ('==', '!='):
        return ['{0} {1} {2}'.format(left, op, right)]
The equality comparison of the values in the AssertionError line will now not be stripped, so the full diff is displayed in the Click to see difference dialog, with the differing parts highlighted. Note that the terminal output itself is still truncated when it is too long; to see the complete line there, you still need to increase the verbosity with the -vv flag.
Since pytest integrates with unittest, as a workaround you may be able to write the test as a unittest.TestCase and set maxDiff = None on the class, or self.maxDiff = None in each specific test.
https://docs.pytest.org/en/latest/index.html
Can run unittest (including trial) and nose test suites out of the box;
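For example, a rough sketch of that workaround (the class and test names here are made up):
import unittest

class DocumentApiTest(unittest.TestCase):
    maxDiff = None  # disable unittest's diff truncation for every test in this class

    def test_create_documents(self):
        expected = {'items': [1, 2, 3], 'pages': 1}
        actual = {'items': [1, 2, 4], 'pages': 1}
        self.assertEqual(expected, actual)  # a failure now shows the full diff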
These may be helpful as well...
https://stackoverflow.com/a/21615720/9530790
https://stackoverflow.com/a/23617918/9530790
Had a look in the pytest code base and maybe you can try some of these out:
1) Set verbosity level in the test execution:
./app_main --pytest --verbose test-suite/
2) Add an environment variable for "CI" or "BUILD_NUMBER". In the truncate file linked below you can see that these env variables are used to determine whether or not the truncation block runs.
import os
os.environ["BUILD_NUMBER"] = '1'
os.environ["CI"] = 'CI_BUILD'
3) Attempt to set DEFAULT_MAX_LINES and DEFAULT_MAX_CHARS on the truncate module (Not recommending this since it uses a private module):
from _pytest.assertion import truncate
truncate.DEFAULT_MAX_CHARS = 1000
truncate.DEFAULT_MAX_LINES = 1000
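If it helps, here is a rough sketch of how 2) and 3) could be combined in a conftest.py (purely illustrative; the truncate module is private, so the attribute names may change between pytest versions):
# conftest.py
import os

os.environ["CI"] = "CI_BUILD"  # make pytest think it is running on CI, which skips truncation

try:
    from _pytest.assertion import truncate
    truncate.DEFAULT_MAX_CHARS = 1000
    truncate.DEFAULT_MAX_LINES = 1000
except ImportError:
    # private module; it may not exist under this name in every pytest version
    pass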
According to the code, the -vv option should work, so it's strange that it doesn't for you:
Current default behaviour is to truncate assertion explanations at
~8 terminal lines, unless running in "-vv" mode or running on CI.
The pytest truncation file, which is what I'm basing my answers on: pytest/truncate.py
Hope something here helps you!
I have some getattr calls in my assertions and nothing ever shows up after AssertionError.
I added -lv to the Additional Arguments field to show the local variables.
I was running into something similar and created a function that returns a string with a nice diff of the two dicts. In a pytest-style test this looks like:
assert expected == actual, build_diff_string(expected, actual)
And in unittest style:
self.assertEqual(expected, actual, build_diff_string(expected, actual))
The downside is that you have to modify all the tests that have this issue, but it's a Keep It Simple, Stupid solution.
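For completeness, one possible way to write build_diff_string with the standard library (just a sketch; the answer above doesn't show its actual implementation):
import difflib
import pprint

def build_diff_string(expected, actual):
    # pretty-print both values and return a unified diff of the two
    expected_lines = pprint.pformat(expected).splitlines()
    actual_lines = pprint.pformat(actual).splitlines()
    diff = difflib.unified_diff(expected_lines, actual_lines,
                                fromfile='expected', tofile='actual', lineterm='')
    return '\n' + '\n'.join(diff)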
In PyCharm you can just put -vv in the run configuration's Additional Arguments field, and this should solve the issue.
Or at least, it worked on my machine...
I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
add test.
run tests (one fails because I haven't written the code to make it pass)
implement the behaviour
run only the test that failed last time
fix the silly error I made when implementing the code
run only the failing test, which passes this time
run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
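This only works if the script ends with the usual unittest entry point, which passes the command line arguments on to the test loader:
if __name__ == '__main__':
    unittest.main()  # picks the test names to run from sys.argv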
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: replace all occurrences of "test_" with "_test_", then find the one test that failed and remove its underscore.
Just run the tests with the --last-failed option (you'll need pytest for this).
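For example:
pytest --last-failed    # or the short form: pytest --lf
pytest --failed-first   # run the previous failures first, then the rest (short form: --ff)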
I'm trying to run py.test with coverage for my program, but I keep getting the warning: Coverage.py warning: No data was collected.
even though the code still contains untested functions (in my example, the function diff). Below is the example code on which I tested the command py.test --cov=testcov.py. I'm using Python 2.7.9.
def suma(x,y):
    z = x + y
    return z

def diff(x,y):
    return x-y

if __name__ == "__main__":
    a = suma(2,3)
    b = diff(7,5)
    print a
    print b
## ------------------------TESTS-----------------------------
import pytest
def testSuma():
    assert suma(2,3) == 5
Can someone explain to me what I am doing wrong?
You haven't said what all your files are named, so I'm not sure of the precise answer. But the argument to --cov should be a module name, not a file name. So instead of py.test --cov=testcov.py, you want py.test --cov=testcov.
What worked well for me is:
py.test mytests/test_mytest.py --cov='.'
Specifying the path, '.' in this case, removes unwanted files from the coverage report.
py.test looks for functions that start with test_. You should rename your test functions accordingly. To apply coverage you execute py.test --cov. If you want a nice HTML report that also shows you which lines are not covered you can use py.test --cov --cov-report html.
By default py.test looks for files matching test_*.py. You can customize it with pytest.ini
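For example, a pytest.ini along these lines (a sketch based on the file name from the question) makes py.test collect testcov.py even though it doesn't match the default pattern:
[pytest]
python_files = testcov.py test_*.py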
Btw, according to the Python style guide (PEP 8) it should be test_suma, but that has no impact on py.test.
My tests clearly execute each function, and there are no unused imports either. Yet, according to the coverage report, 62% of the code was never executed in the following file:
Can someone please point out what I might be doing wrong?
Here's how I initialise the test suite and the coverage:
cov = coverage(branch=True, omit=['website/*', 'run_test_suite.py'])
cov.start()
try:
    unittest.main(argv=[sys.argv[0]])
except:
    pass
cov.stop()
cov.save()
print "\n\nCoverage Report:\n"
cov.report()
print "HTML version: " + os.path.join(BASEDIR, "tmp/coverage/index.html")
cov.html_report(directory='tmp/coverage')
cov.erase()
This is the third question in the coverage.py FAQ:
Q: Why do the bodies of functions (or classes) show as executed, but
the def lines do not?
This happens because coverage is started after the functions are
defined. The definition lines are executed without coverage
measurement, then coverage is started, then the function is called.
This means the body is measured, but the definition of the function
itself is not.
To fix this, start coverage earlier. If you use the command line to
run your program with coverage, then your entire program will be
monitored. If you are using the API, you need to call coverage.start()
before importing the modules that define your functions.
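In API terms, the reordering would look roughly like this (a sketch using the modern coverage.Coverage class; the test module name is a placeholder):
import sys
import unittest
import coverage

cov = coverage.Coverage(branch=True, omit=['website/*', 'run_test_suite.py'])
cov.start()

# import the tests (and, through them, the code under test) only after
# coverage has started, so that the def/class lines are measured too
import website_tests  # hypothetical module containing the TestCase classes

unittest.main(module=website_tests, argv=[sys.argv[0]], exit=False)

cov.stop()
cov.save()
cov.report()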
The simplest thing to do is run your tests under coverage:
$ coverage run -m unittest discover
Your custom test script isn't doing much beyond what the coverage command line would do; it will be simpler to just use the command line.
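After the run, you can produce the same reports your script was generating, for example:
$ coverage report
$ coverage html -d tmp/coverage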
For excluding the import statements, you can add the following lines to .coveragerc:
[report]
exclude_lines =
    # Ignore imports
    from
    import
But when I tried to add '@' in order to exclude decorators, the source code inside the decorated functions was excluded as well, and the coverage rate was wrong. There may be some other way to exclude decorators.
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in Jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
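In context it looks something like this (the test body below is just an illustration):
import pytest

@pytest.mark.parametrize('value', [1, 2, 3])
def test_known_bug(value):
    pytest.skip('Bug 1234 : This does not work')  # this reason ends up in the junitxml report
    assert value > 0  # never reached while the skip is in place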
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test, or which module, the skipped test was in.
To solve this (OK, hack around it), I went back to using the decorator on my test and added a dummy test (so there is 1 test that runs and 1 test that gets skipped):
@pytest.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.