I'm trying to collect some stats of how many test cases there are per feature or team, and want to do this only during collection, not when everyone in my company runs their tests. I'm looking for a way to run a function only if --collect-only was used in the py.test command.
Goal:
Run some code only if --collect-only was used
I need access to the test items data structure (i.e. I need all of the markers that were used)
Currently, I am doing this via the pytest_collection_modifyitems hook:
def pytest_collection_modifyitems(config, items):
    # This hook runs after collection
    for item in items:
        teams = [mark.name for mark in item.iter_markers() if mark.name.startswith('team_')]
        features = [mark.name for mark in item.iter_markers() if mark.name.startswith('feature_')]
        # do something with these markers
Is there a way to run the code above if and only if py.test was run with --collect-only? If anyone has suggestions on better ways to do this, please help!
Thanks!
Just check whether the flag is set in the config object. Example with your hookimpl:
def pytest_collection_modifyitems(config, items):
    if config.option.collectonly:
        print("I will run only with --collectonly flag")
    else:
        print("I will run only without --collectonly flag")
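Putting the two together, a minimal sketch for the original goal (gathering the per-team/per-feature counts only when --collect-only is used; the team_/feature_ prefixes are taken from the question) could look like this in conftest.py:

from collections import Counter

def pytest_collection_modifyitems(config, items):
    # Only gather stats when pytest was invoked with --collect-only
    if not config.option.collectonly:
        return
    team_counts = Counter()
    feature_counts = Counter()
    for item in items:
        for mark in item.iter_markers():
            if mark.name.startswith('team_'):
                team_counts[mark.name] += 1
            elif mark.name.startswith('feature_'):
                feature_counts[mark.name] += 1
    print("tests per team:", dict(team_counts))
    print("tests per feature:", dict(feature_counts))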
I'm using @pytest.mark to uniquely identify specific tests, so I created my own custom marker:
@pytest.mark.key
I'm using it like this:
@pytest.mark.key("test-001")
def test_simple(self):
    self.passing_step()
    self.passing_step()
    self.passing_step()
    self.passing_step()
    assert True
Now, from the console, I would like to run all tests marked with the key "test-001". How can I achieve this?
What I'm looking for is something like this:
pypi.org/project/pytest-jira/0.3.6
where a test can be mapped to a Jira key. I looked at the source code behind that link, but I'm unsure how to achieve the same thing so that I can run specific tests. Say I only want to run the test with the key "test-001".
Pytest does not provide this out of the box. You can filter by marker names using the -m option, but not by marker attributes.
You can add your own option to filter by keys, however. Here is an example:
conftest.py
def pytest_configure(config):
    # register your new marker to avoid warnings
    config.addinivalue_line(
        "markers",
        "key: specify a test key"
    )

def pytest_addoption(parser):
    # add your new filter option (you can name it whatever you want)
    parser.addoption('--key', action='store')

def pytest_collection_modifyitems(config, items):
    # check if you got an option like --key=test-001
    filter = config.getoption("--key")
    if filter:
        new_items = []
        for item in items:
            mark = item.get_closest_marker("key")
            if mark and mark.args and mark.args[0] == filter:
                # collect all items that have a key marker with that value
                new_items.append(item)
        items[:] = new_items
Now you run something like
pytest --key=test-001
to only run the tests with that marker attribute.
Note that this will still show the overall number of tests as collected, but run only the filtered ones. Here is an example:
test_key.py
import pytest

@pytest.mark.key("test-001")
def test_simple1():
    pass

@pytest.mark.key("test-002")
def test_simple2():
    pass

@pytest.mark.key("test-001")
def test_simple3():
    pass

def test_simple4():
    pass
$ python -m pytest -v --key=test-001 test_key.py
...
collected 4 items
test_key.py::test_simple1 PASSED
test_key.py::test_simple3 PASSED
================================================== 2 passed in 0.26s ==================================================
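If you also want the summary line to reflect the filtering instead of counting all tests as collected, a possible refinement of the conftest.py above (a sketch, building on the answer's code) is to report the dropped items through the standard pytest_deselected hook:

def pytest_collection_modifyitems(config, items):
    filter = config.getoption("--key")
    if filter:
        selected = []
        deselected = []
        for item in items:
            mark = item.get_closest_marker("key")
            if mark and mark.args and mark.args[0] == filter:
                selected.append(item)
            else:
                deselected.append(item)
        if deselected:
            # makes the run summary report "... deselected" for the filtered-out tests
            config.hook.pytest_deselected(items=deselected)
        items[:] = selected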
You can run pytest with the -m option. Check the command below:
pytest -m 'test-001' <your test file>
I have different suites of tests.
There are normal tests, there are slow tests, and there are backend tests (and arbitrarily more "special" tests that may need to be skipped unless specified).
I would like to always skip slow and backend tests, unless --run-slow and/or --run-backend CLI options are passed.
If only --run-slow is passed, I will run all normal tests plus "slow" tests (e.g. those tests marked @pytest.mark.slow). The same goes for --run-backend; and if both CLI options are passed, then run all normal tests + slow tests + backend tests.
I followed the "Control skipping of tests according to command line option" pattern from the pytest docs, but I don't yet know pytest well enough to extend it to multiple skip CLI options.
Can anyone help me out?
Reference code snippet:
import pytest

def pytest_addoption(parser):
    parser.addoption("--run-slow", action="store_true", default=False, help="run slow tests")
    parser.addoption("--run-backend", action="store_true", default=False, help="run backend tests")

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test to run slow")
    config.addinivalue_line("markers", "backend: mark test as backend")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-slow") or config.getoption("--run-backend"):  # <-- this line's logic is bad!
        return
    skip = pytest.mark.skip(reason="need '--run-*' option to run")
    for item in items:
        if "slow" in item.keywords or "backend" in item.keywords:
            item.add_marker(skip)
Thank you!
If I understand the question correctly, you are asking for the logic in pytest_collection_modifyitems. You could just change your code to something like:
conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    run_slow = config.getoption("--run-slow")
    run_backend = config.getoption("--run-backend")
    skip = pytest.mark.skip(reason="need '--run-*' option to run")
    for item in items:
        if (not run_slow and "slow" in item.keywords or
                not run_backend and "backend" in item.keywords):
            item.add_marker(skip)
This would skip all tests marked as slow or backend by default. If adding the command line options --run-slow and/or --run-backend, the respective tests would not be skipped.
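If you end up with more "special" suites than just slow and backend, one way to generalize the same idea (a sketch; the marker/option pairs below are only examples) is to drive everything from a single mapping in conftest.py:

import pytest

SPECIAL = {
    "slow": "--run-slow",
    "backend": "--run-backend",
    # add further marker/option pairs here
}

def pytest_addoption(parser):
    for marker, option in SPECIAL.items():
        parser.addoption(option, action="store_true", default=False,
                         help="run {} tests".format(marker))

def pytest_configure(config):
    for marker in SPECIAL:
        config.addinivalue_line("markers", "{0}: mark test as {0}".format(marker))

def pytest_collection_modifyitems(config, items):
    for item in items:
        for marker, option in SPECIAL.items():
            if marker in item.keywords and not config.getoption(option):
                item.add_marker(pytest.mark.skip(
                    reason="need {} option to run".format(option)))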
Another option (as mentioned in the comments) would be filtering by markers. In this case, instead of adding your own command line options, you could just filter tests by markers, e.g. use pytest -m "not (slow or backend)" to not run tests with these markers, and use pytest -m "not slow" and pytest -m "not backend" if you want to skip tests only for one marker. In this case you don't have to implement pytest_collection_modifyitems, but the default behavior would be to run all tests.
Please bear with me while I try to explain my predicament; I'm still a Python novice, so my terminology may not be correct. Also, I'm sorry for the inevitable long-windedness of this post, but I'll try to explain in as much relevant detail as possible.
A quick rundown:
I'm currently developing a suite of Selenium tests for a set of websites that are essentially the same in functionality, using py.test
Tests results are uploaded to TestRail, using the pytest plugin pytest-testrail.
Tests are tagged with the decorator @pytestrail.case(id), with a unique case ID
A typical test of mine looks like this:
@pytestrail.case('C100123')  # associates the function with the relevant TR case
@pytest.mark.usefixtures()
def test_login():
    # test code goes here
As I mentioned before, I'm aiming to create one set of code that handles a number of our websites with (virtually) identical functionality, so a hardcoded decorator in the example above won't work.
I tried a data-driven approach with a CSV containing a list of the tests and their case IDs in TestRail.
Example:
website1.csv:
Case ID | Test name
C100123 | test_login
website2.csv:
Case ID | Test name
C222123 | test_login
The code I wrote would use the inspect module to find the name of the test running, find the relevant test ID and put that into a variable called test_id:
import csv
import inspect

class trp(object):
    def __init__(self):
        pass

    with open(testcsv) as f:  # testcsv could be website1.csv or website2.csv
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]

    def gettestcase(self):
        self.current_test = inspect.stack()[3][3]
        for row in trp.tests:
            if self.current_test == row[1]:  # row[1] is the test-name column of the csv above
                self.test_id = row[0]
                print(self.test_id)
                return self.test_id, self.current_test

    def gettestid(self):
        self.gettestcase()
The idea was that the decorator would change dynamically based on the csv that I was using at the time.
@pytestrail.case(test_id)  # now a variable
@pytest.mark.usefixtures()
def test_login():
    trp.gettestid()
    # test code goes here
So if I ran test_login for website1, the decorator would look like:
@pytestrail.case('C100123')
and if I ran test_login for website2, the decorator would be:
@pytestrail.case('C222123')
I felt mighty proud of coming up with this solution by myself and tried it out... it didn't work. While the code does work by itself, I would get an exception because test_id is undefined (I understand why - gettestcase is executed after the decorator, so of course it would crash).
The only other way I can handle this is to apply the csv and testIDs before any test code is executed. My question is - how would I know how to associate the tests with their test IDs? What would an elegant, minimal solution to this be?
Sorry for the long winded question. I'll be watching closely to answer any questions if you need more explanation.
pytest is very good at doing all kinds of metaprogramming stuff for the tests. If I understand your question correctly, the code below will do the dynamic test marking with the pytestrail.case marker. In the project root dir, create a file named conftest.py and place this code in it:
import csv
from pytest_testrail.plugin import pytestrail

with open('website1.csv') as f:
    reader = csv.reader(f)
    next(reader)
    tests = [r for r in reader]

def pytest_collection_modifyitems(items):
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))
Now you don't need to mark the test with @pytestrail.case() at all - just write the rest of the code and pytest will take care of the marking:
def test_login():
    assert True
When pytest starts, the code above will read website1.csv and store the test IDs and names just as you did in your code. Before the test run starts, the pytest_collection_modifyitems hook will execute, analyzing the collected tests - if a test has the same name as one in the csv file, pytest will add the pytestrail.case marker with the test ID to it.
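If the same conftest.py has to serve several websites, one possible extension (a sketch; the --website-csv option name is made up here for illustration) is to pick the csv file from a command-line option instead of hardcoding website1.csv:

import csv
from pytest_testrail.plugin import pytestrail

def pytest_addoption(parser):
    # hypothetical option; point it at website1.csv, website2.csv, ...
    parser.addoption("--website-csv", action="store", default="website1.csv")

def pytest_collection_modifyitems(config, items):
    with open(config.getoption("--website-csv")) as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        tests = list(reader)
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))

Then running pytest --website-csv=website2.csv would apply the website2 case IDs.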
I believe the reason this isn't working as you would expect has to do with how python reads and executes files. When python starts executing it reads in the linked python file(s) and executes each line one-by-one, in turn. For things at the 'root' indention level (function/class definitions, decorators, variable assignments, etc) this means they get run exactly one time as they are loaded in, and never again. In your case, the python interpreter reads in the pytest-testrail decorator, then the pytest decorator, and finally the function definition, executing each one once, ever.
(Side note, this is why you should never use mutable objects as function argument defaults: Common Gotchas)
Given that you want to first deduce the current test name, then associate that with a test case ID, and finally use that ID with the decorator, I'm not sure that is possible with pytest-testrail's current functionality. At least, not possible without some esoteric and difficult to debug/maintain hack using descriptors or the like.
I think you realistically have one option: use a different TestRail client and update your pytest structure to use the new client. Two python clients I can recommend are testrail-python and TRAW (TestRail Api Wrapper)(*)
It will take more work on your part to create the fixtures for starting a run, updating results, and closing the run, but I think in the end you will have a more portable suite of tests and better results reporting.
(*) full disclosure: I am the creator/maintainer of TRAW, and also made significant contributions to testrail-python
Say I want to grade some student python code using tests, something like (this is pseudo-code I wish I could write):
code = __import__("student_code")  # Import the code to be tested
grade = 100
for test in all_tests():  # Loop over the tests that were gathered
    good = perform(test, code)  # Perform the test individually on the code
    if not good:  # Do something if the code gives the wrong result
        grade -= 1
For that, I would like to use pytest (easy to write good tests), but there are many things I don't know how to do:
how to run tests on external code? (here, the code imported from the student's file)
how to list all the tests available? (here all_tests())
how to run them individually on code? (here perform(test, code))
I couldn't find anything related to this use case (pytest.main() does not seem to do the trick anyhow...)
I hope you see my point, cheers!
EDIT
I finally found out how to handle my 1st point (applying tests to external code). In the repository where you want to perform the tests, create a conftest.py file with:
import imp  # Import standard library
import pytest

def pytest_addoption(parser):
    """Add a custom command-line option to py.test."""
    parser.addoption("--module", help="Code file to be tested.")

@pytest.fixture(scope='session')
def module(request):
    """Import code specified with the command-line custom option '--module'."""
    codename = request.config.getoption("--module")
    # Import module (standard __import__ does not support import by filename)
    try:
        code = imp.load_source('code', codename)
    except Exception as err:
        print "ERROR while importing {!r}".format(codename)
        raise
    return code
Then, gather your tests in a tests.py file, using the module fixture:
def test_sample(module):
    assert module.add(1, 2) == 3
Finally, run the tests with py.test tests.py --module student.py.
I'm still working on points 2 and 3.
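For points 2 and 3, one possible approach (a sketch that assumes the conftest.py above is in place; the _Collector class and the file names are just illustrations) is to drive pytest programmatically: collect once to list the node ids, then invoke each node id on its own and adjust the grade from the exit code:

import pytest

class _Collector:
    # Tiny plugin object that records the collected test node ids.
    def __init__(self):
        self.nodeids = []

    def pytest_collection_modifyitems(self, items):
        self.nodeids = [item.nodeid for item in items]

collector = _Collector()
# Point 2: list all available tests without running them.
pytest.main(["--collect-only", "-q", "tests.py", "--module", "student.py"],
            plugins=[collector])

grade = 100
# Point 3: run each collected test individually; a non-zero exit code means it failed.
for nodeid in collector.nodeids:
    if pytest.main([nodeid, "--module", "student.py", "-q"]) != 0:
        grade -= 1
print("grade: {}".format(grade))

Note that calling pytest.main() repeatedly in the same process can be fragile because of module caching, so invoking each node id in a subprocess is a safer variant.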
EDIT 2
I uploaded my (incomplete) take at this question:
https://gitlab.in2p3.fr/ycopin/pyTestExam
Help & contributions are welcome!
Very cool and interesting project. It's difficult to answer without knowing more.
Basically you should be able to do this by writing a custom plugin. Probably something you could place in a conftest.py in a test or project folder with your unittest subfolder and a subfolder for each student.
Probably would want to write two plugins:
One to allow weighting of tests (e.g. test_foo_10 and test_bar_5) and calculation of a final grade (e.g. 490/520); teamcity-messages is an example that uses the same hooks. A rough sketch of this weighting idea follows below.
Another to allow distribution of tests to separate processes (xdist is an example).
I know this is not a very complete answer, but I wanted to at least call out that last point. Since there is a very high probability that students will use overlapping module names, they would clash in a pytest world where tests are collected first and then run in a process that tries not to re-import modules with a common name.
Even if you attempt to control for that, you will eventually have a student manipulate the global namespace in a sloppy way that causes another student's code to fail. You will, for that reason, need either a bash script to run each student's file or a plugin that runs them in separate processes.
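To make the weighting idea mentioned above a bit more concrete, here is a rough sketch of how such a conftest.py plugin could tally a score (the trailing-number naming convention, e.g. test_foo_10 being worth 10 points, is only an assumption for illustration):

import re

_results = {}  # nodeid -> (weight, passed)

def _weight(nodeid):
    # Assumed convention: a trailing "_<points>" in the test name is its weight.
    match = re.search(r"_(\d+)$", nodeid.split("::")[-1])
    return int(match.group(1)) if match else 1

def pytest_runtest_logreport(report):
    # Record only the 'call' phase, ignoring setup/teardown results.
    if report.when == "call":
        _results[report.nodeid] = (_weight(report.nodeid), report.passed)

def pytest_terminal_summary(terminalreporter):
    total = sum(w for w, _ in _results.values())
    earned = sum(w for w, passed in _results.values() if passed)
    terminalreporter.write_line("Score: {}/{}".format(earned, total))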
Make this use case a graded take-home exam and see what they come up with :-) ... naaah ... but you could ;-)
I came up with something like this (where it is assumed that the sum function is the student code):
import unittest

score = 0

class TestAndGrade(unittest.TestCase):

    def test_correctness(self):
        self.assertEqual(sum([2,2]), 4)
        global score; score += 6  # Increase score

    def test_edge_cases(self):
        with self.assertRaises(Exception):
            sum([2,'hello'])
        global score; score += 1  # Increase score

    # Print the score
    @classmethod
    def tearDownClass(cls):
        global score
        print('\n\n-------------')
        print('| Score: {} |'.format(score))
        print('-------------\n')

# Run the tests
unittest.main()
In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies.
I'm not sure how to achieve this; I need an expert's advice to move ahead.
Until a SKIP status is implemented, you can use exitonfailure to stop further execution if a critical test fails, and then modify the output.xml (and the resulting report and log) to show those tests as "NOT_RUN" (gray color) rather than "FAILED" (red color).
Here's an example (Tested on RobotFramework 3.1.1 and Python 3.6):
First create a new class that extends the abstract class ResultVisitor:
from robot.api import ResultVisitor

class ResultSkippedAfterCritical(ResultVisitor):
    def visit_suite(self, suite):
        suite.set_criticality(critical_tags='Critical')
        for test in suite.tests:
            if test.status == 'FAIL' and "Critical failure occurred" in test.message:
                test.status = 'NOT_RUN'
                test.message = 'Skipping test execution after critical failure.'
Assuming you've already created the suite (for example with TestSuiteBuilder()), run it without creating report.html and log.html:
from robot.api import logger

outputDir = suite.name.replace(" ", "_")
outputFile = "output.xml"
logger.info(F"Running Test Suite: {suite.name}", also_console=True)
result = suite.run(output=outputFile, outputdir=outputDir,
                   report=None, log=None, critical='Critical', exitonfailure=True)
Notice that I've used "Critical" as the identifying tag for critical tests, along with the exitonfailure option.
Then, revisit the output.xml, and create report.html and log.html from it:
import os
from robot.api import ExecutionResult, ResultWriter

revisitOutputFile = os.path.join(outputDir, outputFile)
logger.info(F"Checking skipped tests in {revisitOutputFile} due to critical failures", also_console=True)
result = ExecutionResult(revisitOutputFile)
result.visit(ResultSkippedAfterCritical())
result.save(revisitOutputFile)

reportFile = 'report.html'
logFile = 'log.html'
logger.info(F"Generating {reportFile} and {logFile}", also_console=True)
writer = ResultWriter(result)
writer.write_results(outputdir=outputDir, report=reportFile, log=logFile)
It should display all the tests after the critical failure with a grayed-out status of "NOT_RUN".
There is nothing you can do; robot only supports two values for the test status: pass and fail. You can mark a test as non-critical so it won't break the build, but it will still show up in logs and reports as having been run.
The robot core team has said they will not support this feature. See issue 1732 for more information.
Even though robot doesn't support the notion of skipped tests, you have the option to write a script that scans output.xml and removes tests that you somehow marked as skipped (perhaps by adding a tag to the test). You will also have to adjust the counts of the failed tests in the xml. Once you've modified the output.xml file, you can use rebot to regenerate the log and report files.
If you only need the change to be made for your log/report files you should take a look here for implementing a SuiteVisitor for the --prerebotmodifier option. As stated by Bryan Oakley, this might screw up your pass/fail count if you don't keep that in mind.
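A minimal sketch of such a --prerebotmodifier visitor (assuming, purely for illustration, that the tests you want to show as not executed carry a skipped tag) could look like this:

from robot.api import SuiteVisitor

class MarkTaggedAsNotRun(SuiteVisitor):
    # Turn result tests tagged 'skipped' grey in the regenerated log/report.
    def visit_test(self, test):
        if 'skipped' in test.tags:
            test.status = 'NOT_RUN'
            test.message = 'Marked as not executed by a prerebotmodifier.'

You would then regenerate the log/report with something like rebot --prerebotmodifier prerebot_skip.MarkTaggedAsNotRun output.xml (with the class saved in prerebot_skip.py), keeping in mind the pass/fail count caveat mentioned above.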
Currently it doesn't seem to be possible to actually alter the test status before output.xml is created, but there are plans to implement that in RF 3.0, and there is a discussion about a skip status.
Another, more complex, solution would be to implement a listener (used with the --listener option) that creates an output file that fits your needs, possibly alongside the original output.xml.
There is also the possibility to set tags during test execution, but I'm not familiar with that yet, so I can't really say much about it at the moment. That might be another way to account for those dependency failures, as there are options to ignore certain tagged keywords in the log/report generation.
I solved it this way:
Run Keyword If    ${blabla}==${True}    do-this-task    ELSE    log to console    ${PREV_TEST_STATUS}${yellow}| NRUN |
The test is not executed and is marked as NRUN.
Actually, you can set tags to run whatever tests you like (for sanity testing, regression testing, ...).
Just go to your test script configuration and set tags.
And whenever you want to run, just go to the Run tab and select the checkbox Only run tests with these tags / Skip tests with these tags.
Then click the Start button :) Robot Framework will select any test that matches and run it.
Sorry, I don't have enough reputation to post images :(