Most test frameworks assume that "1 test = 1 Python method/function", and consider a test passed when the function executes without raising an assertion.
I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. In other words, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)
I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
I get the feeling that I'm abusing my test framework: there's only one test function, so unittest's automatic test discovery sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
It's a lot more code to write.
It's easy to "forget" a testcase, i.e. create a test file in
tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds really overkill for a use-case which doesn't seem so complex.
How could I get the best of both, i.e.
automatic testcase discovery (list tests/*.foo files), test
independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
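If the full paths make the generated test names unwieldy, parametrize also accepts an ids argument; a small variation of the above (same all_files list, only the test IDs change, and the assert is just a placeholder):

import os, glob, pytest

all_files = glob.glob('some/path/*.foo')

# Name each test after the file's basename, e.g. test_one_file[a.foo]
@pytest.mark.parametrize('filename', all_files,
                         ids=[os.path.basename(f) for f in all_files])
def test_one_file(filename):
    assert filename.endswith('.foo')  # placeholder for the actual check

That keeps the failure headers short while still identifying exactly which input file failed.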
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names as arguments (e.g., filenames):
# imports
import unittest
import inspect  # needed by the generated test functions below

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# set string for creating new function
func_string = """def test(cls):
    # get function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print function name for demonstration purposes
    print(filename)
    # dummy test for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add new test for each item in list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set name of new function
    name = func_string.replace('test', f)
    # create new function
    exec(name)
    # add new function to test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
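A variant of the same idea without exec, closer to the setattr approach sketched in the question: build each test method as a closure over the filename and attach it to the class. This is only a sketch; the tests/*.foo layout and the one_file method come from the question, and the placeholder check inside one_file is illustrative:

import glob
import os
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test (placeholder check)
        self.assertTrue(os.path.getsize(filename) >= 0)

def make_test(filename):
    # The closure captures the filename for this particular test case
    def test(self):
        self.one_file(filename)
    return test

# Generate one test method per input file, named after the file
for path in glob.glob('tests/*.foo'):
    test_name = 'test_' + os.path.splitext(os.path.basename(path))[0]
    setattr(Test, test_name, make_test(path))

if __name__ == "__main__":
    unittest.main()

Each *.foo file then shows up as its own test in the report, a failure in one file does not stop the others, and newly added files are picked up automatically.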
Related
I have a list of test cases in different modules (just showing a glimpse here, but the actual count is around 10).
I want pytest to execute these modules in order: first module-a.py, next module-b.py, and so on.
However, I am only aware of how to order the test cases within a module, which is working perfectly fine.
Our overall application is a kind of pipeline where module-a's output is consumed by module-b, and so on.
We want to test it completely end-to-end in a defined order of modules.
module-a.py
-----------
import pytest

@pytest.mark.run(order=2)
def test_three():
    assert True

@pytest.mark.run(order=1)
def test_four():
    assert True

module-b.py
-----------
@pytest.mark.run(order=2)
def test_two():
    assert True

@pytest.mark.run(order=1)
def test_one():
    assert True
From the above code, I want
module-a: test_four, test_three
module-b: test_one, test_two
to be executed in that order.
Currently we are running each module separately with --cov-append, something like below, and generating the final coverage:
pytest module-a.py
pytest --cov-append module-b.py
Can anyone please help with a better approach or method?
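One thing worth trying (a sketch, not a verified answer for your exact setup): pytest collects test files in the order they are given on the command line, so a single invocation that lists the modules in the desired order preserves the cross-module order, lets pytest-ordering keep control inside each module, and produces one combined coverage report without --cov-append:

# Hypothetical single invocation; --cov=mypackage is a placeholder for your source path
pytest module-a.py module-b.py --cov=mypackage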
I'm currently working my way through Python Crash Course and have run into a problem during the testing chapter. I've gone through it with a fine-tooth comb, explained it to my imaginary rubber duck, and can't see at all where I'm going wrong.
Running the test file gives me "Ran 0 tests in 0.000s" but no errors that I can see.
The first block is from my file "survey.py", and the second is the test file "testSurvey.py".
Any help is hugely appreciated.
class AnonymousSurvey():
    def __init__(self, question):
        self.question = question
        self.responses = []

    def showQuestion(self):
        print(self.question)

    def storeResponse(self, newResponse):
        self.responses.append(newResponse)

    def showResults(self):
        print("The survey results are")
        for response in self.responses:
            print("-- " + response)
import unittest
from survey import AnonymousSurvey

class TestAnonymousSurvey(unittest.TestCase):
    def TestStoreSingleResponse(self):
        question = "What is your favourite language?"
        mySurvey = AnonymousSurvey(question)
        responses = ["English", "Latin", "Franglais"]
        for response in responses:
            mySurvey.storeResponse(response)
        for response in responses:
            self.assertIn(response, mySurvey.responses)

unittest.main()
Your test methods should start with the keyword 'test', like 'test_storing_single_response()'. Pytest identifies methods starting with 'test' as test cases, and unittest's default loader does the same, which is why your TestStoreSingleResponse method is never run.
Check out the pytest good practices:
Conventions for Python test discovery
pytest implements the following standard test discovery:
- If no arguments are specified then collection starts from testpaths
(if configured) or the current directory. Alternatively, command line
arguments can be used in any combination of directories, file names
or node ids.
- Recurse into directories, unless they match norecursedirs.
- In those directories, search for test_*.py or *_test.py files,
imported by their test package name.
- From those files, collect test items:
- test prefixed test functions or methods outside of class
- test prefixed test functions or methods inside Test prefixed test
classes (without an __init__ method)
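Applied to your testSurvey.py, that just means renaming the method so it starts with 'test'; a minimal sketch of the corrected test file:

import unittest
from survey import AnonymousSurvey

class TestAnonymousSurvey(unittest.TestCase):
    def test_store_single_response(self):  # name starts with "test", so it is discovered
        question = "What is your favourite language?"
        mySurvey = AnonymousSurvey(question)
        mySurvey.storeResponse("English")
        self.assertIn("English", mySurvey.responses)

unittest.main()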
Please bear with me while I try to explain my predicament; I'm still a Python novice, so my terminology may not be correct. I'm also sorry for the inevitable long-windedness of this post, but I'll try to explain in as much relevant detail as possible.
A quick rundown:
I'm currently developing a suite of Selenium tests for a set of websites that are essentially the same in functionality, using py.test
Test results are uploaded to TestRail, using the pytest plugin pytest-testrail.
Tests are tagged with the decorator @pytestrail.case(id) with a unique case ID
A typical test of mine looks like this:
@pytestrail.case('C100123')  # associates the function with the relevant TR case
@pytest.mark.usefixtures()
def test_login():
    # test code goes here
As I mentioned before, I'm aiming to create one set of code that handles a number of our websites with (virtually) identical functionality, so a hardcoded decorator in the example above won't work.
I tried a data-driven approach with a csv listing the tests and their case IDs in TestRail.
Example:
website1.csv:
Case ID | Test name
C100123 | test_login
website2.csv:
Case ID | Test name
C222123 | test_login
The code I wrote would use the inspect module to find the name of the test running, find the relevant test ID and put that into a variable called test_id:
import csv
import inspect

class trp(object):
    def __init__(self):
        pass

    with open(testcsv) as f:  # testcsv could be website1.csv or website2.csv
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]

    def gettestcase(self):
        self.current_test = inspect.stack()[3][3]
        for row in trp.tests:
            if self.current_test == row[2]:
                self.test_id = (row[0])
                print(self.test_id)
                return self.test_id, self.current_test

    def gettestid(self):
        self.gettestcase()
The idea was that the decorator would change dynamically based on the csv that I was using at the time.
@pytestrail.case(test_id)  # now a variable
@pytest.mark.usefixtures()
def test_login():
    trp.gettestid()
    # test code goes here
So if I ran test_login for website1, the decorator would look like:
@pytestrail.case('C100123')
and if I ran test_login for website2 the decorator would be:
@pytestrail.case('C222123')
I felt mighty proud of coming up with this solution by myself and tried it out... it didn't work. While the code does work by itself, I would get an exception because test_id is undefined (I understand why: gettestcase is executed after the decorator, so of course it would crash).
The only other way I can handle this is to apply the csv and test IDs before any test code is executed. My question is: how would I know how to associate the tests with their test IDs? What would an elegant, minimal solution to this be?
Sorry for the long-winded question. I'll be watching closely to answer any questions if you need more explanation.
pytest is very good at doing all kinds of metaprogramming stuff for the tests. If I understand your question correctly, the code below will do the dynamic test marking with the pytestrail.case marker. In the project root dir, create a file named conftest.py and place this code in it:
import csv
from pytest_testrail.plugin import pytestrail

with open('website1.csv') as f:
    reader = csv.reader(f)
    next(reader)
    tests = [r for r in reader]

def pytest_collection_modifyitems(items):
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))
Now you don't need to mark the test with @pytestrail.case() at all; just write the rest of the code and pytest will take care of the marking:
def test_login():
    assert True
When pytest starts, the code above will read website1.csv and store the test IDs and names, just as you did in your code. Before the test run starts, the pytest_collection_modifyitems hook will execute, analyzing the collected tests: if a test has the same name as one in the csv file, pytest will add the pytestrail.case marker with the corresponding test ID to it.
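Since you have one csv per website, you could also make the file name configurable instead of hardcoding website1.csv. A sketch using a custom command-line option (the option name --site-csv is made up for illustration, everything else follows the conftest.py above):

import csv
from pytest_testrail.plugin import pytestrail

def pytest_addoption(parser):
    # Hypothetical option selecting which website's csv to use
    parser.addoption("--site-csv", default="website1.csv",
                     help="csv file mapping test names to TestRail case IDs")

def pytest_collection_modifyitems(config, items):
    with open(config.getoption("--site-csv")) as f:
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))

You would then run, for example, pytest --site-csv website2.csv for the second site.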
I believe the reason this isn't working as you would expect has to do with how Python reads and executes files. When Python starts executing, it reads in the linked Python file(s) and executes each line one by one, in turn. For things at the 'root' indentation level (function/class definitions, decorators, variable assignments, etc.) this means they get run exactly once, as they are loaded in, and never again. In your case, the Python interpreter reads in the pytest-testrail decorator, then the pytest decorator, and finally the function definition, executing each one once, ever.
(Side note, this is why you should never use mutable objects as function argument defaults: Common Gotchas)
Given that you want to first deduce the current test name, then associate that with a test case ID, and finally use that ID with the decorator, I'm not sure that is possible with pytest-testrail's current functionality. At least, not possible without some esoteric and difficult to debug/maintain hack using descriptors or the like.
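To see the timing problem in isolation, here is a tiny sketch (all names made up) showing that decorator arguments are evaluated when the function definition is read, not when the test body runs:

def tag(value):
    # A stand-in for pytestrail.case: its argument is evaluated at import time
    print("decorating with", value)
    def wrap(func):
        return func
    return wrap

test_id = None  # whatever runs inside the test later happens too late to change this

@tag(test_id)        # prints "decorating with None" as soon as the module is imported
def test_login():
    pass             # by the time this body runs, the decorator has long been applied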
I think you realistically have one option: use a different TestRail client and update your pytest structure to use the new client. Two python clients I can recommend are testrail-python and TRAW (TestRail Api Wrapper)(*)
It will take more work on your part to create the fixtures for starting a run, updating results, and closing the run, but I think in the end you will have a more portable suite of tests and better results reporting.
(*) full disclosure: I am the creator/maintainer of TRAW, and also made significant contributions to testrail-python
This is my test class, in mymodule.foo:
class SomeTestClass(TestCase):
    @classmethod
    def setUpClass(cls):
        # Do the setup for my tests

    def test_Something(self):
        # Test something

    def test_AnotherThing(self):
        # Test another thing

    def test_DifferentStuff(self):
        # Test another thing
I'm running the tests from Python with the following lines:
tests_to_run = ['mymodule.foo:test_AnotherThing', 'mymodule.foo:test_DifferentStuff']
result = nose.run(defaultTest=tests_to_run)
(This is obviously a bit more complicated and there's some logic to pick what tests I want to run)
Nose will run just the selected tests, as expected, but the setUpClass will run once for every test in tests_to_run. Is there any way to avoid this?
What I'm trying to achieve is to be able to run some dynamic set of tests while using nose in a Python script (not from the command line)
As @jonrsharpe mentioned, setUpModule is what I was after: it runs just once for the whole module where my tests reside.
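For reference, a minimal sketch of what that looks like in the module that holds the tests (the prints are placeholders for the real setup and teardown):

# mymodule/foo.py
from unittest import TestCase

def setUpModule():
    # Runs exactly once, before any test in this module
    print("expensive shared setup")

def tearDownModule():
    # Runs once after the last test in this module
    print("shared teardown")

class SomeTestClass(TestCase):
    def test_AnotherThing(self):
        pass

    def test_DifferentStuff(self):
        pass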
Say I want to grade some student python code using tests, something like (this is pseudo-code I wish I could write):
code = __import__("student_code")  # Import the code to be tested
grade = 100
for test in all_tests():           # Loop over the tests that were gathered
    good = perform(test, code)     # Perform the test individually on the code
    if not good:                   # Do something if the code gives the wrong result
        grade -= 1
For that, I would like to use pytest (easy to write good tests), but there are many things I don't know how to do:
how to run tests on external code? (here the code imported from student's code)
how to list all the tests available? (here all_tests())
how to run them individually on code? (here perform(test, code))
I couldn't find anything related to this use case (pytest.main() does not seem to do the trick anyhow...)
I hope you see my point, cheers!
EDIT
I finally found how to perform my 1st point (apply tests on external code). In the repository where you want to perform the tests, generate a conftest.py file with:
import imp  # standard library module for importing by filename
import pytest

def pytest_addoption(parser):
    """Add a custom command-line option to py.test."""
    parser.addoption("--module", help="Code file to be tested.")

@pytest.fixture(scope='session')
def module(request):
    """Import the code specified with the command-line custom option '--module'."""
    codename = request.config.getoption("--module")
    # Import module (standard __import__ does not support import by filename)
    try:
        code = imp.load_source('code', codename)
    except Exception:
        print("ERROR while importing {!r}".format(codename))
        raise
    return code
Then, gather your tests in a tests.py file, using the module fixture:
def test_sample(module):
    assert module.add(1, 2) == 3
Finally, run the tests with py.test tests.py --module student.py.
I'm still working on points 2 and 3.
EDIT 2
I uploaded my (incomplete) take on this question:
https://gitlab.in2p3.fr/ycopin/pyTestExam
Help & contributions are welcome!
Very cool and interesting project. It's difficult to answer without knowing more.
Basically you should be able to do this by writing a custom plugin, probably something you could place in a conftest.py in a test or project folder, with your unittest subfolder and a subfolder for each student.
You would probably want to write two plugins:
One to allow weighting of tests (e.g. test_foo_10 and test_bar_5) and calculation of the final grade (e.g. 490/520) (teamcity-messages is an example that uses the same hooks)
Another to allow distribution of tests to separate processes (xdist is an example)
I know this is not a very complete answer, but I wanted to at least make the last point. Since there is a very high probability that students will use overlapping module names, those modules would clash in a pytest world where tests are collected first and then run in a single process that tries not to re-import modules with a common name.
Even if you attempt to control for that, you will eventually have a student manipulate the global namespace in a sloppy way that causes another student's code to fail. For that reason, you will need either a bash script that runs each student's file separately or a plugin that runs them in separate processes.
Make this use case a graded take-home exam and see what they come up with :-) ... naaah ... but you could ;-)
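Building on the separate-processes point above and on the --module option from the question's conftest.py, here is a rough sketch of a grading driver. The submissions/ and reports/ directories are made-up names, and it relies on pytest's standard --junitxml output for the per-student pass/fail counts:

import glob
import os
import subprocess
import xml.etree.ElementTree as ET

os.makedirs("reports", exist_ok=True)

for student_file in glob.glob("submissions/*.py"):  # hypothetical layout, one file per student
    student = os.path.splitext(os.path.basename(student_file))[0]
    report = os.path.join("reports", student + ".xml")
    # Run the shared test file against this student's code in a fresh process
    subprocess.run(["pytest", "tests.py", "--module", student_file,
                    "--junitxml=" + report])
    # Count results from the JUnit XML report pytest wrote
    suite = ET.parse(report).getroot()
    if suite.tag == "testsuites":  # newer pytest wraps the suite in <testsuites>
        suite = suite[0]
    total = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    print("{}: {}/{} tests passed".format(student, total - failed, total))

Running each submission in its own process also sidesteps the module-name clashes and global-namespace pollution mentioned above.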
I came up with something like this (where it is assumed that the sum function is the student's code):
import unittest

score = 0

class TestAndGrade(unittest.TestCase):

    def test_correctness(self):
        self.assertEqual(sum([2, 2]), 4)
        global score; score += 6  # Increase score

    def test_edge_cases(self):
        with self.assertRaises(Exception):
            sum([2, 'hello'])
        global score; score += 1  # Increase score

    # Print the score
    @classmethod
    def tearDownClass(cls):
        global score
        print('\n\n-------------')
        print('| Score: {} |'.format(score))
        print('-------------\n')

# Run the tests
unittest.main()
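In an actual grading run, the builtin sum would be replaced by the student's implementation, for example by loading their file the same way as the conftest.py earlier in this thread (the file name student.py is a placeholder):

import imp

# Load the student's submission by path; 'student.py' is a placeholder name
student = imp.load_source('student', 'student.py')

# Inside the test methods, call the student's function instead of the builtin:
#     self.assertEqual(student.sum([2, 2]), 4)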