pytest: custom mark with arguments - python

I would like to mark tests in the following fashion:
@pytest.mark.expectedruntime(100)
def test_function():
    blahblah()
And then run pytest with, for example, -m not expectedruntime>50 (or some other syntax)
So that only tests with an expected run time of 50 or less would be run, or tests without that mark.
Is there a way to do this with native pytest/with a plugin? If not, what would I need to do in order to accomplish this?
https://docs.pytest.org/en/latest/writing_plugins.html mentions a custom mark called "mark_with" which consumes arguments but doesn't mention how to actually use those arguments.

I hope this example helps: http://doc.pytest.org/en/latest/example/markers.html#custom-marker-and-command-line-option-to-control-test-runs
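Following the pattern from that linked example, here is a minimal conftest.py sketch. The option name --max-runtime is my own choice, not something pytest provides; the marker name matches the one in your question.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--max-runtime", action="store", type=int, default=None,
        help="only run tests whose expectedruntime marker argument is <= this value "
             "(tests without the marker always run)")

def pytest_configure(config):
    # register the custom marker so --strict-markers does not complain
    config.addinivalue_line(
        "markers", "expectedruntime(seconds): expected run time of the test")

def pytest_collection_modifyitems(config, items):
    limit = config.getoption("--max-runtime")
    if limit is None:
        return
    skip = pytest.mark.skip(reason=f"expected runtime exceeds {limit}")
    for item in items:
        marker = item.get_closest_marker("expectedruntime")
        if marker is not None and marker.args and marker.args[0] > limit:
            item.add_marker(skip)
Running pytest --max-runtime=50 would then skip the test marked with expectedruntime(100) while still running unmarked tests.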

Using different parameters for different tests in Pytest

I am working on embedded firmware testing using Python 3.9 and Pytest. We are working with multiple devices, and different tests run on different devices. It would be very nice to be able to reuse test fixtures for each device; however, I am running into difficulty parametrizing test fixtures.
Currently I have something like this:
@pytest.fixture(scope="function", params=["device1", "device2"])
def connect(request):
    jlink.connect(request.param)

@pytest.mark.device1
def test_device1(connect):
    # test code

@pytest.mark.device2
def test_device2(connect):
    # test code
The behavior I would like is that param "device1" is used for test_device1, and param "device2" is used for test_device2. But the default Pytest behavior is to use all params for all tests, and I am struggling to find a way around this. Is there a way to specify which params to use for certain markers?
I should also mention, I am an embedded C developer and have been learning Python as I've worked on this project, so my general Python/Pytest knowledge may be a bit lacking.
EDIT: I think I found a workaround, but I'm not super happy with it. I have separated the tests for each device into different folders, and in each folder, have a device_cfg.json file. The test fixture opens the cfg file to know which device to connect to.
EDIT2: This doesn't even work because of Pytest scoping...
@pytest.mark.parametrize: parametrizing test functions
The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a test function that implements checking that a certain input leads to an expected output:
# content of test_expectation.py
import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
https://docs.pytest.org/en/7.1.x/how-to/parametrize.html
Hope it'll help you
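Building on that, pytest's indirect parametrization lets each test supply its own parameter to the fixture, which is closer to what the question asks for. A minimal sketch, reusing the connect fixture and jlink device interface from the question:
import pytest

@pytest.fixture
def connect(request):
    # the device name now arrives per-test via indirect parametrization
    jlink.connect(request.param)

@pytest.mark.parametrize("connect", ["device1"], indirect=True)
def test_device1(connect):
    ...  # device1-specific test code

@pytest.mark.parametrize("connect", ["device2"], indirect=True)
def test_device2(connect):
    ...  # device2-specific test code
With indirect=True, the value "device1" is passed to the fixture as request.param instead of directly to the test, so test_device1 runs only against device1 and test_device2 only against device2.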

Why do I only get a function as a return value by using a fixture (from pytest) in a test script?

I want to write test functions for my code and decided to use pytest. I had a look into this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, written by me, so I made an example, which also creates the same problem, but does not rely on my other code.
@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert(example == 10)

test_value(example)
When I run my script with this toy example, the print returns a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example_chunks" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure, I am missing something important here, but searching google did not help me, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run the file with pytest file.py; pytest resolves fixtures automatically when it calls your test functions. In your example you are calling the test directly, so example is passed in as a plain function object (the fixture function itself) instead of its return value, which is why the print shows <function example ...> and the assertion fails.
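For reference, a working version of the toy example looks like this (the import is added here; pytest supplies the fixture value itself when you run pytest file.py):
import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):
    print(example)        # pytest injects the fixture's return value here
    assert example == 10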

Python unittest, running only the tests that failed

I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
add test.
run tests (one fails because I haven't written the code to make it pass)
implement the behaviour
run only the test that failed last time
fix the silly error I made when implementing the code
run only the failing test, which passes this time
run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: replace every "test_" with "_test_", then find the one test that failed and remove its leading underscore.
Just run the tests with the --last-failed option (this requires pytest, which can also run unittest-based tests).
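For completeness, assuming the tests live in test_whatever.py as above, the workflow would look roughly like this (pytest collects and runs unittest.TestCase tests without changes to the test file):
pip install pytest
pytest test_whatever.py                 # run everything once; failures are recorded
pytest --last-failed test_whatever.py   # re-run only the tests that failed (--lf for short)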

Can't get Nose to honor the attributes I set on my tests

I'm using Django 1.7 with django-nose 1.4 and nose 1.3.6.
According to the documentation, I should be able to select tests to be run by using attributes. I have a test set like this:
from nose.plugins.attrib import attr
from django_webtest import TransactionWebTest
@attr(isolation="menu")
class MenuTestCase(TransactionWebTest):
    def test_home(self):
        pass
When I try to run my tests with:
./manage.py test -a isolation
Nose eliminates all tests from the run. In other words, it does not run any test. Note that when I do not use -a, all the tests run fine. I've also tried:
-a=isolation
-a isolation=menu
-a=isolation=menu
-a '!isolation'
The last one should select almost all of my test suite since the isolation attribute is used only on one class but it does not select anything! I'm starting to think I just don't understand how the whole attributes system works.
It is unclear to me what causes the problem. It probably has to do with how Django passes the command line arguments to django-nose, which then passes them to nose. At any rate, using the long form of the command line arguments solves the problem:
$ ./manage.py test --attr=isolation
and similarly:
--attr=isolation=menu
--attr='!isolation' (with the single quotes to prevent the shell from interpreting !)
--eval-attr=isolation
--eval-attr='isolation=="menu"' (the single quotes prevent the shell from removing the double quotes)
etc...
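For what it's worth, nose's attrib plugin also accepts the decorator on individual test methods rather than the whole class; a minimal sketch, reusing the class and test names from the question:
from nose.plugins.attrib import attr
from django_webtest import TransactionWebTest

class MenuTestCase(TransactionWebTest):
    @attr(isolation="menu")
    def test_home(self):
        pass
./manage.py test --attr=isolation=menu would then select only test_home.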

How to list available tests with python?

How to just list all discovered tests?
I found this command:
python3.4 -m unittest discover -s .
But it's not exactly what I want, because the above command executes the tests. Imagine a project with a lot of tests where execution takes a few minutes; this forces me to wait until the tests are finished.
What I want is something like this (the above command's output):
test_choice (test.TestSequenceFunctions) ... ok
test_sample (test.TestSequenceFunctions) ... ok
test_shuffle (test.TestSequenceFunctions) ... ok
or even better, something more like this (after editing above):
test.TestSequenceFunctions.test_choice
test.TestSequenceFunctions.test_sample
test.TestSequenceFunctions.test_shuffle
but without executing anything, only printing the test "paths" for copy-and-paste purposes.
The command-line discover command is implemented using unittest.TestLoader. Here's a somewhat elegant solution:
import unittest

def print_suite(suite):
    if hasattr(suite, '__iter__'):
        for x in suite:
            print_suite(x)
    else:
        print(suite)

print_suite(unittest.defaultTestLoader.discover('.'))
Running example:
In [5]: print_suite(unittest.defaultTestLoader.discover('.'))
test_accounts (tests.TestAccounts)
test_counters (tests.TestAccounts)
# More of this ...
test_full (tests.TestImages)
This works because TestLoader.discover returns TestSuite objects, which implement the __iter__ method and are therefore iterable.
You could do something like:
from your_tests import TestSequenceFunctions
print('\n'.join(name for name in dir(TestSequenceFunctions) if name.startswith('test_')))
I'm not sure if there is an exposed method for this via unittest.main.
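If you want the dotted form from the question (for copy and paste), one option is to flatten the discovered suite and print each test's id(); a small sketch along the lines of the recursive answer above:
import unittest

def iter_tests(suite):
    # recursively flatten nested TestSuite objects into individual tests
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

for test in iter_tests(unittest.defaultTestLoader.discover('.')):
    print(test.id())   # e.g. test.TestSequenceFunctions.test_choice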
