pytest: Using dependency injection with decorators - python

At work we use a decorator @rollback on selected test functions, which rolls back any db changes made during that test.
I've recently started using pytest's dependency injection for a few use cases, both with @pytest.mark.parametrize(...) and the pytest_funcarg__XXX hook. Unfortunately, this clashes with our decorated test functions.
How can I make this work?
My first idea was using a custom marker, say @pytest.mark.rollback, and doing something like:
def rollback(meth):
    """Original rollback function"""
    ...

def pytest_runtest_setup(item):
    if not isinstance(item, pytest.Function):
        return
    if hasattr(item.obj, 'rollback'):
        item.obj = rollback(item.obj)
Would an approach like this actually work?

Something like this should work fine, yes. It seems like you are using global state to manage your database here, right? You might want to check out the docs for the upcoming 2.3 release, which also has a "transact" example further down the page:
http://pytest.org/dev/fixture.html
The release is due any time now, and you can install the candidate with "pip install -i http://pypi.testrun.org -U pytest".
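For reference, here is a minimal sketch in the spirit of that "transact" example, assuming a hypothetical db fixture that exposes begin() and rollback() methods:
import pytest

@pytest.fixture(autouse=True)
def transact(request, db):
    # `db` is a hypothetical fixture wrapping your database connection.
    db.begin()
    # Roll everything back when the test finishes, whether it passed or failed.
    request.addfinalizer(db.rollback)
Because the fixture is autouse, every test in its scope runs inside a transaction without needing any decorator at all.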

Related

Why do I only get a function as a return value by using a fixture (from pytest) in a test script?

I want to write test functions for my code and decided to use pytest. I had a look at this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, written by me, so I made an example, which also creates the same problem, but does not rely on my other code.
import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10

test_value(example)
When I run my script with this toy example, the print returns a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example_chunks" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help me, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run your script file with pytest file.py. Fixtures are resolved automatically by pytest when test functions request them as parameters. In your example you call the test directly, so the fixture is passed in as a plain function object instead of its return value.
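Put together, the corrected file (the question's example with the direct call removed) looks like this and is run with pytest file.py:
import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):
    # pytest injects the fixture's return value, not the function object.
    print(example)
    assert example == 10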

pytest: custom mark with arguments

I would like to mark tests in the following fashion:
@pytest.mark.expectedruntime(100)
def test_function():
    blahblah()
And then run pytest with, for example, -m "not expectedruntime>50" (or some other syntax), so that only tests with an expected run time of 50 or less would be run, plus any tests without that mark.
Is there a way to do this with native pytest/with a plugin? If not, what would I need to do in order to accomplish this?
https://docs.pytest.org/en/latest/writing_plugins.html mentions a custom mark called "mark_with" which consumes arguments, but doesn't explain how to actually use those arguments.
I hope this example will help: http://doc.pytest.org/en/latest/example/markers.html#custom-marker-and-command-line-option-to-control-test-runs
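Note that -m expressions cannot compare marker arguments, so the filtering has to happen in a hook. Here is a rough sketch, not taken from the linked docs, wiring a hypothetical --max-runtime command-line option to the mark's argument (item.get_closest_marker requires pytest 3.6+):
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--max-runtime", action="store", type=int, default=None,
        help="only run tests whose expectedruntime mark is at most this value")

def pytest_collection_modifyitems(config, items):
    limit = config.getoption("--max-runtime")
    if limit is None:
        return
    for item in items:
        marker = item.get_closest_marker("expectedruntime")
        # Unmarked tests always run; marked tests run only if fast enough.
        if marker is not None and marker.args[0] > limit:
            item.add_marker(pytest.mark.skip(
                reason="expected runtime %d exceeds --max-runtime %d"
                       % (marker.args[0], limit)))
Invoked as pytest --max-runtime=50, this skips every test marked with an expected runtime above 50. You would also want to register expectedruntime under markers in pytest.ini to silence unknown-mark warnings.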

py.test 2.3.5 does not run finalizer after fixture failure

I was trying py.test for its claimed better support than unittest for module and session fixtures, but I stumbled on what is, at least for me, bizarre behavior.
Consider the following code (don't tell me it's dumb, I know it; it's just a quick and dirty hack to replicate the behavior). I'm running Python 2.7.5 x86 on Windows 7.
import os
import shutil
import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']
count = 0

@pytest.fixture(scope='module')
def work_path(request):
    global count
    count += 1
    print('test: ' + str(count))
    test_work_path = os.path.join(tmp, test_work_dir)

    def cleanup():
        print('cleanup: ' + str(count))
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)
    request.addfinalizer(cleanup)
    os.makedirs(test_work_path)
    return test_work_path

def test_1(work_path):
    assert os.path.isdir(work_path)

def test_2(work_path):
    assert os.path.isdir(work_path)

def test_3(work_path):
    assert os.path.isdir(work_path)

if __name__ == "__main__":
    pytest.main(['-s', '-v', __file__])
If test_work_dir does not exist, then I obtain the expected behavior:
platform win32 -- Python 2.7.5 -- pytest-2.3.5 -- C:\Programs\Python\27-envs\common\Scripts\python.exe
collecting ... collected 4 items
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
cleanup: 1
PASSED
py_test.py:38: test_2 PASSED
py_test.py:42: test_3 PASSEDcleanup: 1
The fixture is called once for the module and cleanup is called once at the end of the tests.
Then, if test_work_dir exists, I would expect something similar to unittest: the fixture is called once, it fails with OSError, tests that need it are not run, cleanup is called once, and world peace is established again.
But... here's what I see:
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
ERROR
py_test.py:38: test_2 test: 2
ERROR
py_test.py:42: test_3 test: 3
ERROR
Despite the failure of the fixture, all the tests are run, the fixture that is supposed to be scope='module' is called once for each test, and the finalizer is never called!
I know that exceptions in fixtures are not good policy, but the real fixtures are complex, and I'd rather avoid filling them with try blocks if I can count on the execution of each finalizer set up to the point of failure. I don't want to go hunting for test artifacts after a failure.
Moreover, trying to run tests when not all of the fixtures they need are in place makes no sense and can make them at best erratic.
Is this the intended behavior of py.test in case of failure in a fixture?
Thanks, Gabriele
Three issues here:
you should register the finalizer after you have performed the action that you want undone. So first call makedirs() and then register the finalizer (see the sketch below). That's a general issue with fixtures, because teardown code can usually only run for something that was successfully created
pytest-2.3.5 has a bug in that it will not call finalizers if the fixture function fails. I've just fixed it, and you can install the 2.4.0.dev7 (or higher) version with pip install -i http://pypi.testrun.org -U pytest. It ensures the fixture finalizers are called even if the fixture function partially fails. It's actually a bit surprising this hasn't popped up earlier, but I guess people, including me, usually just go ahead and fix the fixtures instead of diving into what's happening specifically. So thanks for posting here!
if a module-scoped fixture function fails, the next test needing that fixture will still trigger execution of the fixture function again, as it might have been an intermittent failure. It stands to reason that pytest should memorize the failure for the given scope and not retry execution. If you think so, please open an issue, linking to this stackoverflow discussion.
thanks, holger
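To illustrate the first point, here is a sketch of the question's fixture with the ordering fixed (reusing tmp and test_work_dir from the question):
@pytest.fixture(scope='module')
def work_path(request):
    test_work_path = os.path.join(tmp, test_work_dir)
    # Create the resource first; this may raise OSError if it already exists.
    os.makedirs(test_work_path)

    # Register the finalizer only once the resource actually exists.
    def cleanup():
        shutil.rmtree(test_work_path)
    request.addfinalizer(cleanup)
    return test_work_path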

How can I see normal print output created during pytest run?

Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables per-test capturing entirely (by default, captured output is shown only for failing tests).
-s is equivalent to --capture=no.
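If you always want capturing disabled, you can also put the flag in your pytest.ini via the standard addopts key:
[pytest]
addopts = -s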
pytest captures the stdout of individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the -r option:
pytest -rP
shows the captured output of passed tests.
pytest -rx
shows the captured output of failed tests (default behaviour).
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will then be printed to the console when the test is run.
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
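(For what it's worth, teeing really is trivial in plain Python; a minimal sketch of a file-like object that duplicates writes to several streams, independent of py.test:)
import sys

class Tee(object):
    # A minimal file-like object duplicating writes to several streams.
    def __init__(self, *streams):
        self.streams = streams

    def write(self, text):
        for stream in self.streams:
            stream.write(text)

    def flush(self):
        for stream in self.streams:
            stream.flush()

# e.g., duplicate everything printed to stdout into a log file as well:
# sys.stdout = Tee(sys.__stdout__, open('test.log', 'w'))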
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    ...

    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports
        # can technically have side effects. Tragicomically, that is the case
        # here. Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest"
        # package. The former *MUST* always be imported before the latter.
        # Failing to do so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old method with the new.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to the pytest documentation, version 3 of pytest can temporarily disable capture within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on the CLI:
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest using the -h option; there are quite a few interesting options listed there. But for this particular case, -s (a shortcut for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to turn on logging output in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
If you are using PyCharm IDE, then you can run that individual test or all tests using Run toolbar. The Run tool window displays output generated by your application and you can see all the print statements in there as part of test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # The two lines above will print the text even if the test passes.
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.

How do I run tests in a product being developed in Plone 4?

I am developing a product for Plone 4, inside the zeocluster/src/... directory of an installation, and I have an automated test. Unfortunately, when I run 'bin/client1 shell' and then (path to Plone's Python)/bin/python setup.py test, it fails. The error is:
  File "buildout-cache/eggs/Products.PloneTestCase-0.9.12-py2.6.egg/Products/PloneTestCase/PloneTestCase.py", line 109, in getPortal
    return getattr(self.app, portal_name)
AttributeError: plone
What is the correct way to run automated tests in Plone 4?
In setup.py:
...
test_suite = "nose.collector"
...
The failing test:
import unittest
from Products.PloneTestCase import PloneTestCase as ptc

ptc.setupPloneSite()

class NullTest(ptc.PloneTestCase):
    def testTest(self):
        pass

def test_suite():
    return unittest.TestSuite([
        unittest.makeSuite(NullTest)
    ])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
Best is to edit your buildout.cfg and add a part that creates a 'bin/test' script. Something like this:
[test]
recipe = zc.recipe.testrunner
# Note that only tests for packages that are explicitly named (instead
# of 'implicitly' added to the instance as dependency) can be found.
eggs =
# Use the name of the plone.recipe.zope2instance part here, might be zeoclient instead:
${instance:eggs}
defaults = ['--exit-with-status', '--auto-color', '--auto-progress']
Do not forget to add 'test' to the 'parts' in the main 'buildout' section of your buildout.cfg. Run bin/buildout and you should now have a bin/test script. See the PyPI page of this recipe for more options and explanation.
Now running 'bin/test' should run all tests for all eggs explicitly named in the instance part. This may run far too many tests. Use 'bin/test -s your.package' to run only the tests for your.package, provided your.package is part of the eggs in the instance.
Note that instead of the 'pass' that you now have in the test, it is better to add a test that you know for certain will fail, like 'self.assertEqual(True, False)'. Then it is easier to see that your test indeed has been run and that it fails as expected.
When I have a simple buildout for testing one specific package that I am developing, I usually extend one of the configs in the plonetest buildout, like this one for Plone 4; you can have a look at that for inspiration.
You need to use zope.testrunner and zope.testing to run your tests. Plone tests cannot be run via nose, and we don't support the 'test_suite' argument to setup.py as invented by setuptools.
The other answers explain how to get a test runner script set up.
ptc.setupPloneSite() registers a deferred function that is actually run when the zope.testrunner layer is set up. I'm guessing you're not using zope.testrunner, so the layer isn't being set up and the Plone site is never created, hence the AttributeError when it subsequently tries to get the portal object.
