Can tests with pytest fixtures be run interactively?

I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()

        def fin():
            shutil.rmtree(my_temp_dir)
        request.addfinalizer(fin)
        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed in. So, how does one invoke a fixture with a request object injected?

Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:
This is the best method because it gives you access to the debugger if your test fails.
In the IPython shell, use:
ipython> run -m pytest prj/
This will run all your tests in the prj/tests directory.
This gives you access to the debugger and lets you set breakpoints if you have an
import ipdb; ipdb.set_trace()
in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:
Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use
ipython> !pytest --pdb
and a test fails, pytest will drop you into the debugger (in a subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test_functions/TestClasses in IPython via the usual test-selection syntax (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):
ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
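Alternatively, pytest exposes a documented programmatic entry point, pytest.main(), which accepts the same arguments as the command line and can be called from a plain Python or IPython session:

import pytest

# Same selection syntax as the CLI; returns pytest's exit code.
pytest.main(["prj/tests/test_module1.py::TestClass1::test_function1", "--pdb"])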

You can bypass the pytest.fixture decorator and call the wrapped fixture function directly:
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
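A minimal sketch of that idea against the TestThing example from the question. The FakeRequest class is an assumption, not a real pytest object: it only provides the addfinalizer() method that this particular fixture uses.

class FakeRequest:
    # Stand-in for pytest's request object (assumption: only addfinalizer is needed).
    def __init__(self):
        self._finalizers = []

    def addfinalizer(self, fin):
        self._finalizers.append(fin)

    def finalize(self):
        for fin in reversed(self._finalizers):
            fin()

tt = TestThing()
request = FakeRequest()
# __pytest_wrapped__.obj is the undecorated function, so pass self explicitly.
temp_dir = tt.temp_dir.__pytest_wrapped__.obj(tt, request=request)
tt.test_something(temp_dir)
request.finalize()  # runs the registered shutil.rmtree cleanup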

The best method I have, which is far from ideal, is to %run the test file, manually assemble the fixtures, and then call the tests directly. The problem with this is tracking down the modules where the default fixtures are defined, and then calling them in their order of dependencies.
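For the TestThing example from the question, manual assembly might look like this: a sketch that simply replays what the fixture body and its finalizer would do.

import shutil
import tempfile

tt = TestThing()
my_temp_dir = tempfile.mkdtemp()   # what the temp_dir fixture would return
try:
    tt.test_something(my_temp_dir)
finally:
    shutil.rmtree(my_temp_dir)     # what the registered finalizer would do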

You can use two cells for this:
First:
def test_something():
    assert True
Second:
from tempfile import mktemp
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write the last cell's input (_i is an IPython variable)
!pytest $test_file
You can also do this in one cell, but then you won't have code highlighting:
from tempfile import mktemp
test_code = """
def test_something():
    assert True
"""
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
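An alternative sketch uses IPython's %%writefile cell magic, which avoids the mktemp bookkeeping (the file name test_scratch.py is an arbitrary choice):
%%writefile test_scratch.py
def test_something():
    assert True
and then, in the next cell:
!pytest test_scratch.py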

The simple answer is that you don't want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output. But really it's a command-line tool, and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture (http://pytest.org/latest/tmpdir.html), because it seems like you're re-inventing it.
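A sketch of the question's test rewritten on top of tmpdir: pytest injects a fresh temporary directory per test and takes care of cleaning up old ones itself.

import os

class TestThing:
    def test_something(self, tmpdir):
        # tmpdir is a py.path.local object pointing at a unique directory
        with open(os.path.join(str(tmpdir), 'test.txt'), 'w') as f:
            f.write('test')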

Related

Why do I only get a function as a return value by using a fixture (from pytest) in a test script?

I want to write test functions for my code and decided to use pytest. I had a look into this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, written by me, so I made an example, which also creates the same problem, but does not rely on my other code.
import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert(example == 10)

test_value(example)
When I run my script with this toy example, the print returns a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example_chunks" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run your script file with pytest file.py.
Fixtures are resolved automatically by pytest; in your example you call the code directly, so the fixture is just a plain function object.
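Concretely, the corrected script (the question's example minus the direct call) looks like this:

import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10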

How to run test case marked `@unittest.skip` in Python?

Assuming the following test suite:
# test_module.py
import unittest

class Tests(unittest.TestCase):
    @unittest.skip
    def test_1(self):
        print("This should run only if explicitly asked to but not by default")
    # assume many other test cases and methods with or without the skip marker
When invoking the unittest library via python -m unittest, are there any arguments I can pass to make it actually run, and not skip, Tests.test_1, without modifying the test code and without running any other skipped tests?
python -m unittest test_module.Tests.test_1 correctly selects this as the only test to run, but it still skips it.
If there is no way to do it without modifying the test code, what is the most idiomatic change I can make to conditionally undo the @unittest.skip and run one specific test case?
In all cases, I still want python -m unittest discover (or any other invocation that doesn't explicitly turn on the test) to skip the test.
If you want to skip some expensive tests, you can use a conditional skip together with a custom environment variable (skipIf takes the condition plus a reason string):
import os

@unittest.skipIf(int(os.getenv('TEST_LEVEL', 0)) < 1, 'expensive test')
def expensive_test(self):
    ...
Then you can include this test by specifying the corresponding environment variable:
TEST_LEVEL=1 python -m unittest discover
TEST_LEVEL=1 python -m unittest test_module.Tests.test_1
If you want to skip a test because you expect it to fail, you can use the dedicated expectedFailure decorator.
By the way, pytest supports custom markers (e.g. @pytest.mark.slow) for tagging slow tests; a sketch follows.
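A sketch of that approach, assuming a custom slow marker registered in pytest.ini; the marker name is a convention, and deselecting marked tests requires an explicit -m filter:

import pytest

@pytest.mark.slow
def test_expensive():
    ...

Run everything except the slow tests with pytest -m "not slow".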

How to execute a python test using httpretty/sure

I'm new to python tests so don't hesitate to provide any obvious information.
Basically I want to do some RESTful tests using python, and found the httpretty and sure libraries which look really nice.
I have a python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")

    response = requests.get("http://localhost:8090/test.json")
    expect(response.json()).to.equal({"status": "ok"})
Which is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty
My question is; how do I simply run this test to see it either passing or failing? I tried just executing it using ./pythonFile but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
Python 3 includes similar functionality in the unittest module, which is also available as a backport for Python 2 called unittest2. You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")

        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now provide output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your method?
Or does the decorator mean you don't have to explicitly call your method?
If I call your method, it seems like it works. If I change the value on one side of the expect, it complains properly about the values not matching.
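If that means just invoking the function once, without any test framework, a minimal sketch would be (a failed expectation raises an AssertionError):

if __name__ == '__main__':
    RestTest()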

How to list available tests with python?

How to just list all discovered tests?
I found this command:
python3.4 -m unittest discover -s .
But it's not exactly what I want, because the above command executes the tests. Imagine a project with a lot of tests, where execution takes a few minutes; this forces me to wait until they finish.
What I want is something like this (above command's output)
test_choice (test.TestSequenceFunctions) ... ok
test_sample (test.TestSequenceFunctions) ... ok
test_shuffle (test.TestSequenceFunctions) ... ok
or even better, something more like this (after editing above):
test.TestSequenceFunctions.test_choice
test.TestSequenceFunctions.test_sample
test.TestSequenceFunctions.test_shuffle
but without execution, only printing the test "paths" for copy & paste purposes.
The command-line discover command is implemented using unittest.TestLoader. Here's a somewhat elegant solution:
import unittest

def print_suite(suite):
    if hasattr(suite, '__iter__'):
        for x in suite:
            print_suite(x)
    else:
        print(suite)

print_suite(unittest.defaultTestLoader.discover('.'))
Running example:
In [5]: print_suite(unittest.defaultTestLoader.discover('.'))
test_accounts (tests.TestAccounts)
test_counters (tests.TestAccounts)
# More of this ...
test_full (tests.TestImages)
This works because TestLoader.discover returns TestSuite objects, which implement the __iter__ method and are therefore iterable.
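To get the dotted copy & paste form the question asks for, note that each TestCase's id() method already returns module.Class.method, so a small variant of the same walk works:

import unittest

def print_ids(suite):
    if hasattr(suite, '__iter__'):
        for test in suite:
            print_ids(test)
    else:
        print(suite.id())  # e.g. test.TestSequenceFunctions.test_choice

print_ids(unittest.defaultTestLoader.discover('.'))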
You could do something like:
from your_tests import TestSequenceFunctions

print('\n'.join(name for name in dir(TestSequenceFunctions) if name.startswith('test_')))
(dir() returns attribute names as strings, so there is no __name__ to look up.)
I'm not sure if there is an exposed method for this via unittest.main.

Running single test function in Nose NOT associated with unittest subclass

nose discovers tests beginning with test_, as well as subclasses of unittest.TestCase.
If one wishes to run a single TestCase test, e.g.:
# file tests.py
import unittest

class T(unittest.TestCase):
    def test_something(self):
        1/0
This can be done on the command line with:
nosetests tests:T.test_something
I sometimes prefer to write a simple function and skip all the unittest boilerplate:
def test_something_else():
    assert False
In that case, the test will still be run by nose when it runs all my tests. But how can I tell nose to run only that test, using the (Unix) command line?
That would be:
nosetests tests:test_something_else
An additional tip is to use attributes:
from nose.plugins.attrib import attr

@attr('now')
def test_something_else():
    pass
To run all tests tagged with the attribute, execute:
nosetests -a now
Conversely, to avoid running those tests (quote the ! so the shell does not expand it):
nosetests -a '!now'
