Using pytest.fixture to opt out of function - python

I have a function that I don't want to run every time I run tests in my Flask-RESTful API. This is an example of the setup:
class function(Resource):
    def post(self):
        print 'test'
        do_action()
        return {'success': True}
In my test I want to run this function, but ignore do_action(). How would I make this happen using pytest?

This seems like a good opportunity to mark the tests
@pytest.mark.foo_test
class function(Resource):
    def post(self):
        print 'test'
        do_action()
        return {'success': True}
Then if you call with
py.test -v -m foo_test
It will run only those tests marked "foo_test"
If you call with
py.test -v -m "not foo_test"
It will run all tests not marked "foo_test"
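In case it helps, the marker normally sits on the test itself rather than on the resource; a minimal sketch (the test name, route, and client fixture are illustrative, not from the original post):

import pytest

@pytest.mark.foo_test
def test_post_success(client):  # hypothetical Flask test client fixture
    response = client.post('/function')  # hypothetical route for the Resource
    assert response.status_code == 200

Selecting with -m foo_test (or deselecting with -m "not foo_test") then works on exactly these marked tests.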

You can mock do_action in your test:
def test_post(resource, mocker):
    m = mocker.patch.object(module_with_do_action, 'do_action')
    resource.post()
    assert m.call_count == 1
So the actual function will not be called in this test, with the added benefit
that you can check whether the post implementation is actually calling the function.
This requires pytest-mock to
be installed (shameless plug).
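The resource fixture used above is not shown in the answer; a minimal hypothetical sketch of what it might look like (the module and class names are assumptions):

import pytest

from module_with_do_action import function  # hypothetical module holding the Resource

@pytest.fixture
def resource():
    # Build the Flask-RESTful resource directly so its post() method can be called in isolation.
    return function()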


Mocking Azure BlobServiceClient in Python

I am trying to write a unit test that will test the azure.storage.blob.BlobServiceClient class and its methods. Below is my code.
A fixture in conftest.py:
@pytest.fixture
def mock_BlobServiceClient(mocker):
    azure_ContainerClient = mocker.patch("azure.storage.blob.ContainerClient", mocker.MagicMock())
    azure_BlobServiceClient = mocker.patch("azure_module.BlobServiceClient", mocker.MagicMock())
    azure_BlobServiceClient.from_connection_string.return_value
    azure_BlobServiceClient.get_container_client.return_value = azure_ContainerClient
    azure_ContainerClient.list_blob_names.return_value = "test"
    azure_ContainerClient.get_container_client.list_blobs.return_value = ["test"]
    yield azure_BlobServiceClient
Contents of the test file
from azure_module import AzureBlob

def test_AzureBlob(mock_BlobServiceClient):
    azure_blob = AzureBlob()
    # This assertion passes
    mock_BlobServiceClient.from_connection_string.assert_called_once_with("testconnectionstring")
    # This assertion fails
    mock_BlobServiceClient.get_container_client.assert_called()
Contents of the azure_module.py
from azure.storage.blob import BlobServiceClient
import os

class AzureBlob:
    def __init__(self) -> None:
        """Initialize the azure blob"""
        self.azure_blob_obj = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
        self.azure_container = self.azure_blob_obj.get_container_client(os.environ["AZURE_CONTAINER_NAME"])
My test fails when I execute it, with the below error message:
> mock_BlobServiceClient.get_container_client.assert_called()
E AssertionError: Expected 'get_container_client' to have been called.
I am not sure why it says that the get_container_client wasn't called when it was called during the AzureBlob's initialization.
Any help is very much appreciated.
Update 1
I believe this is a bug in unittest's MagicMock itself. Per
Michael Delgado's suggestion, I pared the code down to a bare minimum to identify the issue, and I concluded that MagicMock was causing the problem. Below are my findings:
conftest.py
@pytest.fixture
def mock_Blob(mocker):
    yield mocker.patch("module.BlobServiceClient")
test_azureblob.py
def test_AzureBlob(mock_Blob):
    azure_blob = AzureBlob()
    print(mock_Blob)
    print(mock_Blob.mock_calls)
    print(mock_Blob.from_connection_string.mock_calls)
    print(mock_Blob.from_connection_string.get_container_client.mock_calls)
    assert False  # <- Intentional fail
After running the test, I got the following results.
$ pytest -vv
.
.
.
------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------
<MagicMock name='BlobServiceClient' id='140704187870944'>
[call.from_connection_string('AZURE_STORAGE_CONNECTION_STRING'),
call.from_connection_string().get_container_client('AZURE_CONTAINER_NAME')]
[call('AZURE_STORAGE_CONNECTION_STRING'),
call().get_container_client('AZURE_CONTAINER_NAME')]
[]
.
.
.
The prints clearly show that get_container_client was called, but the mocked method did not register the call at its own level. That led me to conclude that MagicMock has a bug, which I will report to the developers for further investigation.
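A note on the captured mock_calls above: they show get_container_client being invoked on the return value of from_connection_string, not on the top-level mock itself, so an assertion phrased against that return value would match. A minimal sketch reusing the fixture from the question (an observation, not a confirmed fix from the thread):

def test_AzureBlob(mock_BlobServiceClient):
    azure_blob = AzureBlob()
    # get_container_client is called on the object returned by from_connection_string,
    # so assert on the chained return_value rather than on the top-level mock.
    mock_BlobServiceClient.from_connection_string.return_value.get_container_client.assert_called()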

Python nose processes parameter and dynamically generated tests

I have the minimal working example below for dynamically generating test cases and running them using nose.
import logging
import unittest

import nose

class RegressionTests(unittest.TestCase):
    """Our base class for dynamically created test cases."""

    def regression(self, input):
        """Method that runs the test."""
        self.assertEqual(1, 1)

def create(input):
    """Called by create_all below to create each test method."""
    def do_test(self):
        self.regression(input)
    return do_test

def create_all():
    """Create all of the unit test cases dynamically"""
    logging.info('Start creating all unit tests.')
    inputs = ['A', 'B', 'C']
    for input in inputs:
        testable_name = 'test_{0}'.format(input)
        testable = create(input)
        testable.__name__ = testable_name
        class_name = 'Test_{0}'.format(input)
        globals()[class_name] = type(class_name, (RegressionTests,), {testable_name: testable})
        logging.debug('Created test case %s with test method %s', class_name, testable_name)
    logging.info('Finished creating all unit tests.')

if __name__ == '__main__':
    # Create all the test cases dynamically
    create_all()
    # Execute the tests
    logging.info('Start running tests.')
    nose.runmodule(name='__main__')
    logging.info('Finished running tests.')
When I run the tests using python nose_mwe.py --nocapture --verbosity=2, they run fine and I get the output:
test_A (__main__.Test_A) ... ok
test_B (__main__.Test_B) ... ok
test_C (__main__.Test_C) ... ok
However, when I try to use the processes command line parameter to make the tests run in parallel, e.g. python nose_mwe.py --processes=3 --nocapture --verbosity=2, I get the following errors.
Failure: ValueError (No such test Test_A.test_A) ... ERROR
Failure: ValueError (No such test Test_B.test_B) ... ERROR
Failure: ValueError (No such test Test_C.test_C) ... ERROR
Is there something simple that I am missing here to allow the dynamically generated tests to run in parallel?
As far as I can tell you just need to make sure that create_all() is run in every test process. Just moving it out of the __main__ block works for me, so the end of the file would look like:
# as above

# Create all the test cases dynamically
create_all()

if __name__ == '__main__':
    # Execute the tests
    logging.info('Start running tests.')
    nose.runmodule(name='__main__')
    logging.info('Finished running tests.')

How to pass parameters to a pytest test?

Say I have this test in tests.py
def test_user(username='defaultuser'):
Case 1
I want to pass the username to test from the command line, something like
$ pytest tests.py::test_user user1 # could be --username=user1
How do I do that?
Case 2
I want to pass a list of usernames to test, like
$ pytest tests.py::test_user "user1, user2, user3"
I want to achieve something like
@pytest.mark.parametrize("username", tokenize_and_validate(external_param))
def test_user(username):
    pass

def tokenize_and_validate(val):
    if not val:
        return 'defaultuser'
    return val.split(',')
How can I do that?
Thank you
To pass a parameter from the command line, you first need to define a pytest_generate_tests hook that reads the value from the command line; this hook runs for every test.
def pytest_generate_tests(metafunc):
    # This is called for every test. Only get/set command line arguments
    # if the argument is specified in the list of test "fixturenames".
    option_value = metafunc.config.option.name
    if 'name' in metafunc.fixturenames and option_value is not None:
        metafunc.parametrize("name", [option_value])
Then you can run from the command line with a command line argument:
pytest -s tests/my_test_module.py --name abc
Follow the link for more details
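For the hook to see a --name value, the option also has to be registered, typically in conftest.py. A minimal sketch (splitting on commas is an assumption added here to cover the comma-separated list from Case 2):

# conftest.py -- a sketch; the option name matches the hook above
def pytest_addoption(parser):
    # Register the custom --name option so pytest accepts it on the command line.
    parser.addoption("--name", action="store", default="defaultuser")

def pytest_generate_tests(metafunc):
    if "name" in metafunc.fixturenames:
        # Split a comma-separated value so --name "user1,user2,user3" yields one test per user.
        names = metafunc.config.getoption("name").split(",")
        metafunc.parametrize("name", names)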
To mock the data, you can use fixtures or use builtin unittest mocks.
from unittest import mock
@mock.patch(func_to_mock, side_effect=func_to_replace)
def test_sth(*args):
    pass
Command line options are also available.

Writing a pytest function for checking the output on console (stdout)

This link gives a description of how to use pytest for capturing console output.
I tried it on the following simple code, but I get an error:
import sys
import pytest

def f(name):
    print "hello " + name

def test_add(capsys):
    f("Tom")
    out, err = capsys.readouterr()
    assert out == "hello Tom"

test_add(sys.stdout)
Output:
python test_pytest.py
hello Tom
Traceback (most recent call last):
File "test_pytest.py", line 12, in <module>
test_add(sys.stdout)
File "test_pytest.py", line 8, in test_add
out,err=capsys.readouterr()
AttributeError: 'file' object has no attribute 'readouterr'
What is wrong and what fix is needed? Thank you.
EDIT:
As per the comment, I changed to capfd, but I still get the same error:
import sys
import pytest

def f(name):
    print "hello " + name

def test_add(capfd):
    f("Tom")
    out, err = capfd.readouterr()
    assert out == "hello Tom"

test_add(sys.stdout)
Use the capfd fixture.
Example:
def test_foo(capfd):
    foo()  # Writes "Hello World!" to stdout
    out, err = capfd.readouterr()
    assert out == "Hello World!"
See: http://pytest.org/en/latest/fixture.html for more details
And see: py.test --fixtures for a list of builtin fixtures.
Your example has a few problems. Here is a corrected version:
def f(name):
    print "hello {}".format(name)

def test_f(capfd):
    f("Tom")
    out, err = capfd.readouterr()
    assert out == "hello Tom\n"
Note:
Do not use sys.stdout -- Use the capfd fixture as-is as provided by pytest.
Run the test with: py.test foo.py
Test Run Output:
$ py.test foo.py
====================================================================== test session starts ======================================================================
platform linux2 -- Python 2.7.5 -- pytest-2.4.2
plugins: flakes, cache, pep8, cov
collected 1 items
foo.py .
=================================================================== 1 passed in 0.01 seconds ====================================================================
Also Note:
You do not need to run your Test Function(s) in your test modules. py.test (The CLI tool and Test Runner) does this for you.
py.test does mainly three things:
Collect your tests
Run your tests
Display statistics and possibly errors
By default py.test looks for (configurable iirc) test_foo.py test modules and test_foo() test functions in your test modules.
The problem is with your explicit call of your test function at the very end of your first code snippet block:
test_add(sys.stdout)
You should not do this; it is pytest's job to call your test functions.
When it does, it will recognize the name capsys (or capfd, for that matter)
and automatically provide a suitable pytest-internal object for you as a call argument.
(The example given in the pytest documentation is quite complete as it is.)
That object will provide the required readouterr() function.
sys.stdout does not have that function, which is why your program fails.

nosetests not running all the tests in a file

I run:
nosetests -v --nocapture --nologcapture tests/reward/test_stuff.py
I get:
----------------------------------------------------------------------
Ran 7 tests in 0.005s
OK
The tests without decorators run fine; however, I have some fixture tests set up like so which aren't running with the above command:
@use_fixtures(fixtures.ap_user_data, fixtures.ap_like_data, fixtures.other_reward_data)
def test_settings_are_respected(data):
    """
    Tests setting takes effect
    """
    res = func()
    assert len(res) > 0
The decorator is:
def use_fixtures(*fixtures):
    """
    Decorator for tests, abstracts the build-up required when using fixtures
    """
    def real_use_fixtures(func):
        def use_fixtures_inner(*args, **kwargs):
            env = {}
            for cls in fixtures:
                name = cls.__name__
                table = name[:-5]  # curtails _data
                env[table] = get_orm_class(table)
            fixture = SQLAlchemyFixture(env=env, engine=engine, style=TrimmedNameStyle(suffix="_data"))
            return fixture.with_data(*fixtures)(func)()
        return use_fixtures_inner
    return real_use_fixtures
Is my use of decorators stopping nosetests from running my tests?
I don't know if your decorator is causing nose to miss tests, but what if you take a test with a decorator, and remove the decorator long enough to see if it's still missed?
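One thing worth checking (an assumption about the cause, not something stated in the question or the answer above): nose selects test functions by their __name__, and the wrapper returned by use_fixtures is named use_fixtures_inner, which does not match nose's default test pattern. A minimal sketch of preserving the original name with functools.wraps:

import functools

def use_fixtures(*fixtures):
    def real_use_fixtures(func):
        @functools.wraps(func)  # keep the original test_* name so nose still collects the test
        def use_fixtures_inner(*args, **kwargs):
            # fixture build-up as in the original decorator, then invoke the wrapped test
            return func(*args, **kwargs)
        return use_fixtures_inner
    return real_use_fixtures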
