Could you please explain why a fixture with function scope (which is supposed to run anew for each test) runs only once for the whole test class?
@pytest.fixture(scope="function")
def application_with_assigned_task_id(api, db, application_with_tasks_id, set_user_with_st_types):
    with allure.step("find the task by application id"):
        task = api.grpc.express_shipping.list_tasks(
            page_size=100, page_number=1, eams_id=application_with_tasks_id
        )['tasks'][0]
        task_id = task["id"]
    with allure.step("assign the task to the user"):
        db.rezon.upd_user_warehouse(set_user_with_st_types, 172873)
        db.rezon.storetask_upd_user(task_id, set_user_with_st_types)
    with allure.step("check the task assignment"):
        res = api.grpc.express_shipping.get_task(id=int(task_id))
        assert res["storekeeper"] == db.rezon.get_rezon_user_full_name_by_id(set_user_with_st_types)
    return application_with_tasks_id
The application_with_tasks_id fixture has function scope as well; set_user_with_st_types has session scope (which I do not wish to change). What can I do?
I tried setting the scope explicitly, even though I thought it was normal for a fixture to run anew for each test by default.
Setting the scope did not help.
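For reference, a stripped-down sketch of the scoping involved (fixture bodies omitted, names as in my code above):
import pytest

@pytest.fixture(scope="session")
def set_user_with_st_types():
    ...  # session-scoped, created once per run

@pytest.fixture  # function scope is the default
def application_with_tasks_id():
    ...

@pytest.fixture(scope="function")
def application_with_assigned_task_id(application_with_tasks_id, set_user_with_st_types):
    ...  # expected to run again for every test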
This simple example clearly shows that the fixture is executed for each test function (the output file will contain two lines). If that is not the case for you, please follow hoefling's comment and create a reproducible example (your code is both incomplete and contains too many irrelevant things).
import pytest

@pytest.fixture  # (scope="function") is the default
def setup():
    f = open("/tmp/log", "a")
    print("One setup", file=f)
    f.close()

def test_one(setup):
    assert True

def test_two(setup):
    assert True
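As a quick check, running pytest with the --setup-show flag prints each fixture's SETUP and TEARDOWN around every test, which makes the per-function behaviour directly visible:
$ pytest --setup-show <your_test_file>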
I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:
        -- code lines --
    else:
        pass
main.py:
def my_test(my_fixture_1):
    main_device = ...
    -- code lines --
    assert 0
    -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and the code after the yield in my_fixture_1 should execute. If the test passes, that fixture code must not execute. I tried using hookimpl but didn't find a solution; the fixture code always executes, even if the test passes.
Note that main_device is the connected device on which my test is running.
You could use request as an argument to your fixture. From that, you can check the status of the corresponding tests, i.e. whether they have failed or not. If they failed, you can execute the code you want to run on failure. In code that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
In Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in that particular test run.
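If you need the outcome of the individual test rather than a session-wide count, the pytest documentation shows how to expose each test's report to fixtures via a hookwrapper; a minimal sketch, keeping the fixture name from the question:
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # store the report for each phase ("setup", "call", "teardown") on the item
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def my_fixture_1(request):
    yield
    rep = getattr(request.node, "rep_call", None)
    if rep is not None and rep.failed:
        print("Only runs if this particular test failed")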
Here is an alternative solution that I can think of.
import os
import pytest

@pytest.fixture(scope="module")
def main_device():
    return None

@pytest.fixture(scope='function', autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")

def pytest_runtest_logreport(report):
    if report.when == 'call':
        os.environ["test_result"] = report.outcome
You can do your implementation directly in the pytest_runtest_logreport hook itself. The drawback is that you won't get access to any fixtures there, only the report.
So, if you need main_device, you have to go with a custom fixture as shown above.
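For illustration, a minimal sketch of doing it all in the hook, using only what the report object provides:
def pytest_runtest_logreport(report):
    if report.when == 'call' and report.outcome == 'failed':
        print("+++++++++ %s failed ++++++++" % report.nodeid)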
Use @pytest.fixture(scope='function', autouse=True), which will run the fixture automatically for every test case; you don't have to add it as an argument in every test function.
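For example, a minimal sketch of what autouse gives you:
import pytest

@pytest.fixture(autouse=True)
def my_fixture_1():
    # wraps every test in scope without being listed as an argument
    yield
    print("runs after every test")

def test_something():
    assert True  # my_fixture_1 still applies here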
We have unit tests running via Pytest, which use a custom decorator to start up a context-managed mock echo server before each test, and provide its address to the test as an extra parameter. This works on Python 2.
However, if we try to run them on Python 3, then Pytest complains that it can't find a fixture matching the name of the extra parameter, and the tests fail.
Our tests look similar to this:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url):
    res_id = self._test_resource(url)['id']
    result = update_resource(None, res_id)
    assert not result, result
    self.assert_archival_error('Server reported status error: 404 Not Found', res_id)
With a decorator function like this:
from functools import wraps

def with_mock_url(url=''):
    """
    Start a MockEchoTestServer and call the decorated function with the
    server's address prepended to ``url``.
    """
    def decorator(func):
        @wraps(func)
        def decorated(*args, **kwargs):
            with MockEchoTestServer().serve() as serveraddr:
                return func(*(args + ('%s/%s' % (serveraddr, url),)), **kwargs)
        return decorated
    return decorator
On Python 2 this works; the mock server starts, the test gets a URL similar to "http://localhost:1234/?status=404&content=test&content-type=csv", and then the mock is shut down afterward.
On Python 3, however, we get an error, "fixture 'url' not found".
Is there perhaps a way to tell Python, "This parameter is supplied from elsewhere and doesn't need a fixture"? Or is there, perhaps, an easy way to turn this into a fixture?
You can capture url as a *args parameter:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, *url):
    url[0]  # the test url
Looks like Pytest is content to ignore it if I add a default value for the injected parameter, to make it non-mandatory:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url=None):
The decorator can then inject the value as intended.
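Alternatively, the decorator's body maps fairly directly onto a yield fixture; a rough sketch, reusing MockEchoTestServer from the question (the query string is just an example):
import pytest

@pytest.fixture
def mock_url():
    # MockEchoTestServer and serve() come from the original test suite
    with MockEchoTestServer().serve() as serveraddr:
        yield '%s/?status=404&content=test&content-type=csv' % serveraddr

def test_file_not_found(mock_url):
    ...  # use mock_url just like the url the decorator used to inject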
Consider separating the address from the service that serves the URL. Using marks and changing fixture behaviour based on the presence of said marks is clear enough. A mock should not really involve any communication, but if you must start some service, then make it a separate fixture:
with_mock_url = pytest.mark.mock_url('http://www.darknet.go')

@pytest.fixture
def url(request):
    marker = request.get_closest_marker('mock_url')
    if marker:
        earl = marker.args[0] if marker.args else marker.kwargs['fake']
        if earl:
            return earl
    try:
        earl = request.param
    except AttributeError:
        earl = None
    return earl

@pytest.fixture
def server(request):
    marker = request.get_closest_marker('mock_url')
    if marker:
        ...  # start fake_server

@with_mock_url
def test_resolve(url, server):
    server.request(url)
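Note that a custom mark like mock_url should be registered so pytest doesn't warn about an unknown marker; a minimal sketch for conftest.py:
def pytest_configure(config):
    config.addinivalue_line("markers", "mock_url: attach a fake URL / mock server to the test")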
I have implemented unit tests for my Flask app. I use pytest. My goal is to make sure that a number of pre-defined queries always return the same output (some json).
In order to do that, I use a fixture to start an app with test settings:
# Session for scope, otherwise the server is reloaded every time
@pytest.fixture(scope="session")
def app():
    os.environ["FLASK_ENV"] = "development"
    os.environ["TEST_SETTINGS"] = os.path.join(
        ds.WORK_DIR, "..", "tests", "settings.py"
    )
    app = create_app()
    # http://flask.pocoo.org/docs/1.0/api/#flask.Flask.test_client
    app.testing = True
    return app
And then I run a test function, which is not really a test function, with pytest:
# Freeze time for consistent output
@pytest.mark.usefixtures("live_server")
@pytest.mark.freeze_time("2018-01-01")
class TestLiveServer:

    @pytest.mark.skip(reason="should only be used to update the test data")
    def test_export(self):
        """Not a test function.

        Used to export questions outputs in json files to freeze the expected output.
        """
        for q in TESTED_QUESTIONS:
            r = requests.post(
                url_for("query_direct", _external=True), json={"query": q}
            )
            print(r.text)
            filename = (
                "".join(filter(lambda c: str.isalnum(c) or c == " ", q))
                .lower()
                .replace(" ", "_")
                + ".json"
            )
            export_path = os.path.join("tests", "fake_responses", filename)
            data = {"query": q, "response": r.json()}
            with open(export_path, "w") as outfile:
                json.dump(data, outfile, indent=4, sort_keys=True)
                outfile.write("\n")
To generate my frozen outputs, I uncomment the pytest marker and run this particular test. As you can see, it's not very elegant and error-prone. Sometimes I forget to re-enable the marker, and if I run all the tests at once, it first re-generates the fake outputs before running the unit tests on them. If this happens, my tests don't fail and I don't catch the potential mistakes (which defeats the point of these tests).
Is there a way to run this particular function alone, maybe with some pytest flag or something?
This should probably be a standalone thing and not part of the pytest test suite (since it's not a test), but you can get around it by exploiting pytest's skipping facilities.
# redefine test_export to feature a conditional skip
@pytest.mark.skipif(not os.getenv('FREEZE_OUTPUTS'), reason='')
def test_export(self):
Here we skip this test, unless the FREEZE_OUTPUTS environment variable is set.
You can then run this test by calling the following from the command line (define the environment variable for this invocation only):
$ FREEZE_OUTPUTS=1 py.test <your_test_file>::TestLiveServer::test_export
Which will only run that test. In all other cases it will be skipped.
You can even follow the above approach and declare this as a session-level fixture with autouse=True, so it's always included, and then add some logic inside the fixture to check whether FREEZE_OUTPUTS is defined and, if so, run the logic in question. Something like:
@pytest.fixture(scope='session', autouse=True)
def frozen_outputs():
    if os.getenv('FREEZE_OUTPUTS'):
        ...  # logic for generating the outputs goes here
    return  # do nothing otherwise
As per the pytest documentation, it is possible to override the default temporary directory setting as follows:
py.test --basetemp=/base_dir
When the tmpdir fixture is then used in a test ...
def test_new_base_dir(tmpdir):
    print(str(tmpdir))
    assert False
... something like the following would then be printed to the screen:
/base_dir/test_new_base_dir_0
This works as intended and for certain use cases can be very useful.
However, I would like to be able to change this setting on a per-test (or perhaps I should say a "per-fixture") basis. Is such a thing possible?
I'm close to just rolling my own tmpdir based on the code for the original, but would rather not do this -- I want to build on top of existing functionality where I can, not duplicate it.
As an aside, my particular use case is that I am writing a Python module that will act on different kinds of file systems (NFS4, etc.), and it would be nice to be able to reuse the functionality of tmpdir to create the following fixtures:
def test_nfs3_stuff(nfs3_tmpdir):
    ...  # test NFS3 functionality

def test_nfs4_stuff(nfs4_tmpdir):
    ...  # test NFS4 functionality
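For illustration, a hand-rolled version of the kind of fixtures I mean might look roughly like this (the mount points are placeholders):
import shutil
import tempfile
import pytest

def _make_tmpdir(request, base):
    # create a temporary directory under the given base and remove it after the test
    path = tempfile.mkdtemp(dir=base)
    request.addfinalizer(lambda: shutil.rmtree(path, ignore_errors=True))
    return path

@pytest.fixture
def nfs3_tmpdir(request):
    return _make_tmpdir(request, "/mnt/nfs3")  # placeholder mount point

@pytest.fixture
def nfs4_tmpdir(request):
    return _make_tmpdir(request, "/mnt/nfs4")  # placeholder mount point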
In the sources of TempdirFactory, the .config.option.basetemp attribute is used to store the basetemp, so you can set it directly before use:
import pytest
import time
import os

def mktemp_db(tmpdir_factory, db):
    basetemp = None
    if 'PYTEST_TMPDIR' in os.environ:
        basetemp = os.environ['PYTEST_TMPDIR']
    if basetemp:
        tmpdir_factory.config.option.basetemp = basetemp
    if db == "db1.db":
        tmpdb = tmpdir_factory.mktemp('data1_').join(db)
    elif db == "db2.db":
        tmpdb = tmpdir_factory.mktemp('data2_').join(db)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db1.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db2(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db2.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

def test_empty_db(empty_db):
    pass

def test_empty_db2(empty_db2):
    pass
>set PYTEST_TMPDIR=./tmp
>python.exe -m pytest -q -s test_my_db.py
* c:\tests\tmp\data1_0\db1.db
.* c:\tests\tmp\data2_0\db2.db
.
2 passed in 10.03 seconds
There didn't appear to be a nice solution to the problem as posed in the question, so I settled on making two calls to py.test:
Passing in a different --basetemp for each.
Marking (using @pytest.mark.my_mark) which tests needed the special treatment of a non-standard basetemp.
Passing -k my_mark or -k "not my_mark" into each call.
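In command form the two invocations would look roughly like this (paths and marker name are illustrative):
$ py.test -k "not my_mark"
$ py.test -k my_mark --basetemp=/nfs/base_dir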
I have a pytest test like so:
email_two = generate_email_two()

@pytest.mark.parametrize('email', ['email_one@example.com', email_two])
def test_email_thing(self, email):
    ...  # Run some test with the email parameter
Now, as part of refactoring, I have moved the line:
email_two = generate_email_two()
into its own fixture (in a conftest.py file), as it is used in various other places. However, is there some way, other than importing the fixture function directly, of referencing it in this test? I know that funcargs are normally the way of using a fixture, but in this context I am not inside a test function when I want to call the fixture.
I ended up doing this by having a for loop in the test function, looping over each of the emails. Not the best way, in terms of test output, but it does the job.
import pytest
import fireblog.login as login

pytestmark = pytest.mark.usefixtures("test_with_one_theme")

class Test_groupfinder:
    def test_success(self, persona_test_admin_login, pyramid_config):
        emails = ['id5489746@mockmyid.com', persona_test_admin_login['email']]
        for email in emails:
            res = login.groupfinder(email)
            assert res == ['g:admin']

    def test_failure(self, pyramid_config):
        fake_email = 'some_fake_address@example.com'
        res = login.groupfinder(fake_email)
        assert res == ['g:commenter']
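For completeness, pytest's indirect parametrization can also route a parametrize value through a fixture; a rough sketch, assuming generate_email_two lives in conftest.py as in the question:
# conftest.py
import pytest

@pytest.fixture
def email(request):
    # resolve the placeholder to the generated address at fixture time
    if request.param == "email_two":
        return generate_email_two()
    return request.param

# test module
@pytest.mark.parametrize('email', ['email_one@example.com', 'email_two'], indirect=True)
def test_email_thing(email):
    ...  # run the test with the resolved email parameter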