How to organize fixtures when using pytest

Fixtures tend to be small and reusable, and a given fixture can rely on other fixtures:
@pytest.fixture
def Account(db, memcache):
    ...
I would like to organize my fixtures in modules and import them into a specific test file like this:
from .fixtures.models import Account
Unfortunately, this does not seem to work. Instead, I always have to import all of the subordinate fixtures too, e.g.
from .fixtures.models import Account, db, memcache
What is the best approach to having fine-grained, small, reusable fixtures and making them accessible at the module level? (conftest.py works at the package/directory level.)

Usually I don't recommend this, but if you have modules containing a specific set of fixtures (which depend on each other), then maybe from .fixtures.models import * would be workable? I can't think of another generic solution at the moment that avoids knowing the underlying fixture dependencies in the importing test module.


Is it possible to enforce via CI that module_a does not import anything from module_b?

I'm maintaining several open source projects and I want to write code at work that nudges people to do the right thing.
I have a situation where I see people importing stuff in module_a from module_b, but that should not happen. There are two reasons for it:
Production code importing stuff from test code: I hope I don't need to explain why that is a bad idea.
Import Cycles: Some modules are so basic, that they should not import any other modules from the package (e.g. constants.py / errors.py / utils.py).
For this question, you can assume that all imports happen on module level (hence not inside a function).
Is it possible to enforce via CI (e.g. mypy / pytest / flake8) that module_a does not import anything from module_b?

session scope with pytest-mock

I'm looking for an example of how to use the session-scoped session_mocker fixture of the pytest-mock plugin.
It's fairly clear how to modify the example the docs provide to use it in a particular test:
def test_foo(session_mocker):
    session_mocker.patch('os.remove')
    # etc.
But I'm baffled as to where and how this global fixture should be initialized. Say, for example, that I want to mock os.remove for ALL of my tests. Do I set this up in conftest.py and, if so, how do I do it?
You use it in a fixture with session scope. The best place to put it would be conftest.py, mainly to make it obvious to other programmers that this fixture exists and what it might be doing. That's important because this fixture will affect other tests that might not necessarily know about it or even want it.
I wouldn't recommend mocking something for the duration of a session. Tests, classes or even modules, yes. But not sessions.
For instance, the following test test_normal passes or fails depending on whether test_mocked was run in the same session or not. Since they're in the same "file", it's much easier to spot the problem. But these tests could be located in different test files, that do not appear related, and yet if both tests were run in the same session then the problem would occur.
import pytest

# could be in conftest.py
@pytest.fixture(scope='session')
def myfixture(session_mocker):
    session_mocker.patch('sys.mymock', create=True)

def test_mocked(myfixture):
    import sys
    assert hasattr(sys, 'mymock')

def test_normal():
    import sys
    assert not hasattr(sys, 'mymock')
Instead, just create a fixture that is scoped for test, class or module, and include it directly in the test file. That way the behaviour is contained to just the set of tests that need it. Mocks are cheap to create, so having the mock recreated for every test is no big deal. It may even be beneficial, as the mock will be reset for each test.
Save session fixtures for things that are expensive to set up and that have no state, or whose state the tests do not change (e.g. a database that is used as a template to create a fresh database for each test to run against).

How to use py.test fixtures without importing them

Say I have a file fixtures.py that defines a simple py.test fixture called foobar.
Normally I would have to import that fixture to use it (including all of the sub-fixtures), like this:
from fixtures import foobar

def test_bazinga(foobar):
    ...
Note that I also don't want to use a star import.
How do I import this fixture so that I can just write:
import fixtures

def test_bazinga(foobar):
    ...
Is this even possible? It seems like it, because py.test itself defines exactly such fixtures (e.g. monkeypatch).
Fixtures and their visibility are a bit odd in pytest. They don't require importing, but if you define them in a test_*.py file, they'll only be available in that file.
You can, however, put them in a (project- or subfolder-wide) conftest.py to use them in multiple files.
pytest's internal fixtures are simply defined in a core plugin, and are thus available everywhere. In fact, a conftest.py is basically nothing other than a per-directory plugin.
You can also run py.test --fixtures to see where fixtures are coming from.

Python module getting too big

My module is all in one big file that is getting hard to maintain. What is the standard way of breaking things up?
I have one module in a file my_module.py, which I import like this:
import my_module
"my_module" will soon be a thousand lines, which is pushing the limits of my ability to keep everything straight. I was thinking of adding files my_module_base.py, my_module_blah.py, etc. And then, replacing my_module.py with
from my_module_base import *
from my_module_blah import *
# etc.
Then, the user code does not need to change:
import my_module # still works...
Is this the standard pattern?
It depends on what your module actually does. It is usually a good idea to make your module a directory with an __init__.py file inside, so you would first transform your_module.py into your_module/__init__.py.
After that, you continue according to your business logic. Some examples:
If you have utility functions that are not directly used by the module's API, put them in a file called utils.py.
If you have classes dealing with the database or representing your database models, put them in models.py.
If you have some internal configuration, it might make sense to put it into an extra file called settings.py or config.py.
These are just examples (somewhat stolen from the Django approach to reusable apps). As said, it depends a lot on what your module does. If it is still too big afterwards, it also makes sense to create submodules (as subdirectories with their own __init__.py).
I'm sure there are lots of opinions on this, but I'd say you should break it into well-defined functional units (modules) contained in a package. Then you use:
from mypackage import modulex
Then use the package name to reference the object:
modulex.MyClass()
etc.
You should (almost) never use
from mypackage import *
since that can introduce bugs (duplicate names from different modules will end up clobbering one another).
No, that is not the standard pattern. from something import * is usually not good practice, as it will import a lot of things you did not intend to. Instead, follow the same split, but re-export names explicitly from one module to another. For example: if base.py contains def myfunc, then in main.py use from base import myfunc, so that main.myfunc works for your users too. Of course, you need to take care that you don't end up with a circular import.
Also, if you find that from something import * really is required, control the exported names using the __all__ construct.
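As a concrete sketch of the explicit re-export approach (the file names follow the question; the function body is made up), the following script builds such a package in a temporary directory and shows that import my_module keeps working unchanged:

```python
import os
import sys
import tempfile

# Build a tiny split-up package on disk (contents are illustrative).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "my_module")
os.makedirs(pkg)

# my_module/base.py -- one of the smaller files after the split
with open(os.path.join(pkg, "base.py"), "w") as f:
    f.write("def myfunc():\n    return 'hello from base'\n")

# my_module/__init__.py -- explicit re-exports instead of star imports
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from my_module.base import myfunc\n__all__ = ['myfunc']\n")

sys.path.insert(0, root)
import my_module  # existing call sites are unaffected by the split

print(my_module.myfunc())  # prints: hello from base
```

Because __init__.py names each re-export explicitly, nothing unintended leaks into the package namespace.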

Python - optimize by not importing at module level?

In a framework such as Django, I'd imagine that if a user lands on a page (running a view function called "some_page"), and you have 8 imports at the top of module that are irrelevant to that view, you're wasting cycles on those imports. My questions are:
Is it a large enough amount of resources to make an impact on a high-traffic website?
Is it such a bad practice to import inside of a function for this purpose that it should be avoided at said impact?
Note: This could be considered premature optimization, but I'm not interested in that argument. Let's assume, for the sake of practical theory, that this is a completed site with loads of traffic, needing to be optimized in every way possible, and the application code, as well as DB have been fully optimized by 50 PhD database admins and developers, and these imports are the only thing left.
No, don't do this. In a normal Python execution environment on the web (mod_wsgi, gunicorn, etc.), the imports are executed once when your process starts, and subsequent requests do not re-execute the module. If you put the imports inside the functions, the import statement runs on every call; after the first time it is only a fast sys.modules lookup, but it is still overhead paid per request instead of once.
Yes, it is bad practice to import at the function level. By importing at the top of the module, you create a one-time, small cost. If you place an import in a function, you pay a (much smaller, but repeated) lookup cost each time that function runs. So, rather than importing in the function, just import at the top of the module.
A few things you can do to clean up and improve your imports:
Don't use wild imports e.g. from x import *
Where possible, just use a normal import e.g. import x
Try to split your code up into smaller modules that can be called separately, so that fewer imports are made
Also, placing imports at the top of the module is a matter of style. There's a reason why PEP 8 says that modules need to be imported at the top. It's far more readable and maintainable that way.
Finally, note that from x import * at function level is a SyntaxError in Python 3 (Python 2 merely discouraged it), so some function-level imports will cause compatibility issues.
1) The answer is no. Django/Python is not like PHP: your whole module is not reinterpreted on each page view the way PHP includes are. The module stays in memory, and each page view makes a simple function call to your view.
2) Yes, importing at the view level would be a counter-optimization.
No. Same reason as the other answers.
Yes. Same reason as the other answers.
By the way, you can also import lazily.
For example, the Importing toolkit can "import" a module in top-level code without actually loading it until one of its attributes is accessed.
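The standard library can do the same deferral without third-party help: importlib.util.LazyLoader postpones executing a module until an attribute is first accessed. The helper below follows the recipe in the importlib documentation, demonstrated here on the stdlib json module:

```python
import importlib.util
import sys

def lazy_import(name):
    # Register a module object whose real loading is deferred until an
    # attribute is first accessed (importlib.util.LazyLoader recipe).
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")   # registered, but not executed yet
print(json.dumps({"a": 1}))  # first attribute access loads it for real
```

This keeps the import statement at the top of the module (readable, PEP 8-friendly) while deferring the actual loading cost.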
Sometimes the following boilerplate makes sense:
foo = None

def foorify():
    global foo
    if not foo:
        from xxx import foo
    foo.bar()
This makes sense when foorification is conditional on something that rarely changes, e.g. one server foorifies while another never does, or when you don't want to (or cannot safely) import foo during application startup or in most tests.
