I want my Django tests to create and modify media files. So, much like Django tests do with databases, I want to set up an empty MEDIA_ROOT folder before each test is run.
I figured I'd create a temporary folder and point MEDIA_ROOT at it. However, I can't figure out where to put the code that does this. In this example, a special runner is created that sets up the media root and tears it down.
Unfortunately, setup_test_environment is called once before the first test function is run, and not every time a test is run.
I tried creating a FileSystemTestCase class that sets up the file system in its setUp function, and having all my test cases derive from it. While this works, it requires everyone who writes a test case with their own setUp to remember to call my setUp method (e.g. via super()), since it isn't called automatically in that case.
Usually I wouldn't bother with this, but the cost of forgetting to call the parent setUp method can be very high - if someone forgets the call and the tests are accidentally run on a live system, bad things will happen.
EDIT: The temporary solution I've found was to implement both my own runner and a base TestCase. Both set up a temporary MEDIA_ROOT, so if someone forgets to call my setUp method, the test will run in the temporary folder of the previous test, or the one set up by the runner. This can cause tests to fail, but will not ruin live data.
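Roughly, that stop-gap looks like this (a sketch against a recent Django; the class names are just illustrative):

import shutil
import tempfile

from django.conf import settings
from django.test import TestCase
from django.test.runner import DiscoverRunner


class TempMediaTestCase(TestCase):
    """Base class: every test that remembers to call this setUp gets a fresh MEDIA_ROOT."""

    def setUp(self):
        self._temp_media = tempfile.mkdtemp()
        settings.MEDIA_ROOT = self._temp_media

    def tearDown(self):
        shutil.rmtree(self._temp_media, ignore_errors=True)


class TempMediaRunner(DiscoverRunner):
    """Safety net: even tests that skip the setUp above land in a temp folder."""

    def setup_test_environment(self, **kwargs):
        super().setup_test_environment(**kwargs)
        self._temp_media = tempfile.mkdtemp()
        settings.MEDIA_ROOT = self._temp_media

    def teardown_test_environment(self, **kwargs):
        shutil.rmtree(self._temp_media, ignore_errors=True)
        super().teardown_test_environment(**kwargs)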
I'm hoping for a more elegant solution.
It seems to me that you're trying to address two separate issues:
Allow tests to run independently (with regard to MEDIA_ROOT) when testers do the right thing (i.e. inherit from your test class and call your setUp()).
Keep testers from messing up real data when they accidentally do the wrong thing.
Given that, I think a two-pronged approach makes sense. Your setUp() solves problem 1. Setting up MEDIA_ROOT in the test runner, though, hides the fact that your testers have done the wrong thing. Instead I would just focus on protecting the data: for example, you could set MEDIA_ROOT to None. That would shield the real data in MEDIA_ROOT; make it more likely that the tester will see an error if they don't use your setUp(); and reduce code duplication.
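For example, a minimal sketch of that idea (the runner class name is just illustrative):

from django.conf import settings
from django.test.runner import DiscoverRunner


class ProtectMediaRunner(DiscoverRunner):
    def setup_test_environment(self, **kwargs):
        super().setup_test_environment(**kwargs)
        # Point MEDIA_ROOT at nothing: a test that forgot your setUp()
        # fails loudly instead of silently touching real media files.
        settings.MEDIA_ROOT = None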
A more robust approach would be to write your own test runner that does the setup before each test (modeled after the way Django handles the database), but that may be overkill for your needs.
I have some pretty fragile code that I want to refactor. It's not very easy to unit test by itself because it interacts with database queries and Django form data.
That in itself is not a big deal. I already have extensive tests that, among other things, end up calling this function and check that results are as expected. But my full test suite takes about 5 minutes and I also don't want to have to fix other outstanding issues while working on this.
What I'd like to do is to run nosetests or nose2 on all my tests, track all test_xxx.py files that called the function of interest and then limit my testing during the refactoring to only that subset of test files.
I plan to use inspect.stack() to do this but was wondering if there is an existing plugin or if someone has done it before. If not, I intend to post whatever I come up with and maybe that will be of use later.
You can simply raise some exception in the function and do one full run. All the tests that fail call your function.
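If you'd rather go the inspect.stack() route from the question, a rough sketch (the wrapper and the log file name are made up):

import functools
import inspect
import os


def record_test_callers(func, log_path="callers.log"):
    """Wrap the function of interest and log which test_*.py files call it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for frame_info in inspect.stack():
            if os.path.basename(frame_info.filename).startswith("test_"):
                with open(log_path, "a") as log:
                    log.write(frame_info.filename + "\n")
        return func(*args, **kwargs)
    return wrapper

# Temporarily apply it in the module under refactoring, e.g.
#     function_of_interest = record_test_callers(function_of_interest)
# run the full suite once, then de-duplicate callers.log to get your subset.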
I'm a bit confused regarding unittests. I have an embedded system I am testing from outside with Python.
The issue is that after each test passes I need to reset the system state. However, if a test fails, it can leave the system in an arbitrary state that also needs resetting. Right now I go back to the initial state at the end of each test body, but if an assertion fails, that reset code is skipped.
Therefore, what's the proper way to handle this situation? Some ideas I have are:
Put each test in a try/except/finally block, but that doesn't seem right (unittest already handles test exceptions).
Put each test in a different class and invoke the tearDown() method at the end of it.
Call initSystemState() at the beginning of each test to go back to the initial state (but this is slower than resetting only what needs to be reset at the end of the test).
Any suggestions? Ideally, if I have a testSpeed() test, there should be a testSpeedOnStop() function called at the end. Perhaps unittest is not the right tool for this job, since all the functions have side effects and work together; maybe I should lean more towards integration-testing libraries, which I haven't explored.
Setting state is done in the setUp(self) method of the test class.
This method is automatically called prior to each test and provides a fresh state for each instance of the tests.
After each test, the tearDown() method is run, whether the test passed or failed, so it can clean up the remnants of failed tests.
You can also write setUpClass / tearDownClass (or setUpModule / tearDownModule) hooks to be executed once before and after all the tests; more elaborate tests may require stub or mock objects to be built.
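A minimal sketch of that shape (FakeEmbeddedSystem is just a stand-in for your real device driver):

import unittest


class FakeEmbeddedSystem:
    """Hypothetical stand-in for the real device driver."""

    def reset(self):
        self.state = "initial"

    def measure_speed(self):
        self.state = "measuring"
        return 42


class SpeedTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test: start from a known state.
        self.system = FakeEmbeddedSystem()
        self.system.reset()

    def tearDown(self):
        # Runs after every test, even when an assertion fails mid-test,
        # so the reset is never skipped.
        self.system.reset()

    def test_speed(self):
        self.assertGreater(self.system.measure_speed(), 0)


if __name__ == "__main__":
    unittest.main()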
I'm building a test framework using python + pytest + xdist + selenium grid. This framework needs to talk to a pre-existing custom logging system. As part of this logging process, I need to submit API calls to: set up each new test run, set up test cases within those test runs, and log strings and screenshots to those test cases.
The first step is to set up a new test run, and the API call for that returns (among other things) a Test Run ID. I need to keep this ID available for all test cases to read. I'd like to just stick it in a global variable somewhere, but running my tests with xdist causes the framework to lose track of the value.
I've tried:
Using a "globals" class; it forgot the value when using xdist.
Keeping a global variable inside my conftest.py file; same problem, the value gets dropped when using xdist. Also it seems wrong to import my conftest everywhere.
Putting a "globals" class inside the conftest; same thing.
At this point, I'm considering writing it to a temp file, but that seems primitive, and I think I'm overlooking a better solution. What's the most correct, pytest-style way to store and access global data across multiple xdist workers (which are separate processes, so module-level globals aren't shared between them)?
Might be worth looking into Proboscis, as it allows specifying test dependencies and could be a possible solution.
You could try config.cache. E.g.:
request.config.cache.set('run_id', run_id)
Refer to the documentation.
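For example, something like this in conftest.py (create_test_run() is a placeholder for your logging system's "new test run" call):

# conftest.py
import uuid

import pytest


def create_test_run():
    # Placeholder: the real call would hit your logging system's API
    # and return its Test Run ID.
    return str(uuid.uuid4())


@pytest.fixture(scope="session")
def run_id(request):
    # config.cache is file-backed (.pytest_cache), so the stored value is
    # visible to every xdist worker process, not just the one that set it.
    cache = request.config.cache
    value = cache.get("my_suite/run_id", None)
    if value is None:
        value = create_test_run()
        cache.set("my_suite/run_id", value)
    return value

One caveat: each xdist worker is a separate process, so two workers can still race on that first get/set. If a single run ID is critical, create it once before invoking pytest (or guard the block with a file lock) and only read it from the cache here.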
I have some resource creation and deletion code that needs to run before and after certain tests, which I've put into a fixture using yield in the usual way. However, before running the tests, I want to verify that the resource creation has happened correctly, and likewise after the deletion, I want to verify that it has happened. I can easily stick asserts into the fixtures themselves, but I'm not sure this is good pytest practice, and I'm concerned that it will make debugging and interpreting the logs harder. Is there a better or canonical way to do validation in pytest?
I encountered something like this recently, although I was using unittest instead of pytest.
What I ended up doing was something similar to a method level setup/teardown. That way, future test functions would never be affected by past test functions.
For my use case, I loaded my test fixtures in this setup function, then ran a couple of basic checks against those fixtures to ensure their validity (as part of setup itself). This, I realized, added a bit of time to each test in the class, but ensured that all the fixture data was exactly what I expected it to be (we were loading stuff into a dockerized Elasticsearch container). The extra test-run time is something you can make a judgement call about.
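Roughly what that looked like, heavily simplified (the fixture loader and data are placeholders):

import unittest


def load_fixtures():
    # Placeholder: pretend this loads documents into the backend and returns them.
    return [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]


class SearchTests(unittest.TestCase):
    def setUp(self):
        # Method-level setup: every test starts from freshly loaded fixtures.
        self.docs = load_fixtures()
        # Validate the fixtures as part of setup; a failure here surfaces as
        # an error in setUp rather than a confusing failure inside a test.
        self.assertEqual(len(self.docs), 2, "fixture load incomplete")
        self.assertTrue(all("id" in d for d in self.docs), "fixture missing id field")

    def test_names_are_unique(self):
        names = [d["name"] for d in self.docs]
        self.assertEqual(len(names), len(set(names)))


if __name__ == "__main__":
    unittest.main()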
Example
Let's say you have a hypothetical API like this:
import foo
account_name = foo.register()
session = foo.login(account_name)
session.do_something()
The key point being that in order to do_something(), you need to be registered and logged in.
Here's an over-simplified, first-pass suite of unit tests one might write:
# test_foo.py
import foo


def test_registration_should_succeed():
    foo.register()


def test_login_should_succeed():
    account_name = foo.register()
    foo.login(account_name)


def test_do_something_should_succeed():
    account_name = foo.register()
    session = foo.login(account_name)
    session.do_something()
The Problem
When registration fails, all the tests fail, and that makes it non-obvious where the real problem is. It looks like everything's broken, but really only one crucial thing is broken. It's hard to find that one crucial thing unless you are familiar with all the tests.
The Question
How do you structure your unit tests so that subsequent tests aren't executed when core functionality on which they depend fails?
Ideas
Here are possible solutions I've thought of.
Manually detect failures in each test and raise SkipTest (a sketch of this appears after the list). - Works, but a lot of manual, error-prone work.
Leverage generators to not yield subsequent tests when the primary ones fail. - Not sure if this would actually work (because how would I "know" that the previously yielded test failed?).
Group tests into test classes. E.g., these are all the unit tests that require you to be logged in. - Not sure this actually addresses my issue. Wouldn't there be just as many failures?
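A rough sketch of idea 1 (this leans on nose running function tests in the order they appear in the file):

# Sketch of idea 1: remember whether the core test passed and skip dependents.
import unittest

import foo

_registration_ok = False


def test_registration_should_succeed():
    global _registration_ok
    foo.register()
    _registration_ok = True   # only reached if register() didn't raise


def test_login_should_succeed():
    if not _registration_ok:
        raise unittest.SkipTest("registration is broken; skipping dependent test")
    foo.login(foo.register())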
Rather than answering the explicit question, I think a better answer is to use mock objects. In general, unit tests should not require access to an external database (as is presumably required to log in). However, if you want to have some integration tests that do this (which is a good idea), then those tests should test the integration aspects, and your unit tests should test the unit aspects. I would go so far as to keep the integration tests and the unit tests in separate files, so that you can quickly run the unit tests on a very regular basis, and run the integration tests on a slightly less regular basis (although still at least once a day).
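A sketch of that, assuming foo talks to its backend through some seam like foo.db (the attribute name is invented here; the real seam depends on how foo is written):

from unittest import mock

import foo


def test_register_does_not_hit_real_database():
    # The database layer is replaced, so this unit test cannot fail just
    # because the real backend is down or misconfigured.
    with mock.patch.object(foo, "db") as fake_db:
        fake_db.insert_account.return_value = "alice"
        account_name = foo.register()
    assert account_name == "alice"
    fake_db.insert_account.assert_called_once()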
This problem indicates that Foo is doing too much. You need to separate concerns; then testing will become easy. If you had a Registrator, a LoginHandler and a DoSomething class, plus a central controller class that orchestrated the workflow, then everything could be tested separately.
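Sketched out, that split might look something like this (all names invented for illustration):

class Registrator:
    def register(self):
        return "alice"                       # would talk to the account store


class LoginHandler:
    def login(self, account_name):
        return {"account": account_name}     # would create a real session


class DoSomething:
    def run(self, session):
        return "did something as " + session["account"]


class Controller:
    """Orchestrates the workflow; in tests each collaborator can be a stub."""

    def __init__(self, registrator, login_handler, action):
        self.registrator = registrator
        self.login_handler = login_handler
        self.action = action

    def execute(self):
        account = self.registrator.register()
        session = self.login_handler.login(account)
        return self.action.run(session)

Each class gets its own small tests, and Controller is tested with stubbed collaborators, so a broken Registrator no longer takes every other test down with it.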