Is there a way to detect that Python is running a test? - python

I want to suppress certain warning messages when Python is running in a test context.
Is there any way to detect this globally in Python?

No, there is no reliable way to detect whether you are in a test context without adding unnecessary machinery, for example a state variable in the testing package that you set when the tests run. You would then have to import that module (or variable) into every one of your modules, which is far from elegant. Globals are evil.
The best way to filter output based on the execution context is to use the logging module: emit the non-essential warning messages at a low level (such as DEBUG) and raise the logging threshold when you run your tests.
Another option would be to add a dedicated level for all of the messages you explicitly want to ignore when running the tests.
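A minimal sketch of the logging-based approach above, with an illustrative logger name:
import logging

log = logging.getLogger("myapp")  # logger name is illustrative

def do_work():
    # Messages that are only noise during a test run go out at DEBUG...
    log.debug("retrying the flaky backend call")
    # ...while genuinely important ones stay at WARNING or above.
    log.warning("input file is wrong")

# In the test setup (conftest.py, setUpModule, a base TestCase, ...) raise the
# threshold so the DEBUG chatter is filtered out:
logging.getLogger("myapp").setLevel(logging.WARNING)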

Related

How can I detect when someone changes a python dict (and/or object)?

We have some unit tests (a few out of several thousand) that modify pytest fixtures, which leads to tests that fail for no apparent reason. We are running pytest -n 8 and the order of test execution isn't important to us (faster is better), but when one of these misbehaving tests runs before something that relies on that part of a fixture, we get a random unit test failure.
Is there some way to either make an object/dict immutable, or some way to raise an exception when that object is changed so that I can catch the offender in the act?
I want to protect something like this:
settings = load_settings(....)
settings = protect_settings(settings)
It seems a bit like the mock library: introspect the capabilities of the object, mirror the actual read operations, and raise exceptions for the set operations. I'm hoping this has already been built.
We are still on python2.7 :-(
There is a simple implementation of a FrozenDict here. You could use this, and you will get an error when you try to modify it because it does not implement __setitem__.
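A minimal sketch along those lines (not the linked recipe itself), which also works on Python 2.7:
try:
    from collections.abc import Mapping  # Python 3
except ImportError:
    from collections import Mapping      # Python 2.7

class FrozenDict(Mapping):
    # Read-only view of a dict: lookups work, item assignment raises TypeError.
    def __init__(self, data):
        self._data = dict(data)  # copy, so later changes to `data` don't leak in

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

# settings = FrozenDict(load_settings(...))
# settings['timeout'] = 5  -> raises TypeError, pointing you at the offending test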

Storing global configuration data in a pytest/xdist framework

I'm building a test framework using python + pytest + xdist + selenium grid. This framework needs to talk to a pre-existing custom logging system. As part of this logging process, I need to submit API calls to: set up each new test run, set up test cases within those test runs, and log strings and screenshots to those test cases.
The first step is to set up a new test run, and the API call for that returns (among other things) a Test Run ID. I need to keep this ID available for all test cases to read. I'd like to just stick it in a global variable somewhere, but running my tests with xdist causes the framework to lose track of the value.
I've tried:
Using a "globals" class; it forgot the value when using xdist.
Keeping a global variable inside my conftest.py file; same problem, the value gets dropped when using xdist. Also it seems wrong to import my conftest everywhere.
Putting a "globals" class inside the conftest; same thing.
At this point, I'm considering writing it to a temp file, but that seems primitive, and I think I'm overlooking a better solution. What's the most correct, pytest-style way to store and access global data across multiple xdist threads?
Might be worth looking into Proboscis, as it allows specific test dependencies and could be a possible solution.
Can you try config.cache? E.g.:
request.config.cache.set('run_id', run_id)
Refer to the documentation.
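A hedged sketch of how that could look in conftest.py; create_test_run() is a stand-in for your logging system's "new test run" API call:
import pytest

def pytest_configure(config):
    # Under xdist, worker processes have a 'workerinput' attribute
    # ('slaveinput' in older xdist versions); only the controller process
    # creates the run and stores its ID in the file-backed cache (.pytest_cache).
    if not hasattr(config, "workerinput"):
        run_id = create_test_run()  # hypothetical call returning the Test Run ID
        config.cache.set("my_framework/run_id", run_id)

@pytest.fixture(scope="session")
def run_id(request):
    # Each worker reads the ID back from the same on-disk cache.
    return request.config.cache.get("my_framework/run_id", None)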

Change logging level during testing execution

I have an application where I use the standard logging library, set up at the WARNING level.
When running unit tests I would like those errors and warnings not to appear (I am triggering them intentionally!), but I would like to keep the verbose output from unittest itself.
Is there any way I can run the application with one logging level (WARNING) normally and a different one (none, or CRITICAL?) during testing?
For example, I want my application in normal mode of operation to show the following:
=====
Application started
ERROR = input file is wrong
=====
However, when running my unit tests I do not want any of that output to appear: I will deliberately make the app fail in order to check the error handling, so showing the error messages is redundant and actually makes it harder to spot real problems.
Looking on Stack Overflow I found some similar questions, but none fixes my issue:
Is there a way to suppress printing that is done within a unit test? (the problem there is with print, not with logging)
Turn some print off in python unittest (only eliminates part of the test verbosity)
Any idea/help?
I'm still not 100% sure, but I think what you want is to have log statements in your app that get suppressed during testing.
I would use nosetests for this: it suppresses all stdout for passing tests and prints it for failing ones, which is just about perfect for your use case in my opinion.
A less good solution, in case I have misunderstood you, is to define a test case class that all of your tests inherit from (it should itself inherit from unittest.TestCase); it can have extra test methods or whatever. The key is that you can change the logging level to a higher or lower one in that base class, which only gets imported during testing, so you get special logging behavior during tests.
The behavior of nose, though, is the best: it still shows output on failing tests and captures print statements as well.
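A minimal sketch of the base-class approach described above; the CRITICAL threshold is just one possible choice:
import logging
import unittest

class QuietTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        logging.disable(logging.CRITICAL)  # silence everything up to CRITICAL

    @classmethod
    def tearDownClass(cls):
        logging.disable(logging.NOTSET)    # restore normal logging afterwards

class TestInputHandling(QuietTestCase):
    def test_bad_input_is_rejected(self):
        self.assertTrue(True)  # your real assertions go here

if __name__ == "__main__":
    unittest.main(verbosity=2)  # unittest's own verbose output is unaffected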

Python nose critical vs non-critical tests

I am still learning testing in Python and cannot find a straightforward answer to this question.
I am using python to drive selenium testing on my application. I run the test-suite with Nose.
Some tests are critical and must pass in order for code check-ins to be acceptable. Other tests will sometimes fail due to factors outside of our control and are not critical.
Is there a standard nose plugin that would allow me to specify which tests are not critical and give me a report with that breakdown? Or is there some other standard way of doing this?
You can always use the attrib plugin and decorate your critical tests with @attr('critical').
Within your CI, run nose twice: once with -a critical (and condition your check-in/deployment on that) and once with -a '!critical' for the rest.
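A short sketch of that approach, with made-up test names:
from nose.plugins.attrib import attr

@attr('critical')
def test_login_works():
    assert 1 + 1 == 2   # must pass before a check-in is accepted

def test_flaky_third_party_widget():
    assert True         # non-critical, exercised in the second pass

Running nosetests -a critical executes only the tagged tests, while nosetests -a '!critical' runs the remainder.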
First of all, it's probably a bad idea to have tests that "are allowed to fail because they are not critical".
You should try to mitigate the influence of external factors as much as possible, and have an environment that allows you to consistently run your tests and especially reproduce any errors you may find.
That said, reality can differ a lot from theory, so here we go:
Is there a standard nose plugin that would allow me to specify which tests are not critical and give me a report with that breakdown?
No, at least not among the built-in plugins or the third-party plugins mentioned on their website.
Or is there some other standard way of doing this?
For pyunit (and consequently nose), a test can only pass or fail; there is nothing in between.
To get a better overview when looking at test results, I would keep such tests in a separate test suite, independent from the regular "must-pass" tests.
Furthermore, if these unimportant tests are allowed to fail without blocking the check-in, it sounds fair to me that their execution should also be made optional.

Unit testing with nose: tests at compile time?

Is it possible for the nose unit testing framework to perform tests during the compilation phase of a module?
In fact, I'd like to test something with the following structure:
x = 123
# [x is used here...]
def test_x():
    assert x == 123
del x  # Deleted because I don't want to clutter the module with unnecessary attributes
nosetests tells me that x is undefined, as it apparently runs test_x() after importing the module. Is there a way of having nose perform test during the compilation phase while having the module free unnecessary resources after using them?
A simple way to handle this would be to have a TESTING flag, and write:
if not TESTING:
    del x
However, you won't really be properly testing your modules as the tests will be running under different circumstances to your code.
The proper answer is that you shouldn't really be bothering with manually cleaning up variables, unless you have actually had some major performance problems because of them. Read up on Premature Optimization, it's an important concept. Fix the problems you have, not the ones you maybe could have one day.
According to nose's main developer Jason Pellerin, the nose unit testing framework cannot run tests during compilation. This is a potential annoyance if both the module "construction" and the test routines need to access a certain variable (which would be deleted in the absence of tests).
One option is to discourage the user from using any of these unnecessarily saved variables by prefixing their names with "__" (this also works for variables used in class construction: they can be such "private" globals).
Another, perhaps cleaner option is to dedicate a module to the task: this module would contain variables that are shared by the module "itself" (i.e. without tests) and its tests (and that would not have to be shared were it not for the tests).
The problem with these options is that variables which could be deleted if there were no tests are instead kept in memory, just because it is more convenient for the test code to use them. At least, with the two options above, the user should not be tempted to use these variables, nor should they need to wonder what they are!
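A minimal sketch of the dedicated-module option, with illustrative file names:

# shared_constants.py -- values needed both while building the module and by its tests
X = 123

# mymodule.py
from shared_constants import X
LOOKUP = tuple(X + i for i in range(3))  # X used during module "construction"

# test_mymodule.py
from shared_constants import X
import mymodule

def test_x():
    assert X == 123
    assert mymodule.LOOKUP[0] == X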
