Change logging level during testing execution - python

I have an application where I use the standard logging library, configured at the WARNING level.
When running my unit tests I would like those errors and warnings not to appear (I am triggering them intentionally!), but I would like to keep the verbose output from unittest itself.
Is there a way to run the application at one logging level (WARNING) and the tests at a different one (none, or CRITICAL)?
For example, I want my application in normal mode of operation to show the following:
=====
Application started
ERROR = input file is wrong
=====
However, when running my unit tests I do not want any of that output to appear: I deliberately make the app fail in order to check the error handling, so printing the error messages is redundant and actually makes real problems harder to spot.
Looking through Stack Overflow I found some similar questions, but none of them fixes my issue:
Is there a way to suppress printing that is done within a unit test? (the problem there is with print, not with logging)
Turn some print off in python unittest (that only eliminates part of the test verbosity)
Any idea/help?

I'm still not 100% sure, but I think what you want is to have log statements in your app that get suppressed during testing.
I would use Nosetests for this: it suppresses all stdout for passing tests and prints it for failing ones, which is just about perfect for your use case in my opinion.
A less good solution, in case I have misunderstood you, is to define a base test case class that all of your tests inherit from; it can have extra helper methods or whatever you need (it should itself inherit from unittest.TestCase). The key is that this class is only imported during testing, so in it you can change the logging level to a higher (or lower) one, which gives you special logging behavior during tests.
The behavior of nose, though, is still the best option: it shows output on failing tests and captures print statements as well.
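A minimal sketch of that base-class idea (the class name here is just a placeholder; logging.disable is the standard-library way to raise the effective threshold):

import logging
import unittest

class QuietTestCase(unittest.TestCase):
    # Hypothetical base class: test cases inherit from this instead of
    # unittest.TestCase directly, so the suppression only applies to tests.
    @classmethod
    def setUpClass(cls):
        # Silence all log records of severity CRITICAL and below
        # for the duration of the tests.
        logging.disable(logging.CRITICAL)

    @classmethod
    def tearDownClass(cls):
        # Restore normal logging behavior afterwards.
        logging.disable(logging.NOTSET)

The application keeps its usual WARNING configuration; only test runs raise the bar.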


Python: How to access test result variables from unittest module

For a specific program I'm working on, we need to evaluate some code, then run a unit test, and then, depending on whether or not the test failed, do A or B.
But the usual self.assertEqual(...) seems to display the results (fail, errors, success) instead of saving them somewhere, so I can't access that result.
I have been digging through the unittest modules for days but I can't figure out where the magic happens, or whether there is a variable I can inspect to know the result of the test without having to read the screen (making the program scan the output for the words "error" or "failed" doesn't sound like a good solution).
After some days of research, I sent an email to help@python.org and got the perfect solution to my issue.
The answer I got was:
I suspect that the reason that you're having trouble getting unittest
to do that is that that's not the sort of thing that unittest was
written to do. A hint that that's the case seems to me to be that over
at the documentation:
https://docs.python.org/3/library/unittest.html
there's a section on the command-line interface but nothing much about
using the module as an imported module.
A bit of Googling yields this recipe:
http://code.activestate.com/recipes/578866-python-unittest-obtain-the-results-of-all-the-test/
Which looks as though it might be useful to you but I can't vouch for
it and it seems to involve replacing one of the library's files.
(Replacing one of the library's files is perfectly reasonable in my
opinion. The point of Python's being open-source is that you can hack
it for yourself.)
But if I were doing what you're describing, I'd probably write my own
testing code. You could steal what you found useful from unittest
(kind of the inverse of changing the library in place). Or you might
find that your needs are sufficiently simple that a simple file of
testing code was sufficient.
If none of that points to a solution, let us know what you get and
I'll try to think some more.
Regards, Matt
After modifying unittest's result.py module, I'm able to access the value of the test (True, False, or Error).
Thank you very much, Matt.
P.S. I edited my question so it was more clear and didn't have unnecessary code.
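For completeness, a way to get at the result object without touching result.py is to drive unittest programmatically; its runners return a TestResult you can inspect. A minimal sketch (the test module and the do_a/do_b helpers are hypothetical):

import unittest

from my_tests import MyTestCase  # hypothetical module containing the tests

suite = unittest.TestLoader().loadTestsFromTestCase(MyTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)

if result.wasSuccessful():
    do_a()  # hypothetical "test passed" branch
else:
    # result.failures and result.errors are lists of (test, traceback) pairs
    do_b()  # hypothetical "test failed" branch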
You can use pdb to debug this issue; in the test, simply add these two lines to halt execution and begin debugging.
import pdb
pdb.set_trace()
Now, for good testing practice you want deterministic test results; a test that fails only sometimes is not a good test. I recommend mocking the random function and using data sets that capture the errors you find.
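A sketch of that mocking idea, assuming the code under test calls random.random() (the roll function is a stand-in; patch whatever your code actually uses):

import random
import unittest
from unittest import mock

def roll():
    # Stand-in for the code under test.
    return "high" if random.random() > 0.5 else "low"

class RollTest(unittest.TestCase):
    def test_roll_is_deterministic(self):
        # Pin the random value so the outcome is always the same.
        with mock.patch("random.random", return_value=0.9):
            self.assertEqual(roll(), "high")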

Validating set up and tear down before and after running tests in pytest

I have some resource creation and deletion code that needs to run before and after certain tests, which I've put into a fixture using yield in the usual way. However, before running the tests, I want to verify that the resource creation has happened correctly, and likewise after the deletion, I want to verify that it has happened. I can easily stick asserts into the fixtures themselves, but I'm not sure this is good pytest practice, and I'm concerned that it will make debugging and interpreting the logs harder. Is there a better or canonical way to do validation in pytest?
I encountered something like this recently, although I was using unittest instead of pytest.
What I ended up doing was something similar to method-level setup/teardown. That way, future test functions would never be affected by past test functions.
For my use case, I loaded my test fixtures in this setup function, then ran a couple of basic checks against those fixtures to ensure their validity (as part of the setup itself). This, I realized, added a bit of time to each test in the class, but ensured that all the fixture data was exactly what I expected it to be (we were loading data into a dockerized Elasticsearch container). The extra running time is something you can make a judgement call about.
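A rough sketch of that method-level approach; create_resource, delete_resource and the check helpers are placeholders for whatever your setup actually does:

import unittest

class ResourceTest(unittest.TestCase):
    def setUp(self):
        # create_resource() is a placeholder for your own creation code.
        self.resource = create_resource()
        # Sanity-check the fixture before every test: a broken fixture
        # fails fast here instead of producing confusing test failures.
        self.assertTrue(self.resource.is_ready())

    def tearDown(self):
        delete_resource(self.resource)
        # Verify that the cleanup actually happened.
        self.assertFalse(resource_exists(self.resource))

    def test_uses_the_resource(self):
        self.assertIsNotNone(self.resource)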

Python nose critical vs non-critical tests

I am still learning testing in Python and cannot find a straightforward answer to this question.
I am using python to drive selenium testing on my application. I run the test-suite with Nose.
Some tests are critical and must pass in order for code check-ins to be acceptable. Other tests will sometimes fail due to factors outside of our control and are not critical.
Is there a standard Nose Plugin that would allow me to specify which test are not critical and give me a report with that break down? Or is there some other standard way of doing this?
You can always use the attrib plugin and decorate your critical tests with @attr('critical').
Within your CI, run nose twice: once with -a critical=True (and gate your check-in/deployment on that run) and once with -a critical=False.
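A sketch of that approach; the test names are made up, and the nose invocations in the comments follow the attrib plugin's documented -a syntax:

from nose.plugins.attrib import attr

@attr('critical')
def test_login_works():
    # Must pass before a check-in is accepted.
    assert True

def test_flaky_third_party_widget():
    # Non-critical; failures here should not block a check-in.
    assert True

# In CI you would then run, for example:
#   nosetests -a critical       (only the critical tests)
#   nosetests -a '!critical'    (everything not marked critical)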
First of all, it's probably a bad idea to have tests that "are allowed to fail because they are not critical".
You should try to mitigate the influence of external factors as much as possible, and have an environment that allows you to run your tests consistently and, especially, to reproduce any errors you may find.
That said, reality can differ a lot from theory, so here we go:
Is there a standard Nose Plugin that would allow me to specify which
test are not critical and give me a report with that break down?
No, at least not among the built-in plugins or the third-party plugins mentioned on the nose website.
Or is there some other standard way of doing this?
For pyunit (and consequently nose), a test can only pass or fail; there is nothing in between.
To get a better overview when looking at test results, I would keep such tests in a separate test suite, independent from the regular "must-pass" tests.
Furthermore, if these unimportant tests are allowed to fail without blocking the check-in, it sounds fair to me that their execution should also be made optional.

Is there a way to detect that Python is running a test?

I want to suppress certain warning messages when Python is running in a test context.
Is there any way to detect this globally in Python?
No, there is no clean global way to detect that you are in a test context; anything you build for it adds a lot of unnecessary machinery. For example, you could keep a state variable in the testing package that you set when running your tests, but then you would have to import that module (or variable) in all of your modules, which would be far from elegant. Globals are evil.
The best way to filter output based on the execution context is to use the logging module: emit the unwanted warning messages at a low level (like DEBUG) and ignore them when you run your tests.
Another option would be to add a dedicated level for all of the messages you explicitly want to ignore when running the tests.
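A minimal sketch of that logging-based idea, assuming the code logs through the standard logging module (load_config is a made-up example):

import logging

log = logging.getLogger(__name__)

def load_config(path):
    # Demoted to DEBUG: the message only shows up if a run explicitly
    # opts in to that verbosity.
    log.debug("config file %s missing, using defaults", path)
    return {}

# Normal application start-up might use:
#     logging.basicConfig(level=logging.DEBUG)    # show everything
# while the test runner's setup simply keeps the threshold higher:
#     logging.basicConfig(level=logging.WARNING)  # DEBUG messages are dropped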

Does Django's Unit Testing Raise Warnings to Exceptions?

I am using Django's unit testing apparatus (manage.py test), which throws an error and halts when the code generates a warning. The same code, tested with the standard Python unittest module, generates warnings but continues execution through them.
A little research shows that Python can be set to raise warnings to exceptions, which I suppose would cause the testing framework to think an error had occurred. Unfortunately, the Django documentation on testing is a little light on the definition of an "error", or on how to modify the handling of warnings.
So: is the Django unit testing framework set up to raise warnings to errors by default? Is there some facility in Django for changing this behavior? If not, does anyone have any suggestions for how I can get Django to print the warnings but continue execution? Or have I completely misdiagnosed the problem?
UPDATE:
The test code is halting on warnings thrown by calls on MySQLdb. Those calls are made by a module which throws the same warnings when tested under the Python unittest framework, but does not halt. I'll think about an efficient way of trying to replicate the situation in code terse enough to post.
ANSWER:
A little more research reveals this behavior is related to Django's MySQL backend:
/usr/...django/.../mysql/base.py:
if settings.DEBUG:
    ...
    filterwarnings("error", category=Database.Warning)
When I change settings.py so DEBUG = False, the code throws the warning but does not halt.
I hadn't previously encountered this behavior in Django because my database calls go through a backend of my own. Since I wasn't calling the Django backend, the handling of warnings was never reset, and the code continued despite the warnings. The Django test framework surely calls the Django backend -- it does all sorts of things with the database -- and that call resets the warning handling before my code runs.
Given the updated info, I'm inclined to say that this is the right thing for Django to be doing; MySQL's warnings can indicate any number of things up to and including loss of data (e.g., MySQL will warn and silently truncate if you try to insert a value larger than a column can hold), and that's the sort of thing you'd want to find out about when testing. So probably your best bet is to look at the warnings it's generating and change your code so that it no longer causes those warnings to happen.
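If a particular test genuinely needs to keep running through such a warning, one option (a sketch using the standard warnings machinery, not something from the answer above; insert_oversized_value is a hypothetical helper) is to relax the filter locally:

import warnings

import MySQLdb

def test_truncating_insert():
    with warnings.catch_warnings():
        # Locally undo the "error" filter that Django's MySQL backend
        # installs when settings.DEBUG is True; outside this block the
        # warning is promoted to an exception again.
        warnings.simplefilter("always", MySQLdb.Warning)
        insert_oversized_value()  # hypothetical code that triggers the MySQL warning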
