I am using Django's unit testing apparatus (manage.py test), which throws an error and halts when the code generates a warning. The same code, when tested with the standard Python unittest module, generates the warnings but continues executing through them.
A little research shows that Python can be set to raise warnings as exceptions, which I suppose would cause the testing framework to think an error had occurred. Unfortunately, the Django documentation on testing is a little light on the definition of an "error", or on how to modify the handling of warnings.
So: Is the Django unit testing framework set up to raise warnings to errors by default? Is there some facility in Django for changing this behavior? If not, does anyone have any suggestions for how I can get Django to print out the warnings but continue code execution? Or have I completely misdiagnosed the problem?
UPDATE:
The test code is halting on warnings thrown by calls on MySQLdb. Those calls are made by a module which throws the same warnings when tested under the Python unittest framework, but does not halt. I'll think about an efficient way of trying to replicate the situation in code terse enough to post.
ANSWER:
A little more research reveals this behavior is related to Django's MySQL backend:
/usr/...django/.../mysql/base.py:
if settings.DEBUG:
    ...
    filterwarnings("error", category=Database.Warning)
When I change settings.py so DEBUG = False, the code throws the warning but does not halt.
I hadn't previously encountered this behavior in Django because my database calls are generated by a backend of my own. Since I didn't call the Django backend, the handling of the warnings was never reset, and the code continued despite the warnings. The Django test framework surely calls the Django backend -- it does all sorts of things with the database -- and that call resets the warning handling before my code is called.
Given the updated info, I'm inclined to say that this is the right thing for Django to be doing; MySQL's warnings can indicate any number of things up to and including loss of data (e.g., MySQL will warn and silently truncate if you try to insert a value larger than a column can hold), and that's the sort of thing you'd want to find out about when testing. So probably your best bet is to look at the warnings it's generating and change your code so that it no longer causes those warnings to happen.
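If, after weighing that, you still want a particular test run to see the warnings printed rather than raised, a minimal sketch like this should relax the filter the backend installs (Database.Warning in that snippet should be MySQLdb.Warning, assuming the MySQLdb driver):
import warnings
import MySQLdb

# Django's MySQL backend registered an "error" filter for these warnings when
# DEBUG was True; inserting an "always" filter in front of it makes them print
# instead of raise for the rest of the process.
warnings.filterwarnings("always", category=MySQLdb.Warning)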
Related
When running some code I had written I was surprised to see that a function threw an exception that was not caught and so my code crashed. The thing is, Spyder never informed me that the function could even throw an exception.
Is there a setting somewhere that I have to turn on to be informed of this?
Python isn't Java. Your IDE will not warn you about uncaught exceptions, because Python's dynamic nature means that exceptions could be raised (or caught) almost anywhere, and no amount of static analysis will work for all -- or even most -- cases. Often when you develop Flask or Django applications, you actually want exceptions to float all the way up to the "root" exception handlers.
Bottom line: No Python IDE is going to do this, and it would not be expected or considered generally desirable in Python programming.
I'm trying to write a highly-reliable piece of code in Python. The most common issue I run into is that after running for a while, some edge case will occur and raise an exception that I haven't handled. This most often happens when using external libraries - without reading the source code, I don't know of an easy way to get a list of all exceptions that might be raised when using a specific library function, so it's hard to know what to handle. I understand that it's bad practice to use catch-all handlers, so I haven't been doing that.
Is there a good solution for this? It seems like it should be possible for a static analysis tool to check that all exceptions are handled, but I haven't found one. Does this exist? If not, why? (is it impossible? a bad idea? etc) I especially would like it to analyze imported code for the reason explained above.
"it's bad practice to use catch-all handlers" to ignore exceptions:
Our web service has a catch-all except that wraps the main loop:
except:
    log_exception()
    attempt_recovery()
This is good, as it notifies us (necessary) of the unexpected error and then tries to recover (not necessary). We can then go look at those logs and figure out what went wrong so we can prevent it from hitting our general exception handler again.
This is what you want to avoid:
except:
    pass
Because it ignores the error... then you don't know an error happened and your data may be corrupted/invalid/gone/stolen by bears. Your server may be up/down/on fire. We have no idea because we ignored the exception.
Python doesn't require declaring which exceptions might be thrown, so there is no check that covers every exception a module might raise, but most modules will give you some idea in their docs of what you should be ready to handle. Depending on your service, when it gets an unhandled exception, you might want to:
Log it and crash
Log it and attempt to continue
Log it and restart
Notice a trend? The action changes, but you never want to ignore it.
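For the "log it and crash" flavour, a minimal sketch of a program entry point might look like this (main is a stand-in for your real work):
import logging
import sys

logger = logging.getLogger(__name__)

def main():
    pass  # the real work of the service goes here

if __name__ == "__main__":
    try:
        main()
    except Exception:
        logger.exception("Fatal error, shutting down")
        sys.exit(1)  # log it and crash: record the failure, then exit nonzero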
Great question.
You can try approaching the problem statically (by, for instance, introducing a custom flake8 rule?), but I think it's really a problem of testing and test-coverage scope. I would approach it by adding more "negative path"/"error handling" checks/tests to the places where third-party packages are used, introducing mock side effects whenever needed, and monitoring coverage reports at the same time.
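For the mock-side-effect part, a self-contained sketch might look like the following; fetch_summary is a toy stand-in for your own wrapper around a third-party client:
import unittest
from unittest import mock  # on older Pythons: pip install mock and "import mock"

def fetch_summary(client):
    """Toy wrapper around a third-party call, with an error-handling path."""
    try:
        return client.get("/summary")
    except IOError:
        return None  # the "negative path" we want the test to cover

class NegativePathTests(unittest.TestCase):
    def test_fetch_summary_survives_connection_error(self):
        fake_client = mock.Mock()
        fake_client.get.side_effect = IOError("connection refused")
        self.assertIsNone(fetch_summary(fake_client))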
I would also look into the Mutation Testing idea (check out the Cosmic Ray Python package). I am not sure whether mutants can be configured to also throw exceptions, but see if it could be helpful.
Instead of trying to handle all exceptions, which is very hard as you described, why don't you catch them all but exclude some, for example the KeyboardInterrupt you mentioned in the comments?
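A sketch of that approach is below; note that except Exception already lets KeyboardInterrupt and SystemExit through (they derive from BaseException), but listing them explicitly makes the intent obvious:
import logging

logger = logging.getLogger(__name__)

def run_step(step):
    """Run any zero-argument callable, logging truly unexpected failures."""
    try:
        return step()
    except (KeyboardInterrupt, SystemExit):
        raise  # excluded: let Ctrl-C and normal shutdown behave as usual
    except Exception:
        logger.exception("Unexpected error in %r, continuing", step)
        return None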
I have an application that uses the standard logging library, with the level set to WARNING.
When running unit tests I would like to prevent those errors and warnings from appearing (I am triggering them intentionally!), but I would like to keep the verbose output from the unit tests themselves.
Is there any way to run the application with one logging level (WARNING) and the tests with a different one (none, or CRITICAL)?
For example, I want my application in normal mode of operation to show the following:
=====
Application started
ERROR = input file is wrong
=====
However, when running my unit tests I do not want any of those outputs to appear, since I will deliberately make the app fail in order to check the error tracking, so showing the error messages would be redundant and would actually make it harder to spot real problems.
Looking on Stack Overflow I found some similar questions, but none that fix my issue:
Is there a way to suppress printing that is done within a unit test? -- but there the problem is with print, not with logging
Turn some print off in python unittest -- again, that is just about eliminating part of the test verbosity
Any idea/help?
I'm still not 100% sure -- I think what you want is to have log statements in your app that get suppressed during testing.
I would use Nosetests for this -- it suppresses all stdout for passing tests and prints it for failing ones, which is just about perfect for your use case in my opinion.
A less good solution, just in case I don't understand you, is to define a test case class that all of your tests inherit from -- it can have extra test methods or whatever (it should itself inherit from unittest.TestCase). The key, though, is that in that file -- which only gets imported during testing -- you can change the logging level to a higher or lower one, which gives you special logging behavior during tests.
The behavior of nose, though, is the best -- it still shows output on failing tests and captures print statements as well.
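For the base-class approach, a minimal sketch (the class name is just a suggestion; setUpClass/tearDownClass need Python 2.7+ or unittest2) could be:
import logging
import unittest

class QuietTestCase(unittest.TestCase):
    """Base class for tests that should not show application log output."""

    @classmethod
    def setUpClass(cls):
        # Silence every message up to and including CRITICAL while tests run
        logging.disable(logging.CRITICAL)

    @classmethod
    def tearDownClass(cls):
        # Restore normal logging afterwards
        logging.disable(logging.NOTSET)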
I have a decent-sized piece of code in Python which I changed a bit, and now I see that, when executing it, the script does not bail on language errors like missing function definitions.
I did not think it was possible to continue running a script with a missing function definition. I used a function in one place with the intention of actually defining it before running, but forgot about it and ran the code. To my surprise I just got the following line printed to the console --
Worker instance has no attribute update_parse_status
Where Worker is the class I was supposed to add the update_parse_status() method to.
I realized that I had a try with a generic catch-all exception handler around the code in question, like this --
try:
    Worker.update_parse_status()
except Exception as e:
    print e
So Python just throws an AttributeError and I was unknowingly catching it. It taught me a valuable lesson not to have catch-all exception handlers. Also, coming from a compiled language, is there a better way to handle this? I mean, can I somehow make sure Python just exits on language errors? It would be very helpful to at least weed out the obvious bugs (though I understand that bad code got me into this situation).
In Python, all names are looked up at runtime. Therefore, what you refer to as "language" errors are no different from runtime errors.
This makes Python different from many other languages. One advantage of that fact is that you can easily customize the way names are looked up (e.g. by overriding a class's __getattr__).
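As a toy illustration of that hook (the warning call is just one thing you could do there):
import logging

class Worker(object):
    def __getattr__(self, name):
        # Runs only when normal attribute lookup fails, i.e. at call time
        logging.warning("Worker has no attribute %r", name)
        raise AttributeError(name)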
You can make use of analyzers (e.g. pyflakes is pretty cool, but there are many others) to detect some of the errors before running your program, but no tool will be able to detect all of them.
You already answered your own question.
Don't catch exceptions if you don't need to.
A blanket except Exception is dangerous.
If you must catch an exception, use some logging mechanism to track it down.
Finally, to make sure Python just exits on errors, don't catch exceptions where doing so is not needed to guarantee the flow of the program.
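In other words, keep each except as narrow as the failure you actually expect; a small sketch (parse_timeout and its default are invented for illustration):
import logging

logger = logging.getLogger(__name__)
DEFAULT_TIMEOUT = 30  # illustrative default

def parse_timeout(raw_value):
    """Catch only the failure you expect; let anything else propagate."""
    try:
        return int(raw_value)
    except ValueError:
        logger.warning("Bad timeout value %r, using default", raw_value)
        return DEFAULT_TIMEOUT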
Python, being interpreted, behaves this way. The best way to counter it is unit testing. The unittest module can help here, and setuptools can run these tests for you. You can package them with your source distribution as well.
Of course, as you discovered, never use a catch-all exception handler: it masks errors that unit tests could catch, and it makes debugging harder (or even impossible). You can also run your code under a debugger while developing it, so problems like this can be tracked down more easily.
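For instance, a minimal test like this (assuming Worker can be imported from your own module) would have surfaced the missing method immediately instead of letting the bare except swallow it:
import unittest
from mymodule import Worker  # hypothetical import path for your class

class WorkerTests(unittest.TestCase):
    def test_update_parse_status(self):
        worker = Worker()
        # Raises AttributeError (and fails the test) if the method is missing
        worker.update_parse_status()

if __name__ == "__main__":
    unittest.main()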
You can also do static analysis, using tools such as pylint or pyflakes.
I'm writing tests for a Django application that uses an external data source. Obviously, I'm using fake data to test all the inner workings of my class but I'd like to have a couple of tests for the actual fetcher as well. One of these will need to verify that the external source is still sending the data in the format my application expects, which will mean making the request to retrieve that information in the test.
Obviously, I don't want our CI to come down when there is a network problem or the data provider has a spot of downtime. In this case, I would like to throw a warning that skips the rest of that test method and doesn't contribute to an overall failure. This way, if the data arrives successfully I can check it for consistency (failing if there is a problem) but if the data cannot be fetched it logs a warning so I (or another developer) know to quickly check that the data source is ok.
Basically, I would like to test my external source without being dependent on it!
Django's test suite uses Python's unittest module (at least, that's how I'm using it), which looks useful, given that its documentation describes skipping tests and expected failures. This feature is apparently 'new in version 2.7', which explains why I can't get it to work - I checked the version of unittest I have installed from the console and it appears to be 1.63!
I can't find a later version of unittest on PyPI, so I'm wondering where I can get hold of the unittest version described in that document and whether it will work with Django (1.2).
I'm obviously open to recommendations / discussion on whether or not this is the best approach to my problem :)
[EDIT - additional information / clarification]
As I said, I'm obviously mocking the dependency and doing my tests on that. However, I would also like to be able to check that the external resource (which typically is going to be an API) still matches my expected format, without bringing down CI if there is a network problem or their server is temporarily down. I basically just want to check the consistency of the resource.
Consider the following case...
If you have written a Twitter application, you will have tests for all your application's methods and behaviours - these will use fake Twitter data. This gives you a complete, self-contained set of tests for your application. The problem is that this doesn't actually check that the application works because your application inherently depends on the consistency of Twitter's API. If Twitter were to change an API call (perhaps change the URL, the parameters or the response) the application would stop working even though the unit tests would still pass. (Or perhaps if they were to completely switch off basic authentication!)
My use case is simpler - I have a single XML resource that is used to import information. I have faked the resource and tested my import code, but I would like to have a test that checks that the format of that XML resource has not changed.
My question is about skipping tests in Django's test runner so I can throw a warning if the resource is unavailable without the tests failing, specifically getting a version of Python's unittest module that supports this behaviour. I've given this much background information to allow anyone with experience in this area to offer alternative suggestions.
Apologies for the lengthy question, I'm aware most people won't read this now.
I've 'bolded' the important bits to make it easier to read.
I created a separate answer since your edit invalidated my last answer.
I assume you're running on Python version 2.6 - I believe the changes that you're looking for in unittest are available in Python version 2.7. Since unittest is in the standard library, updating to Python 2.7 should make those changes available to you. Is that an option that will work for you?
One other thing that I might suggest is to maybe separate the "external source format verification" test(s) into a separate test suite and run that separately from the rest of your unit tests. That way your core unit tests are still fast and you don't have to worry about the external dependencies breaking your main test suites. If you're using Hudson, it should be fairly easy to create a separate job that will handle those tests for you. Just a suggestion.
The new features in unittest in 2.7 have been backported to 2.6 as unittest2. You can just pip install it and substitute unittest2 for unittest, and your tests will work as they did before, plus you get the new features without upgrading to 2.7.
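A sketch of the skip-on-unavailable idea with unittest2 (the URL and the expected root element are placeholders):
import urllib2
import unittest2 as unittest  # the 2.6 backport of the 2.7 unittest features

FEED_URL = "http://example.com/feed.xml"  # placeholder URL

class ExternalFormatTests(unittest.TestCase):
    def test_remote_xml_format(self):
        try:
            payload = urllib2.urlopen(FEED_URL, timeout=5).read()
        except (urllib2.URLError, IOError):
            self.skipTest("external XML source unreachable, skipping format check")
        self.assertIn("<expected_root", payload)  # placeholder assertion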
What are you trying to test -- the code in your Django application(s), or the dependency? Can you just mock whatever that external dependency is? If you just want to test your Django application, then I would say mock the external dependency, so that your tests are not dependent upon that external resource's availability.
If you can post some code of your "actual fetcher" maybe you will get some tips on how you could use mocks.
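In the meantime, here is a generic sketch of mocking the fetch (import_xml is a toy stand-in for the real fetch-and-import routine):
import urllib2
from mock import patch  # the standalone mock package for Python 2.x
import unittest2 as unittest

def import_xml(url):
    """Toy stand-in for the real fetch-and-import routine."""
    payload = urllib2.urlopen(url).read()
    return "<root" in payload

class ImporterTests(unittest.TestCase):
    @patch("urllib2.urlopen")
    def test_import_parses_xml(self, mock_urlopen):
        # The fake response means no network access is needed for this test
        mock_urlopen.return_value.read.return_value = "<root><item/></root>"
        self.assertTrue(import_xml("http://example.com/feed.xml"))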