Python unittest framework: Test description

What's the best practice for providing the description of a test script in Python?
Obviously I can put comments below the test case, but I wanted to know whether there's any standard practice (any methods I should write) for providing the description of the test case (detailed information on what the test case is supposed to do).
Is this how you would put the test description?:
import unittest

class TestFoo(unittest.TestCase):
    def testfoo1(self):
        """
        test description:
        step1:
        step2:
        """
Any suggestions/references would be appreciated.

If you are using Python's unittest module, the test method docstring is the standard place to put that information. unittest will use that docstring to format output, etc.

In the unittest framework, you have the shortDescription method:
shortDescription()
Returns a description of the test, or None if no description has been provided. The default implementation of this method returns the first line of the test method’s docstring, if available.
So, in fact, the method docstring is a fine place for it. You may have to inherit from TestCase in your class declaration for the runner to work like that, though.
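For illustration, a minimal sketch (reusing the class from the question; the assertion is a placeholder) of how the first docstring line shows up with a verbose runner:

import unittest

class TestFoo(unittest.TestCase):
    def testfoo1(self):
        """Check that foo handles a vanilla input.

        Step 1: build the input.
        Step 2: call foo and check the result.
        """
        self.assertTrue(True)  # placeholder assertion

if __name__ == "__main__":
    # With verbosity=2 (or python -m unittest -v), the runner prints the
    # first docstring line, taken from shortDescription(), next to the test name.
    unittest.main(verbosity=2)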
For best practice: name the test case (class) and the test methods in a concise but useful fashion, sufficient for developers to get a high-level idea of where something is going wrong should that particular test fail. A prerequisite to this is that each test method should only be testing one thing, rather than asserting on a whole bunch of different things.
With sensible test names, usually a docstring with "detailed information on what the test case is supposed to do" would not be necessary. If you have existing large tests which check many things, you may want to split them up into a bunch of smaller tests which each assert on one and only one individual thing.

The best way would be for the name of the class to be descriptive enough for a description not to be needed.
Barring that, the docstring is the best approach.

Related

Cover only directly called code from test function in pytest coverage

I need to cover only code that is called directly from the test function; every nested method call must be marked as missed. This should help me ensure that every unit/method has its own test.
Example: the test function calls method A, and method A calls method B internally. I then want method A marked as covered and method B marked as missed, since it was not called directly from the test function.
Does anybody know of a plugin, or have any idea how to do that?
I have tried googling and reading the coverage docs; the only thing slightly related is dynamic contexts, but those show which methods ran each line. This differs from what I want, because then I would have to check each line's caller by hand. I just want the lines that are not called directly to be marked red.
Coverage.py doesn't have a way to do this. There's discussion about marking which product functions are intended to be covered by each test as a possible future feature: https://github.com/nedbat/coveragepy/issues/696

Parameterized skipping for Python unittests: Best practices?

I have the following scenario:
I have a list of "dangerous patterns"; each is a string that contains dangerous characters, such as "%s with embedded ' single quote", "%s with embedded \t horizontal tab" and similar (it's about a dozen patterns).
Each test is to be run
once with a vanilla pattern that does not inject dangerous characters (i.e. "%s"), and
once for each dangerous pattern.
If the vanilla test fails, we skip the dangerous pattern tests on the assumption that they don't make much sense if the vanilla case fails.
Various other constraints:
Stick with the batteries included in Python as far as possible (i.e. unittest is hopefully enough, even if nose would work).
I'd like to keep the contract of unittest.TestCase as far as possible. I.e. the solution should not affect test discovery (which might find everything that starts with test, but then there's also runTest, which may be overridden in the constructor, and more variation).
I tried a few solutions, and I am not happy with any of them:
Writing an abstract class causes unittest to try and run it as a test case (because it quacks like a test case). This can be worked around, but the code gets ugly fast. Also, a whole lot of functions need to be overridden, and for several of them the documentation is a bit unclear about what properties need to be implemented in the base class. Plus, test discovery would have to be replicated, which means duplicating code from inside unittest.
Writing a function that executes the tests as subTests, to be called from each test function (see the sketch after this list). This requires boilerplate in every test function and gives just a single test result for the entire series of tests.
Writing a decorator. This avoids the test-case discovery problem of the abstract class approach, but has all the other problems.
Writing a decorator that takes a TestCase and returns a TestSuite. This has worked best so far, but I don't know whether I can add the same TestCase object multiple times, or whether TestCase objects can be copied (I have control over all of them, but I don't know what the base class does or expects in this regard).
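For reference, a minimal sketch of the subTest variant described above; the pattern list and the assertion are placeholders standing in for the real test logic:

import unittest

VANILLA = "%s"
DANGEROUS_PATTERNS = [
    "%s with embedded ' single quote",
    "%s with embedded \t horizontal tab",
]

class TestEscaping(unittest.TestCase):
    def _check(self, pattern):
        # Placeholder assertion; the real test would exercise the code
        # under test with the given pattern.
        self.assertIsInstance(pattern % "value", str)

    def test_patterns(self):
        # Vanilla case first: if this fails, the method stops here and the
        # dangerous patterns are never attempted.
        self._check(VANILLA)
        # Each dangerous pattern is reported as its own subtest, but the
        # whole series still counts as a single test method.
        for pattern in DANGEROUS_PATTERNS:
            with self.subTest(pattern=pattern):
                self._check(pattern)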
What's the best approach?

Test part of complex structure with unittest and mocks

What would be the best way to test the following:
We have a large, complex class that we'll call ValueSetter, which accepts a string, extracts some data from it and assigns that data to several variables such as
message_title, message_content, message_number
To do this it uses another class, called Rule, where the rules for each particular case are described with regular expressions.
What is needed:
Because each Rule has about 5 cases to match, we want to test each of them separately.
So far we only need to assert that a particular Rule returns the correct string in each case. What is the best way in this situation?
Try to test Rule and ValueSetter each in their own test. Test that the Rule really does what you think in the 5 cases you describe in your question. Then, when you test your ValueSetter, just assume that Rule does what you think and set, for example, message_title, message_content and message_number directly. That way you inject the information the way Rule would have provided it.
This is what you usually do in a unit test. In order to test whether everything works in conjunction, you would usually write a functional test that exercises the application from a higher/user level.
If you cannot construct a ValueSetter without using a Rule, then just create a new class for the test that inherits from ValueSetter and overrides the __init__ method. This way you will be able to get a 'blank' object and set the member variables directly, as you expect them to be.
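A rough sketch of that idea; the import path, Rule.match, and the assertions below are assumptions used only to show the structure:

import unittest

from myproject import Rule, ValueSetter  # hypothetical import path


class TestRule(unittest.TestCase):
    """Exercise each of Rule's regex cases on its own."""

    def test_title_case(self):
        rule = Rule()
        # Hypothetical: assert that this case extracts the expected string.
        self.assertEqual(rule.match("Title: hello"), "hello")


class BlankValueSetter(ValueSetter):
    """Test-only subclass that skips the real __init__, so no Rule is needed."""

    def __init__(self):
        pass  # leave the object blank; the test sets attributes directly


class TestValueSetter(unittest.TestCase):
    def test_with_injected_values(self):
        setter = BlankValueSetter()
        # Inject the data that Rule would normally have produced.
        setter.message_title = "hello"
        setter.message_content = "body"
        setter.message_number = 42
        # Hypothetical: assert on whatever ValueSetter does with these values.
        self.assertEqual(setter.message_title, "hello")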

How much should one TestCase cover?

I've never written a proper test until now, only small programs that I would dispose of after the test succeeded. I was looking through Python's unittest module and tutorials around the web, but something's not clear to me.
How much should one TestCase cover? I've seen examples on the web that have TestCase classes with only one method, as well as classes that test almost the entire available functionality.
In my case, I'm trying to write a test for a simple bloom filter. How do you think I should organize my test cases?
To put it simply: one unit test should cover a single feature of your program. That's all there is to it. That's why they're called unit tests.
Of course, what we understand by a feature may vary. Think about the smallest parts of your program that might break or not work as expected. Think about the business requirements of your code. Those are the parts you want covered, each by a dedicated unit test.
Usually, unit tests are small, isolated and atomic. They should be easy to understand, they should fail or pass independently of one another, and they should execute fast. A fairly good indication of a proper unit test is a single assertion - if you find yourself writing more, you are probably testing too much, and it's a sign you need more than one test for a given feature. However, this is not a strict rule - the more complex the code involved, the more complex the unit tests tend to be.
When writing tests, it's easy to split your code's functionality and test those separate parts (this should give you an idea of the atomicity of your tests). For example, if you have a method that verifies input, then calls a service and finally returns a result, you usually want all three steps (verify, call, return) covered.
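As a sketch of that verify/call/return split (the handle function and the service are made up for the example):

import unittest
from unittest import mock


def handle(request, service):
    """Hypothetical unit under test: verify input, call a service, return its result."""
    if not request:
        raise ValueError("empty request")
    return service.process(request)


class TestHandle(unittest.TestCase):
    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            handle("", mock.Mock())

    def test_calls_service_with_request(self):
        service = mock.Mock()
        handle("request", service)
        service.process.assert_called_once_with("request")

    def test_returns_service_result(self):
        service = mock.Mock()
        service.process.return_value = "ok"
        self.assertEqual(handle("request", service), "ok")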
I would create one TestCase with several test methods. A bloom filter has simple semantics, so only one TestCase. I usually add a TestCase per feature.
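For example, a single TestCase for the bloom filter might look like the sketch below; the BloomFilter import, constructor arguments and add/in interface are assumptions to adapt to the real implementation:

import unittest

from bloomfilter import BloomFilter  # hypothetical import


class TestBloomFilter(unittest.TestCase):
    def setUp(self):
        self.bf = BloomFilter(capacity=1000, error_rate=0.01)

    def test_added_item_is_reported_present(self):
        self.bf.add("spam")
        self.assertIn("spam", self.bf)

    def test_item_never_added_is_usually_absent(self):
        # False positives are possible, but unlikely at this error rate.
        self.assertNotIn("never-added", self.bf)

    def test_empty_filter_reports_nothing_present(self):
        self.assertNotIn("anything", self.bf)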

Unittest in Django. What is relationship between TestCase class and method

I am doing some unit testing in Django. What is the relationship between a TestCase class and the actual test methods in that class? What is the best practice for organizing this stuff?
For example, I have
class Test(TestCase):
    def __init__(self):
        ...
    def testTestA(self):
        pass  # test code
    def testTestB(self):
        pass  # test code
If I organize in this way:
class Test1(TestCase):
    def __init__(self):
        ...
    def testTestA(self):
        pass  # test code

class Test2(TestCase):
    def __init__(self):
        ...
    def testTestB(self):
        ...
Which is better and what is the difference?
Thanks!
You rarely write __init__ for a TestCase. So strike that from your mental model of unit testing.
You sometimes write a setUp and tearDown. Django automates much of this, however, and you often merely provide a static fixtures= variable that's used to populate the test database.
More fundamentally, what's a test case?
A test case is a "fixture" -- a configuration of a unit under test -- that you can then exercise. Ideally each TestCase has a setUp method that creates one fixture. Each method will perform a manipulation on that fixture and assert that the manipulation worked.
However. There's nothing dogmatic about that.
In many cases -- particularly when exercising Django models -- there just aren't that many interesting manipulations.
If you don't override save in a model, you don't really need to do CRUD testing. You can (and should) trust the ORM. [If you don't trust it, then get a new framework that you do trust.]
If you have a few properties in a model class, you might not want to create a distinct method to test each property. You might want to simply test them sequentially in a single method of a single TestCase.
If, OTOH, you have a really complex class with lots of state changes, you will need a distinct TestCase to configure an object in one state, manipulate it into another state, and assert that the changes all behaved correctly.
View functions, since they aren't -- technically -- stateful, don't match the unit-test philosophy perfectly. When doing setUp to create a unit in a known state, you're using the client interface to step through some interactions that put a session into a known state. Once the session has reached the desired state, your various test methods will exercise that session and assert that things worked.
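A hedged sketch of that view-testing pattern; the URLs, credentials and fixture file are invented for the example:

from django.test import TestCase


class CheckoutFlowTests(TestCase):
    fixtures = ["test_users.json"]  # hypothetical fixture providing the user

    def setUp(self):
        # Step the test client through a few interactions so every test
        # starts from the same session state.
        self.client.login(username="alice", password="secret")
        self.client.post("/cart/add/", {"item": 1})

    def test_checkout_page_loads(self):
        response = self.client.get("/checkout/")
        self.assertEqual(response.status_code, 200)

    def test_empty_payment_form_is_rejected(self):
        response = self.client.post("/checkout/", {})
        self.assertNotEqual(response.status_code, 200)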
Summary
Think of TestCase as a "Setup" or "Context" in which tests will be run.
Think of each method as a "when_X_should_Y" statement. Some folks suggest that kind of name ("test_when_x_should_y"). So the method will perform "X" and assert that "Y" was the response.
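A small sketch of that naming and fixture idea, using a made-up Order model and hypothetical state-changing methods:

from django.test import TestCase

from shop.models import Order  # hypothetical model


class OrderStateTests(TestCase):
    """One fixture (an order in a known state) exercised by each method."""

    def setUp(self):
        self.order = Order.objects.create(status="new")

    def test_when_paid_should_move_to_processing(self):
        self.order.pay()  # hypothetical state-changing method
        self.assertEqual(self.order.status, "processing")

    def test_when_cancelled_should_not_be_billable(self):
        self.order.cancel()  # hypothetical
        self.assertFalse(self.order.is_billable())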
It's kind of hard to answer this question regarding the proper organization of cases A and B and test methods 1, 2 and 3...
However, splitting the tests into test cases serves two major purposes:
1) Organizing the tests around some logical groups, such as CustomerViewTests, OrdersAggregationTests, etc.
2) Sharing the same setUp() and tearDown() methods, for tests which require the same, well, setup and tear down.
More information and examples can be found in the unittest documentation.
