I have written an API wrapper which has around 70 endpoints. To test them all:
https://paste.ubuntu.com/p/V6mkK4dSdh/
I have written this script without actually using the unittest module. Is this good practice, and what are the downsides compared with unit testing? I could really use some comments.
This is not "testing". This is just checking that "they work".
However, in this context, "work" just means that they don't raise exceptions. That's a requirement, but not even close to "testing". The fact that your endpoints return some result does not mean that the result is correct.
Real testing would mean: when I call f(1, 2), I expect to get 5 as the response. So you would need multiple tests, manually written, for each endpoint. Of course, that takes time...
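A sketch of what such a value-asserting test could look like, using a hypothetical get_user endpoint and a fake API object (all names here are invented for illustration):

```python
import unittest

# Hypothetical stand-in for the real service; substitute your wrapper.
class FakeApi:
    def get_user(self, user_id):
        return {"id": user_id, "name": "alice"}

def get_user_name(api, user_id):
    # Hypothetical wrapper function under test.
    return api.get_user(user_id)["name"]

class GetUserTest(unittest.TestCase):
    def test_returns_expected_name(self):
        # Assert on the concrete value, not merely "no exception raised".
        self.assertEqual(get_user_name(FakeApi(), 42), "alice")

if __name__ == "__main__":
    unittest.main()
```

The point is the assertEqual: a test that only checks "no exception" would pass even if get_user_name returned the wrong field.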
Related
How can I define the "limit context" of my tests?
I ask because of mocks: my service integrates many other components, like RabbitMQ, Redis, etc., plus instances of several classes. As a result, most of my time goes into writing test code, building "complex" mocks just so the service can be tested at all.
Is it possible to define this limit? Would it be acceptable to not test these "expensive" methods and only test the simple, directly related methods, where the parameters are easy to supply, like a sum(a, b)?
Obviously, the more test coverage the better, but I spend a lot of time writing tests whose usefulness is questionable.
When using TDD on well-defined methods like sum(a, b), it is very useful; but when a method receives or uses instances from other integrated services, the mocks get most of the attention, not the method's actual objective.
Imagine this service/method to test:
class ClientService:
    def ok_here_is_the_discount(self, some_args):
        # OK, we received the discount for this signature; now we can
        # calculate it and respond to the broker
        self.calculate_user_discount_based_on_signature_type(some_args.discount_value)

    def calculate_user_discount_based_on_signature_type(self, some_args):
        # here we send the "result" to the broker
        some_message_broker_publisher(
            some_args.value - some_args.discount_signature
        )

    def request_the_signature_type_discount_in_another_service(self, some_args):
        # OK, we received the client, which has the signature type;
        # now we need to ask the signature service what the discount
        # value for this signature is
        some_message_broker_publisher(
            queue='signature.service.what_discount_for_this_signature',
            signature_type=some_args.client.signature_type
        )

    # OK, this message goes to the broker, and signature.service receives it
    def method_that_receive_messages(self, some_args):
        # Here we receive a request to calculate a discount, but we only
        # get the client (dict/class/instance) with the signature type.
        # We still don't know the discount value, because the signature
        # service is the one responsible for managing signatures
        if some_args.message_type == 'please_calculate_discount':
            self.request_the_signature_type_discount_in_another_service(some_args.client)
        if some_args.message_type == 'message_from_signature_discount':
            self.ok_here_is_the_discount(some_args.value)
1 - It receives a 'please_calculate_discount' message and calls
self.request_the_signature_type_discount_in_another_service(some_args.client)
But it still doesn't have the discount value, because that lives in the signature service (message to the broker).
2 - Suppose the signature service responds with 'message_from_signature_discount'; it then calls:
self.ok_here_is_the_discount(some_args.value)
3 - OK, the method ok_here_is_the_discount receives the discount and calls
calculate_user_discount_based_on_signature_type()
which now has the values to calculate the discount and sends the result to the broker.
Do you see the complexity of these tests (TDD)? Do I need to test method_that_receive_messages, mocking all the related nested actions, or just test the directly related methods, like calculate_user_discount_based_on_signature_type?
In this case, is it better to use a real broker to make testing possible?
Well, it is easy to mock things in Python, but it still comes with some overhead, of course. What I have tended towards over many years now is to set up integration tests with mocks that exercise the happy path through the system, design the system with side-effect-free functions as much as possible, and throw unit tests at those. Then you can start your TDD by setting up the overall test for the happy path and then unit testing the particulars in easy-to-test functions.
Such a test is still useless when the thing behind the mock changes, but it gives a great sense of security when refactoring.
The mock libraries in Python are fine but sometimes it is easier to just write your own and replace the real thing with patch. Pretty sweet actually.
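A hand-written mock can be as simple as a class that records what was published; you can inject it directly, as below, or swap it in with unittest.mock.patch when the dependency is module-level. The broker and the discount function here are invented for illustration:

```python
# Hypothetical production code: publishes a computed discount to a broker.
def publish_discount(broker, value, discount):
    broker.publish(value - discount)

class FakeBroker:
    """Hand-written test double: no connection, just records messages."""
    def __init__(self):
        self.published = []

    def publish(self, message):
        self.published.append(message)

def test_publish_discount():
    broker = FakeBroker()
    publish_discount(broker, 100, 30)
    # Assert on what reached the "broker", without any real RabbitMQ.
    assert broker.published == [70]

test_publish_discount()
```

Because the fake exposes only the one method the code under test uses, it stays trivial to maintain even as the real broker client grows.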
What is actually the difference between using unit tests and normal tests?
By normal tests I mean using, for example, an if statement to determine whether the calculation equals the desired answer, and raising AssertionError if it doesn't.
Let's use a simple piece of code as example:
def square(a):
    return a * a
Tests that aim at finding bugs in this function in isolation are unit-tests. This is independent of how you actually implement them: If you just write an if statement like if (square(3) != 9) and raise an AssertionError as you say, then it is a unit-test. If instead you implement the same test using unittest by calling assertEqual, then it is also a unit-test.
In other words, whether or not you use a (so-called) unit-test framework is not a criterion for whether your tests are unit-tests or not. In fact, despite the names of the frameworks ('unittest' in Python, 'JUnit' in Java, ...), these frameworks can be used for unit-tests as well as for other tests, like integration-tests. The names of those frameworks are therefore a bit misleading.
So much for the original question about the difference between unit-tests and normal tests. In one of the comments you make it clearer that you actually want to know what is better: To use or not to use a test framework. And, the answer is clear: Definitely go for a test framework.
After writing only a few tests 'by hand', that is, without a test framework, you will see that there is a lot of duplication in your test code: You compare results with if - that is not so much different. But then, in the success case you write some 'passed' message, with the test name, in the failure case you write a 'failed' messages, again with test name and in this case also with some extra information about what was the actual and what was the expected result. And - did you think about the case that the code under test exits with an exception? So, you also have to wrap it with a try/catch block to make sure unexpected exceptions lead to a 'failed' result, again with some helpful diagnostic information.
And so on... All this and more is taken care of by the test frameworks for you.
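To make that duplication concrete, here is roughly what a hand-rolled harness ends up re-implementing for every single test, next to the one-line unittest equivalent (the square function is taken from the example above):

```python
import unittest

def square(a):
    return a * a

# --- Hand-rolled version: naming, reporting and exception handling are on you ---
def test_square_by_hand():
    try:
        if square(3) != 9:
            print("test_square_by_hand: FAILED - expected 9, got", square(3))
        else:
            print("test_square_by_hand: passed")
    except Exception as exc:
        # Without this, one crashing test would abort the whole run.
        print("test_square_by_hand: FAILED with exception", repr(exc))

# --- The same test with unittest: the framework does all of the above ---
class SquareTest(unittest.TestCase):
    def test_square(self):
        self.assertEqual(square(3), 9)

test_square_by_hand()
if __name__ == "__main__":
    unittest.main()
```

Multiply the hand-rolled block by 70 endpoints and the case for a framework makes itself.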
From your question, you assume there are two kinds of tests: unit tests and normal tests. In fact, there are many types of tests, and the unit test is one of them. You can read about the other kinds of tests here.
If we assume a 'normal test' is a 'black box test', it tests the software without looking at the source code. A unit test, by contrast, is another piece of software that checks the correctness of the source code itself. What does that mean? Say you wrote a function that sums two numbers. For the unit test, you write another function to check whether your first function works:
# Your Function
def my_sum_function(x, y):
    return x + y

# Your Unit Test Function
def test_sum():
    assert my_sum_function(1, 2) == 3, "Should be 3"
The logic behind testing is the same not only in Python but in every programming language.
It is about quality control. Suppose you are coding an app called "app1" with a function f1 and another function f2 (f2 uses f1) inside your own library.
Without unit tests:
You can make a change to f1, test it, and it will work in f2; but you will not notice that the app "app0" you wrote 2 weeks ago will now crash, because "app0" has a function f3 that also uses f1...
With unit tests:
You can catch that type of nasty bug, because the test for f3 will no longer pass, and your test runner will tell you as soon as you change f1 in "app1".
I hope this makes sense (English is not my native language).
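A minimal sketch of that scenario (f1, f2, f3 are the placeholder names from above):

```python
# f1 is shared; f2 and f3 both depend on it.
def f1(x):
    return x + 1  # suppose someone later changes this to x + 2

def f2(x):
    return f1(x) * 10

def f3(x):
    return f1(x) - 1

# Unit tests for each caller: if f1's behaviour changes, the tests
# for f2 and f3 both fail immediately, even in code written weeks ago.
def test_f2():
    assert f2(1) == 20

def test_f3():
    assert f3(1) == 1

test_f2()
test_f3()
```

Changing f1 to x + 2 makes both tests fail, which is exactly the early warning the answer describes.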
I have some pretty fragile code that I want to refactor. It's not very easy to unit test by itself because it interacts with database queries and Django form data.
That in itself is not a big deal. I already have extensive tests that, among other things, end up calling this function and check that results are as expected. But my full test suite takes about 5 minutes and I also don't want to have to fix other outstanding issues while working on this.
What I'd like to do is to run nosetests or nose2 on all my tests, track all test_xxx.py files that called the function of interest and then limit my testing during the refactoring to only that subset of test files.
I plan to use inspect.stack() to do this but was wondering if there is an existing plugin or if someone has done it before. If not, I intend to post whatever I come up with and maybe that will be of use later.
You can simply raise an exception in the function and do one run. All the tests that fail call your function.
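Since the question mentions inspect.stack(), here is a sketch of the recording approach: temporarily wrap the function of interest with a decorator that notes which test_xxx.py files appear on the call stack (the function name is a placeholder):

```python
import functools
import inspect

calling_test_files = set()

def record_callers(func):
    """Record the test_*.py files on the stack each time func is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for frame in inspect.stack():
            filename = frame.filename.rsplit("/", 1)[-1]
            if filename.startswith("test_"):
                calling_test_files.add(filename)
        return func(*args, **kwargs)
    return wrapper

@record_callers
def function_of_interest(x):
    # Stand-in for the fragile function being refactored.
    return x * 2

# After one full test run, calling_test_files holds the test files
# that reached function_of_interest; rerun only those during refactoring.
```

Compared with the raise-an-exception trick, this does not disturb test outcomes, at the cost of a small per-call overhead.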
I'm writing a Python wrapper for an authenticated RESTful API. I'm writing up my test suite right now (Also first time test-writer here) but have a few questions:
1.a) How can I make a call, but not have to hardcode credentials into the tests, since I'll be putting it on GitHub?
1.b) I kind of know about mocking, but have no idea how to go about it. Would this allow me to not have to call the actual service? What would be the best way to go about this?
2) What do I test for - Just ensure that my methods are passing certains items in the dictionary?
3) Any best practices I should be following here?
Hey TJ if you can show me an example of one function that you are writing (code under test, not the test code) then I can give you an example test.
Generally though:
1.a You would mock the call to the external API; you are not trying to test whether their authentication mechanism or your internet connection works. You are just trying to test that you are calling their API with the correct signature.
1.b Mocking in Python is relatively straightforward. I generally use the mock library written by Michael Foord (included in the standard library as unittest.mock since Python 3.3). pip install mock will get you started. Then you can do things like
import unittest
from mock import call, patch

from my_module import wrapper_func

class ExternalApiTest(unittest.TestCase):
    @patch('my_module.api_func')
    def test_external_api_call(self, mocked_api_func):
        response = wrapper_func('user', 'pass')
        self.assertTrue(mocked_api_func.called)
        self.assertEqual(
            mocked_api_func.call_args_list,
            [call('user', 'pass')]
        )
        self.assertEqual(mocked_api_func.return_value, response)
In this example we are replacing the api_func inside my_module with a mock object. The mock object records what has been done to it. It's important to remember where to patch. You don't patch the location you imported the object from. You patch it in the location that you will be using it.
You test that your code is doing the correct thing with a given input. Testing pure functions (pure in the functional programming sense) is pretty simple. You assert that given a input a, this function returns output b. It gets a bit trickier when your functions have lots of side effects.
If you are finding it too hard or complicated to test a certain function/method, it can mean that it's a badly written piece of code. Try breaking it up into testable chunks, and rather than passing objects into functions, try to pass primitives where possible.
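One way to apply that advice: split a side-effecting function into a pure core that takes primitives (trivial to unit test, no mocks) and a thin shell that does the I/O (the only part that ever needs a mock). The names here are invented:

```python
# Pure core: takes primitives, returns a value, no side effects.
def compute_discounted_price(price, discount):
    return price - discount

# Thin shell: the only code that touches the broker, so it is the
# only code that needs a test double.
def publish_discounted_price(broker, price, discount):
    broker.publish(compute_discounted_price(price, discount))
```

Most of the interesting logic then lives in compute_discounted_price, which you can test exhaustively with plain assertions, like the sum(a, b) case.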
I'm currently writing a set of unit tests for a Python microblogging library, and following advice received here have begun to use mock objects to return data as if from the service (identi.ca in this case).
However, surely by mocking httplib2 - the module I am using to request data - I am tying the unit tests to a specific implementation of my library, and removing their ability to function after refactoring (which is obviously one primary benefit of unit testing in the first place).
Is there a best of both worlds scenario? The only one I can think of is to set up a microblogging server to use only for testing, but this would clearly be a large amount of work.
You are right that if you refactor your library to use something other than httplib2, then your unit tests will break. That isn't such a horrible dependency, since when that time comes it will be a simple matter to change your tests to mock out the new library.
If you want to avoid that, then write a very minimal wrapper around httplib2, and your tests can mock that. Then if you ever shift away from httplib2, you only have to change your wrapper. But notice the number of lines you have to change is the same either way, all that changes is whether they are in "test code" or "non-test code".
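The minimal wrapper can be very thin indeed. A sketch, with invented names, where tests mock (or substitute) HttpClient rather than httplib2 itself:

```python
class HttpClient:
    """Thin seam over the transport: the only code importing httplib2."""
    def __init__(self):
        import httplib2  # imported lazily so tests never need it installed
        self._http = httplib2.Http()

    def get(self, url):
        # httplib2's request() returns a (response, content) pair.
        response, content = self._http.request(url, "GET")
        return response.status, content

class FakeHttpClient:
    """Drop-in test double: no network, canned (status, body) responses."""
    def __init__(self, responses):
        self._responses = responses

    def get(self, url):
        return self._responses[url]

def fetch_timeline(client, base_url):
    # Library code depends only on the wrapper's tiny interface.
    status, body = client.get(base_url + "/statuses")
    return body if status == 200 else None
```

If you later move from httplib2 to another HTTP library, only HttpClient changes; the tests that use FakeHttpClient keep working untouched.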
Not sure what your problem is. The mock class is part of the tests, conceptually at least. It is ok for the tests to depend on particular behaviour of the mock objects that they inject into the code being tested. Of course the injection itself should be shared across unit tests, so that it is easy to change the mockup implementation.