What to test in a simple DRF API?

So, I'm not a testing expert, and sometimes when using packages like DRF I wonder what I should actually be testing in the code...
If I write custom functions for some endpoints, I understand I should test them, because I wrote that code and nothing else covers it... But the DRF codebase itself is already well tested.
But if I'm writing a simple API that only extends ModelSerializer and ModelViewSet, what should I be testing?
The keys in the serialized JSON?
The relations?
What should I be testing?

For your ModelSerializer, check the serialized payload against the model fields you expect.
For your ModelViewSet, check the response HTTP status codes against the ones you expect for each action. You can also assert on the response data.
A good resource - https://realpython.com/test-driven-development-of-a-django-restful-api/
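As a rough illustration, here is a minimal sketch of both kinds of test; the Article model, ArticleSerializer, and the "article-list" URL name are hypothetical stand-ins for your own code:

from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Article
from .serializers import ArticleSerializer

class ArticleApiTests(APITestCase):
    def setUp(self):
        self.article = Article.objects.create(title="Hello", body="World")

    def test_serializer_exposes_expected_fields(self):
        data = ArticleSerializer(self.article).data
        self.assertEqual(set(data.keys()), {"id", "title", "body"})

    def test_list_endpoint_returns_200_and_data(self):
        response = self.client.get(reverse("article-list"))
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        # Assuming default (unpaginated) settings, the response body is a list.
        self.assertEqual(response.data[0]["title"], "Hello")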

Even if you're only using the built-in features and have added absolutely no customization to your serializer and viewset, and it's obvious to you that this part of the code works smoothly, you still need to write tests.
Code tends to get large, and some other person might be extending your code, or you might come back to it a few months later and not remember how your implementation works. Knowing that the tests pass tells other people (or yourself in the distant future) that your code is working without having to read it and dive into the implementation details, which makes your code reliable.
The person using your API might be consuming it as a service and not even be interested in what framework or language you used for the implementation; they only want to be sure that the features they require work properly. How can we ensure this? One way is to write tests that pass.
That's why it's very important to write complete and reliable tests so people can safely use or extend your code knowing that the tests are passing and everything is OK.


Besides automatic documentation, what's the rationale of providing a response model for FastAPI endpoints?

The question is basically in the title: Does providing a bespoke response model serve any further purpose besides clean and intuitive documentation? What's the purpose of defining all these response models for all the endpoints rather than just leaving it empty?
I've started working with FastAPI recently, and I really like it. I'm using FastAPI with a MongoDB backend. I've followed the following approach:
Create a router for a given endpoint-category
Write the endpoint with the decorator etc. This involves the relevant query and defining the desired output of the query.
Then, test and trial everything.
Usually, prior to finalising an endpoint, I would set the response_model in the decorator to something generic, like List (imported from typing). This would look something like this:
@MyRouter.get(
    '/the_thing/{the_id}',
    response_description="Returns the thing for target id",
    response_model=List,
    response_model_exclude_unset=True
)
In the Swagger UI documentation, this results in an uninformative response model.
So, I end up defining a response-model, which corresponds to the fields I'm returning in my query in the endpoint function; something like this:
from pydantic import BaseModel

class the_thing_out(BaseModel):
    id: int
    name: str | None
    job: str | None
And then, I modify the following: response_model=List[the_thing_out]. This will give a preview of what I can expect to be returned from a given call from within the swagger ui documentation.
Well, to be fair, having an automatically generated OpenAPI-compliant description of your interface is very valuable in and of itself.
Other than that, there is the benefit of data validation in the broader sense, i.e. ensuring that the data that is actually sent to the client conforms to a pre-defined schema. This is why Pydantic is so powerful and FastAPI just utilizes its power to that end.
You define a schema with Pydantic, set it as your response_model and then never have to worry about wrong types or unexpected values or what have you accidentally being introduced in your response data.* If you try to return some invalid data from your route, you'll get an error, instead of the client potentially silently receiving garbage that might mess up the logic on its end.
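A minimal sketch of that idea (the model and route below are made up for illustration, not taken from the question):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Thing(BaseModel):
    id: int
    name: str | None = None

@app.get("/things/{thing_id}", response_model=Thing)
async def read_thing(thing_id: int):
    # The returned dict is validated and filtered against Thing: extra keys are
    # stripped, and a wrong type (e.g. id="oops") raises a response validation
    # error instead of silently reaching the client.
    return {"id": thing_id, "name": "example", "internal_note": "never exposed"}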
Now, could you achieve the same thing by just manually instantiating your Pydantic model with the data you want to send yourself first, then generating the JSON and packaging that in an adequate HTTP response?
Sure. But those are just extra steps you have to take for each route. And if you do that three, four, five times, you'll probably get the idea to factor out that model instantiation, etc. into a function that is more or less generic over any Pydantic model and data you throw at it... and oh, look! You implemented your own version of the response_model logic. 😉
Now, all this becomes more and more important the more complex your schemas get. Obviously, if all your route does is return something like
{"exists": 1}
then neither validation nor documentation is all that worthwhile. But I would argue it's usually better to prepare in advance for potential growth of whatever application you are developing.
Since you are using MongoDB on the back end, I would argue this becomes even more important. I know people say that one of the "perks" of MongoDB is that you need no schema for the data you throw at it, but as soon as you provide an endpoint for clients, it would be nice to at least broadly define what the data coming from that endpoint can look like. And once you have that "contract", you just need a way to safeguard yourself against messing it up, which is where the aforementioned model validation comes in.
Hope this helps.
* This rests on two assumptions of course: 1) You took great care in defining your schema (incl. validation) and 2) Pydantic works as expected.

Testing Flask methods which return a render_template

I have looked into a bunch of different options, but since Python is really as dynamic as we want it to be, I wanted to ask here.
Unit testing lets me mock data and test functionality for expected and unexpected results. I noticed that a large number of functions in the app I am working on are Flask routes that return render_template() output. Not bad, I am used to that. The issue is that I want to mock data and test those endpoints, and I saw there are some options. Looking on GitHub, I noticed that flask-testing is no longer maintained. I also found the question "Test Flask render_template() context".
I was thinking it would be useful to inspect the template context to see the resulting data, which is more important to me than the rendered template. That way, I could control the inputs and test the resulting dataset.
Is there something in unittest, or elsewhere, that is up to date? If not, I guess I could just parse the HTML if I know where the data is, but that seems a bit bulky just to check whether the data passed into render_template() is right or wrong.
How do people normally do this? Parse the resulting HTML for the data you need, or is there a way to mock and obtain the context of the result in order to verify it? I have been looking through the unittest docs and have yet to find anything.
I just parse the HTML. This makes sure both the data is there (and correct) and the page gets rendered correctly.
If you have a complex function, I'd just extract it out of the view function and unit test it directly.
I am not convinced that mocking would be a great idea here, so I do not mention that you could mock "render_template" itself :-)
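A minimal sketch of the HTML-parsing approach with Flask's built-in test client; the app import, route, and expected markup are hypothetical:

from myapp import app  # hypothetical Flask application

def test_profile_page_contains_username():
    client = app.test_client()
    response = client.get("/profile/alice")
    assert response.status_code == 200
    # Inspect the rendered HTML to confirm the context data made it into the page.
    html = response.get_data(as_text=True)
    assert "alice" in html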

How and where do I delete rows in Flask - SQLAlchemy

I'm building a project using Flask and Flask-SQLAlchemy (with SQLite3 and Python 3.8). I have models.py, which holds all the tables of the db, and I want a static method that deletes rows in the Articles table depending on one of their attributes.
I thought about writing this function in the models module and wanted to ask if that's fine.
The function looks like:
def update_articles():
    all_articles = Articles.query.all()
    for article in all_articles:
        if needs_to_be_deleted(article):
            db.session.delete(article)
            db.session.commit()
Is that function fine (don't mind the needs_to_be_deleted(article) part)? Is the way I delete the article good? And is models.py an acceptable place for this function?
In my experience it's all good here. As a general rule you should design your application so that:
Models should do the most powerful stuff
Views should carry out only logical work
Templates should be dumb. They should just render things.
By the above I mean that everything relating to database operations should live in the models, just like you are doing. Most of the time your database transactions are simple and can be written in a couple of lines with the ORM, but if something needs to be done over and over again, or you want a method for doing something bigger, it's better to keep it in your models. The method you mentioned should be part of your Articles model. Other things that can live in your models are operations like sending emails to users or other routine tasks.
As far as your views are concerned, they should carry out all the logical work; your business logic is basically implemented by the views.
Lastly, templates should just render whatever is fed to them by the views.
Obviously there are no hard and fast rules here. There can be exceptions to the above, or someone may find a better way of doing things than the one I mentioned. For that you need to use your own judgement, and no one can teach you that; it comes with experience.
For your reference please go through the following articles:
https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world
https://flask.palletsprojects.com/en/1.1.x/tutorial/
Also, you might already be using this, but in case you aren't:
https://flask-marshmallow.readthedocs.io/en/latest/
Just one small suggestion, though I am not clear about your exact requirements: it would be better to commit after the for loop, i.e. at the very end of the method. A request should generally have only one commit.
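A minimal sketch of that suggestion, reusing the (hypothetical) helpers from the question:

def update_articles():
    for article in Articles.query.all():
        if needs_to_be_deleted(article):
            db.session.delete(article)
    # Commit once, after all deletions have been staged in the session.
    db.session.commit()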

Unit-testing tightly-coupled models in Django

I'm new to both django and unit-testing, and I'm trying to build unit-tests for my models but have been having some difficulty.
I have several models working closely together:
Resource: maintains a file resource
MetadataField: represents a metadata field that can be added to resources; corresponds to a table full of fields
MetadataValue: matches MetadataField IDs with Resource IDs and a corresponding value; this is the intermediary table for the Resource - MetadataField many-to-many relationship
MetadataSchema: represents a schema consisting of many MetadataFields. Each Resource is assigned a MetadataSchema, which controls which MetadataFields it is represented by
Relationships:
Resource - MetadataField : Many-to-Many through MetadataValue
MetadataValue - MetadataSchema : Many-to-Many
Resource - MetadataSchema : One-to-Many
I'm not sure how to write tests to deal with these models. The model testing in the Test Driven Django tutorial seems to mostly cover initializing the objects and verifying attributes. If I do any setting up of these objects though it requires the use of all the others, so the tests will all be dependent on code that they're not meant to be testing.
e.g. if I wish to create a resource, I should also be assigning it a metadata schema and values for fields in that schema.
I've looked around for good examples of unit-tested models in Django but haven't been able to find anything: the Django website doesn't seem to have unit tests, and the projects I've looked at either have poor or missing testing, or in a couple of cases have good testing but use almost no models.
Here are the possible approaches I see:
Doing a lot of Mocking, to ensure that I am only ever testing one class, and keep the unit tests on the models very simple, testing only their methods/attributes but not that the relationships are functioning correctly. Then rely on higher level integration tests to pick up any problem in the relationships etc.
Design unittests that DO rely on other functionality, and accept that a break in one function will break more than one test, provided it remains easy to see where the fault occurred. So i would perhaps have a method testing whether I can successfully add a MetadataValue to a resource, which would require setting up at least one MetadataSchema and Resource. I could then use a try - except block to ensure that if the test fails before the assertions dealing with what I'm actually meant to be testing, it gives a specific error message suggesting the fault lies elsewhere. This way I could quickly scan multiple failed test messages to find the real culprit. It wouldn't be possible to do this separation reliably in every test though
I'm having a hard time getting my head round this, so I don't know if this all makes sense, but if there are best practices for this sort of situation please point me to them! Thanks
You can use Django fixtures to load data for testing, but this can be very time-consuming and hard to maintain if your models change a lot.
I suggest you use a library like Factory Boy, which lets you create objects on demand in your tests when you need them. You can define as many factories as you want. There are also good examples out there of mocking with the mocker library and plenty of tips on testing Django apps.
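A minimal sketch of the Factory Boy approach, using the model names from the question (the fields and import path are made up):

import factory

from myapp.models import MetadataSchema, Resource  # hypothetical import path

class MetadataSchemaFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = MetadataSchema

    name = factory.Sequence(lambda n: f"schema-{n}")

class ResourceFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Resource

    name = factory.Sequence(lambda n: f"resource-{n}")
    # Related objects are built on demand, so a test only has to ask for a Resource.
    schema = factory.SubFactory(MetadataSchemaFactory)

A test can then simply call ResourceFactory() and get a saved Resource with a schema attached, without maintaining any fixture files.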
For me the purpose of unit testing is to isolate UNITS of code and test ONLY them, without worrying about all their dependencies. If I understand your idea correctly, you want to create something that is more of an integration test (the relationship between two or more models), which is also a very helpful, but still different, layer of testing :)
To test separate units, especially when they rely on a lot of surrounding code, I prefer to mock the dependencies. A quick search turns up plenty of Python mocking libraries.
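As a small, self-contained sketch of the idea (the helpers are invented for illustration; the standard library's unittest.mock does the patching):

from unittest import mock

def load_file(path):
    # Stand-in for an expensive dependency (disk, network, datastore, ...).
    raise RuntimeError("should not be called from a unit test")

def resource_word_count(resource):
    text = load_file(resource.path)
    return len(text.split())

def test_word_count_without_touching_the_dependency():
    fake_resource = mock.Mock(path="/tmp/example.txt")
    with mock.patch(f"{__name__}.load_file", return_value="one two three"):
        assert resource_word_count(fake_resource) == 3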
The other thing is that if there are TOO MANY dependencies you have to mock, it probably means you have to rethink your architecture because of tight coupling :)
Good luck!
Use fixtures; they let you load model data without writing the code.
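For instance, a minimal sketch, assuming a fixture file named metadata.json sits in an app's fixtures/ directory and contains a Resource with pk=1 (names are hypothetical):

from django.test import TestCase

from myapp.models import Resource  # hypothetical import path

class ResourceTests(TestCase):
    # The fixture is loaded into a fresh test database before every test.
    fixtures = ["metadata.json"]

    def test_resource_has_a_schema(self):
        resource = Resource.objects.get(pk=1)
        self.assertIsNotNone(resource.schema)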

How to differentiate model classes between webapp and unittest

I've started looking into unit testing when using Google App Engine, and it seems a bit tricky from what I've read, since you can't (and aren't supposed to) run your tests against the datastore.
I've written an abstract class to emulate a datastore model class, and it works pretty nicely, returning mock-up data on get, all, fetch and so on (only tried on a small scale) and producing db.Model-like results.
The one thing I haven't found a solution I'm satisfied with is how to decide which model class to use. I want to use the mock-ups for unit tests and the actual db.Model when the webapp is running.
My current solution looks like this in my .py containing all db.Models:
import os

from google.appengine.ext import db

if 'SERVER_SOFTWARE' in os.environ:
    # Running under the dev/production server: use the real datastore model.
    class dbTest(db.Model):
        content = db.StringProperty()
        comments = db.ListProperty(str)
else:
    # Running under unit tests: use the hand-written mock-up base class.
    class dbTest(Abstract):
        content = 'Test'
        comments = ['test1', 'test2']
And it kind of feels like it could break at any minute. Is this the way to go, or could one combine these into one class that uses db.Model when it is available and the mock-up otherwise?
Check out gaetestbed. It stubs out the datastore (and all the other services, like memcache) and makes testing from the command line very easy. It ensures a clean environment before every test runs.
I personally think it is nicer than the other solutions I have seen.
Instead of messing with your models.py, I would go with gaeunit.
I've used it with success in a couple of projects and the features I like are:
Just one file to add to your project (gaeunit.py) and you are almost done
Gaeunit isolates the test datastore from the development store (i.e. tests don't pollute your development db)
"Since you can't (and not suppose to) run your test's against the datastore."
This is not true. You can and should use the local datastore implementation as a test harness - there's no reason to waste your time creating mocks for every datastore behaviour. You can use a tool such as noseGAE or gaeunit, as suggested by other posters, but if you want to set it up yourself, see this snippet.
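A minimal sketch of doing that setup yourself, assuming the (legacy) App Engine Python SDK's testbed module is available; dbTest is the model from the question:

import unittest

from google.appengine.ext import testbed

from models import dbTest  # hypothetical module holding the model above

class DatastoreTestCase(unittest.TestCase):
    def setUp(self):
        # Activate an in-memory datastore stub so tests never touch real data.
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def test_put_and_query(self):
        dbTest(content='Test', comments=['test1', 'test2']).put()
        self.assertEqual(dbTest.all().count(), 1)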
There's more than one problem you're trying to address here...
First, for running tests with GAE emulation you can take a look at gaeunit, which I like best. If you don't want to run them from the browser then you can take a look at noseGAE (part of nose). This should give you command-line testing.
Second, regarding your comment about 'creating an overhead of dependencies', it sounds like you're searching for a good unit testing and mocking framework. These will let you mock out the database for tests which don't need to hit it. Try mox and mockito for python.
