I've been caching the list and retrieve methods of my ViewSets in Django REST Framework using drf-extensions: https://chibisov.github.io/drf-extensions/docs/#caching.
I wrote a modification that caches the keys for those list and retrieve methods indefinitely, and invalidates them whenever an object is modified.
The issue I am trying to solve is cache invalidation when an embedded object is modified; I am not familiar with the typical patterns for invalidating caches of embedded objects.
Ex:
{
    "id": 1,
    "position": "manager",
    "company": "Google",
    "user": {
        "id": 1,
        "email": "johnsmith@example.com",
        "first_name": "John",
        "last_name": "Doe",
        "last_login": "2015-01-05T20:46:15.400Z",
        "joined_at": "2014-12-10T19:54:45.588Z"
    }
}
When a user modifies the embedded user object, the cache keys for the list and retrieve methods of both the parent ViewSet and the embedded object's ViewSet should ideally be invalidated.
Any help is appreciated. Thanks.
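For reference, the invalidation I have in place boils down to the pattern below. This is a minimal sketch: the key-naming helpers and the plain dict standing in for the cache backend are my own stand-ins, not drf-extensions API.

```python
# Sketch of the invalidation hook; a plain dict stands in for the cache
# backend, and the key format is illustrative only.
cache = {}

def list_key(model_name):
    return f"{model_name}:list"

def retrieve_key(model_name, pk):
    return f"{model_name}:retrieve:{pk}"

def on_saved(model_name, pk):
    """Called whenever an instance is saved (e.g. from a post_save signal)."""
    cache.pop(list_key(model_name), None)
    cache.pop(retrieve_key(model_name, pk), None)

# Populate, modify, and observe the invalidation:
cache[list_key("profile")] = [{"id": 1, "position": "manager"}]
cache[retrieve_key("profile", 1)] = {"id": 1, "position": "manager"}
on_saved("profile", 1)
assert list_key("profile") not in cache
```

The open question is the embedded case: when the nested user is saved, on_saved would also need to fire for every parent ViewSet that embeds it.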
I have 2 APIs in my existing project. One provides the latest blog posts and the other provides sorting details. The 2nd API (sorting) gives a blog post ID and an ordering number saying whether it should be in the 1st, 2nd, 3rd... nth position. If I filter the first API with that given ID, I can get the blog post details.
How can I create a Django REST API from the database, or an API that merges those 2 APIs? Is there any tutorial or reference that might help me?
First API response:
{
    "count": 74,
    "next": "https://cms.example.com/api/v2/stories/?page=2",
    "previous": null,
    "results": [
        {
            "id": 111,
            "meta": {
                "type": "blog.CreateStory",
                "seo_title": "",
                "search_description": "",
                "first_published_at": "2022-10-09T07:29:17.029746Z"
            },
            "title": "A Test Blog Post"
        },
        {
            "id": 105,
            "meta": {
                "type": "blog.CreateStory",
                "seo_title": "",
                "search_description": "",
                "first_published_at": "2022-10-08T04:45:32.165072Z"
            },
            "title": "Blog Story 2"
        },
2nd API response:
[
    {
        "featured_item": 1,
        "sort_order": 0,
        "featured_page": 105
    },
    {
        "featured_item": 1,
        "sort_order": 1,
        "featured_page": 90
    },
Here I want to create another API that provides more details based on the sorting: for each entry it would fetch e.g. https://cms.example.com/api/v2/stories/105 and pick up the Title, Image & Excerpt, and if there is no data from the sorting details it would fall back to the first API's response by default.
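To pin down the logic I'm after, here is the merge in plain Python, working on the two responses above (the function name and fallback handling are my own sketch):

```python
def merge_stories(stories, sort_entries):
    """Order blog posts by the sorting API; fall back to the original
    order for posts the sorting API doesn't mention."""
    by_id = {s["id"]: s for s in stories["results"]}
    ordered = [
        by_id[e["featured_page"]]
        for e in sorted(sort_entries, key=lambda e: e["sort_order"])
        if e["featured_page"] in by_id
    ]
    # Append any stories the sorting API didn't cover, in their original order
    seen = {s["id"] for s in ordered}
    ordered += [s for s in stories["results"] if s["id"] not in seen]
    return ordered

stories = {"results": [{"id": 111, "title": "A Test Blog Post"},
                       {"id": 105, "title": "Blog Story 2"}]}
sorting = [{"featured_item": 1, "sort_order": 0, "featured_page": 105}]
print([s["id"] for s in merge_stories(stories, sorting)])  # [105, 111]
```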
After searching, I found that you can build an API from the database: in settings you set the database credentials, then create a class inside your models.py whose Meta sets db_table to the database view's name, and then create serializers.py and views.py as you would for any REST API.
class SortAPI(models.Model):
    featured_item_id = models.IntegerField()
    sort_order = models.IntegerField()
    title = models.TextField()
    first_published_at = models.DateTimeField()
    alternative_title = models.TextField()
    excerpt = models.TextField()
    sub_heading = models.TextField()
    news_slug = models.TextField()
    img_title = models.TextField()
    img_url = models.TextField()
    img_width = models.IntegerField()
    img_height = models.IntegerField()

    class Meta:
        db_table = 'view_featured'
I am using the SQLAlchemy 1.4 ORM with Postgres 13 and Python 3.7.
EDITED FOR CLARITY AND REFINEMENT:
To stand up the project and test it out, these 3 files are working well:
base.py --> sets up the SQLAlchemy engine and database session
models.py --> a User class is defined with a number of fields
inserts.py --> creates instances of the User class, adds and commits them to the database
This all works well provided models.py already has a hardcoded class defined (such as the User class).
I have a schema.json file that defines the database schema. The file is very large, with many nested dictionaries.
The goal is to parse the JSON file and use the given schema to create the Python classes in models.py (or wherever) automatically.
An example of schema.json:
{
    "groupings": {
        "imaging": {
            "owner": { "type": "uuid", "required": true, "index": true },
            "tags": { "type": "text", "index": true },
            "filename": { "type": "text" }
        },
        "user": {
            "email": { "type": "text", "required": true, "unique": true },
            "name": { "type": "text" },
            "role": {
                "type": "text",
                "required": true,
                "values": ["admin", "customer"],
                "index": true
            },
            "date_last_logged": { "type": "timestamptz" }
        }
    },
    "auths": {
        "boilerplate": {
            "owner": ["read", "update", "delete"],
            "org_account": [],
            "customer": ["create", "read", "update", "delete"]
        },
        "loggers": {
            "owner": [],
            "customer": []
        }
    }
}
The model classes need to be created on the fly by parsing the JSON, because the schema may change in the future and manually hardcoding 100+ classes each time doesn't scale.
I have spent time researching this but have not found a completely successful solution. Currently this is how I am handling the parsing and dynamic table creation.
The function create_class(table_data) gets passed an already-parsed dictionary containing the table name, column names, and column constraints. The trouble is, I cannot figure out how to create the table with its constraints: running this code commits the table to the database, but the only column it gets is what it inherited from Base (the automatically generated PK id).
All of the column names and constraints written into constraint_dict are ignored.
The line # db_session.add(MyTableClass) is commented out because it errors with "sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.decl_api.DeclarativeMeta' is not mapped; was a class (__main__.MyTableClass) supplied where an instance was required?"
I think this must have something to do with the order of operations - I am creating an instance of a class before the class itself has been committed. I realise this further confuses things, as I'm not actually calling MyTableClass.
def create_class(table_data):
    constraint_dict = {'__tablename__': 'myTableName'}
    for k, v in table_data.items():
        if 'table' in k:
            constraint_dict['__tablename__'] = v
        else:
            constraint_dict[k] = f'Column({v})'
    MyTableClass = type('MyTableClass', (Base,), constraint_dict)
    Base.metadata.create_all(bind=engine)
    # db_session.add(MyTableClass)
    db_session.commit()
    db_session.close()
    return
I'm not quite sure what to look at to complete this last step of getting all the columns and their constraints committed to the database. Any help is greatly appreciated!
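For comparison, this stripped-down standalone version does create the columns when I use real Column objects instead of the f-strings, which makes me think the string values are the core of the problem (the TYPE_MAP and the in-memory SQLite engine are my own stand-ins for the sketch):

```python
from sqlalchemy import Column, Integer, Text, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()
TYPE_MAP = {"integer": Integer, "text": Text}  # made-up mapping for the sketch

def create_class(table_name, columns):
    # Real Column objects; f'Column({v})' strings are just plain strings,
    # so the declarative metaclass ignores them entirely.
    attrs = {"__tablename__": table_name,
             "id": Column(Integer, primary_key=True)}
    for name, spec in columns.items():
        attrs[name] = Column(
            TYPE_MAP[spec["type"]],
            nullable=not spec.get("required", False),
            index=spec.get("index", False),
            unique=spec.get("unique", False),
        )
    return type(table_name.capitalize(), (Base,), attrs)

engine = create_engine("sqlite://")  # in-memory stand-in for Postgres
User = create_class("user", {
    "email": {"type": "text", "required": True, "unique": True},
    "name": {"type": "text"},
})
Base.metadata.create_all(bind=engine)

with Session(engine) as session:
    session.add(User(email="jane@example.com"))  # an instance, not the class
    session.commit()
```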
This does not answer your question directly, but rather poses a different strategy: if you expect the JSON schema to change frequently, you could consider creating a simple model with an id and a data column, essentially using Postgres as a JSON document store.
from sqlalchemy.dialects.postgresql import JSONB

class Schema(db.Model):
    id = db.Column(db.Integer(), nullable=False, primary_key=True)
    data = db.Column(JSONB)
SQLAlchemy: Postgres dialect JSONB type
The JSONB data type is preferable to the JSON data type in Postgres because the binary representation is more efficient to search through, though JSONB takes slightly longer to insert than JSON. You can read more about the distinction between the JSON and JSONB data types in the Postgres docs.
This SO post explains how you can use SQLAlchemy to perform JSON operations in Postgres: sqlalchemy filter by json field
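To give a flavour of querying, this sketch compiles a filter on a key inside the JSONB column against the Postgres dialect, without needing a live database (the Doc model and table name are made up for the example):

```python
from sqlalchemy import Column, Integer, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Doc(Base):
    __tablename__ = "schema_docs"
    id = Column(Integer, primary_key=True)
    data = Column(JSONB)

# Filter on a key inside the document: compiles to data ->> 'role' = ...
stmt = select(Doc).where(Doc.data["role"].astext == "admin")
print(stmt.compile(dialect=postgresql.dialect()))
```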
It seems that performing a PATCH on an endpoint with a many-to-many relation updates the object but doesn't return the updated data in the PATCH response; the changes only show up in the next response.
Example with original object:
{
    "id": 35,
    "interests": [
        1,
        2
    ],
    "personal_statement": "Hello World",
    "photo": "",
    "resume": "",
    "user": 2
}
PATCH request setting interests=[1,2,3,4,5] ... Example response:
{
    "id": 35,
    "interests": [
        1,
        2
    ],
    "personal_statement": "Hello World",
    "photo": "",
    "resume": "",
    "user": 2
}
Example of next GET Response:
{
    "id": 35,
    "interests": [
        1,
        2,
        3,
        4,
        5
    ],
    "personal_statement": "Hello World",
    "photo": "",
    "resume": "",
    "user": 2
}
This is using Django v1.7.4 and Django REST Framework v2.4.3.
My first assumption is that, since it's a many-to-many relation, the parent object is saved and its data returned before the many-to-many relation data is saved, but I'm not entirely sure. Any help would be appreciated. Thanks.
EDIT
The issue is actually an open issue on Django REST Framework with some possible solutions. It was being caused by prefetch_related in my ViewSet queryset:
https://github.com/tomchristie/django-rest-framework/issues/1556
https://github.com/tomchristie/django-rest-framework/issues/2442
According to REST API patterns, a PATCH request performs a partial update, so Django REST Framework just returns the updated data.
You may want to override the PATCH view to return the complete serializer data, or change your PATCH request to a PUT.
I encountered the same issue after adding prefetch_related to a many-to-many field. I solved it by simply adding a custom update to my serializer:
def update(self, instance, validated_data):
    super(SimpleClientModelResource, self).update(instance, validated_data)
    # Re-fetch so the response isn't built from the stale prefetched relations
    return self.Meta.model.objects.get(id=instance.id)
It seems to me that there should be an automatic way to query the results of a Django REST Framework call and operate on it like a dictionary (or something similar). Am I missing something, or is that not possible?
i.e.,
if the call to http://localhost:8000/api/1/roles/
yields
{
    "count": 2,
    "next": null,
    "previous": null,
    "results": [
        {
            "user": {"username": "smithb", "first_name": "Bob", "last_name": "Smith"},
            "role_type": 2,
            "item": 1
        },
        {
            "user": {"username": "jjones", "first_name": "Jane", "last_name": "Jones"},
            "role_type": 2,
            "item": 1
        }
    ]
}
I would think something akin to http://localhost:8000/api/1/roles/0/user/username should return smithb.
Does this functionality exist or do I need to build it myself?
It appears to be something you will have to build yourself. That said, Django makes this kind of thing very easy: in your URLconf you can capture parts of the URL path with regex groups and pass them into your view function.
URLs:
url(regex=r'^user/api/1/roles/(?P<usernumber>\w{1,50})/(?P<username>\w{1,50})/$', view='views.profile_page')
A request for http://domain/user/api/1/roles/0/username/
View:
def profile_page(request, usernumber=None, username=None):
    return HttpResponse(username)
Some additional resources:
https://docs.djangoproject.com/en/1.7/intro/tutorial03/#writing-more-views
Capturing url parameters in request.GET
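Once the segments are captured, walking the serialized payload is just successive indexing; a plain-Python sketch (the function name and the trimmed sample payload are mine):

```python
def resolve_path(payload, parts):
    """Walk a parsed JSON payload by successive dict keys / list indices."""
    node = payload
    for part in parts:
        # Numeric segments index into lists; everything else is a dict key
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node

data = {"count": 2, "results": [
    {"user": {"username": "smithb"}, "role_type": 2, "item": 1},
    {"user": {"username": "jjones"}, "role_type": 2, "item": 1},
]}
print(resolve_path(data, ["results", "0", "user", "username"]))  # smithb
```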
I am starting to add Tastypie to a very small Django application I'm developing, and I was wondering if there is a way to send just the numeric id of a resource pointed to by a relationship, instead of the URI where the resource resides.
For instance, using one of the examples provided in the documentation:
The exposed "Entry" resource looks like:
{
    "body": "Welcome to my blog!",
    "id": "1",
    "pub_date": "2011-05-20T00:46:38",
    "resource_uri": "/api/v1/entry/1/",
    "slug": "first-post",
    "title": "First Post",
    "user": "/api/v1/user/1/"
}
It has a relationship towards "user" that shows as "user": "/api/v1/user/1/". Is there any way of making it just "user": 1 (an integer, if possible), so it looks like the following?
{
    "body": "Welcome to my blog!",
    "id": "1",
    "pub_date": "2011-05-20T00:46:38",
    "resource_uri": "/api/v1/entry/1/",
    "slug": "first-post",
    "title": "First Post",
    "user": 1
}
I like the idea of keeping the resource_uri attribute as a whole, but when it comes to modeling SQL relationships I'd rather have just the id (or a list of numeric ids if the relationship is ToMany). Would it be a good idea to add a dehydrate_user method to the EntryResource class to do this? It seems to work, but maybe there's a more generic way of doing it (to avoid having to write a dehydrate method for every relationship).
Thank you in advance.
You can try using the hydrate/dehydrate cycle:
def dehydrate(self, bundle):
    bundle.data['entry'] = bundle.obj.entry.id
    return bundle

def hydrate(self, bundle):
    bundle.data['entry'] = Entry.objects.get(id=bundle.data['entry'])
    return bundle
BUT I strongly recommend sticking with URI usage, since a URI is how you address a resource directly. hydrate and dehydrate are meant for more complex or virtual resources.