I have a model Person where I run a lot of queries of the same type. For example, I may ask for the "profile picture" many times on the same page.
As you can see in my code, I've implemented a "sort of" cache: store the result in a dictionary and, on a later call, if the key is already in that dictionary, return the stored result.
class Personne(BaseModel):
    def __init__(self, *args, **kwargs):
        # set up the per-instance cache:
        self.cache = {}
        super(Personne, self).__init__(*args, **kwargs)

    def url_profile_picture(self):
        # cache handling:
        retour = self.cache.get('profile_picture')
        if retour:
            return retour
        a = PersonnePhoto.objects.filter(personne=self,
                                         photo_type=PersonnePhoto.PHOTO_PROFIL)
        if len(a):
            a = reverse('url_public', kwargs={'path': a[0].photo})
        else:
            a = staticfiles.static('img/no-picture-yet.png')
        self.cache['profile_picture'] = a
        return a
I was wondering (because I'm a Django newbie) whether this is useful or if Django already has a caching system of its own. What I mean is: will my query PersonnePhoto.objects.filter(...) hit the database every time (in which case I definitely need my own cache), or will it be cached by Django (in which case writing my own caching method is useless)?
from django.core.cache import cache
In your model, I suggest something like this:
def url_profile_picture(self):
    # cache handling:
    retour = cache.get('profile_picture_%s' % self.pk)
    if retour:
        return retour
    else:
        a = PersonnePhoto.objects.filter(personne=self,
                                         photo_type=PersonnePhoto.PHOTO_PROFIL)
        if len(a):
            a = reverse('url_public', kwargs={'path': a[0].photo})
        else:
            a = staticfiles.static('img/no-picture-yet.png')
        cache.set('profile_picture_%s' % self.pk, a)
        return a
You can read up more on the Django cache framework here: https://docs.djangoproject.com/en/1.9/topics/cache/
Edit: then in your profile area, you can clear the cache when an image is uploaded so the updated picture is picked up right away.
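For example, a minimal sketch of that invalidation (the upload handler name is hypothetical; the key format matches the one used above):

from django.core.cache import cache

def handle_profile_photo_upload(personne, photo):  # hypothetical upload handler
    # ... save the new PersonnePhoto here ...
    # drop the stale cached URL so the next call recomputes it
    cache.delete('profile_picture_%s' % personne.pk)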
I think you are looking for the cached_property decorator. It behaves exactly the same as the solution you rolled yourself (with the distinction that url_profile_picture is now a property):
from django.utils.functional import cached_property
class Personne(BaseModel):
    @cached_property
    def url_profile_picture(self):
        a = PersonnePhoto.objects.filter(personne=self,
                                         photo_type=PersonnePhoto.PHOTO_PROFIL)
        if len(a):
            a = reverse('url_public', kwargs={'path': a[0].photo})
        else:
            a = staticfiles.static('img/no-picture-yet.png')
        return a
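You then access it as an attribute rather than calling it, and deleting the attribute clears the cached value, e.g. after a new photo is uploaded (a small usage sketch; the lookup is just an example):

personne = Personne.objects.get(pk=1)  # example lookup
personne.url_profile_picture       # first access runs the query
personne.url_profile_picture       # later accesses reuse the cached value
del personne.url_profile_picture   # invalidate so the next access recomputes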
I am having a weird bug that seems related to Django's caching.
I have a 3-step registration process:
insert personal data
insert company data
summary view and submit all data for registration
If person A walks through the process to the summary part but does not submit the form and
person B does the same, person B gets the data of person A in the summary view.
The data gets stored in a Storage object which carries the data through each step. Every new registration instantiates a new Storage object (at least it should).
While debugging I've found that Django does not call any method in the corresponding views when the cache is already warmed up (by another running registration), and I guess that's why there is no new Storage instance. Hence the cross-pollution of data.
Now I'm perfectly aware that I can decorate the view with @never_cache() (which it already was), but that doesn't do the trick.
I've also found that the @never_cache decorator does not work properly prior to Django 1.9(?) as it misses some headers.
One solution that I've found was to set these headers myself with @cache_control(max_age=0, no_cache=True, no_store=True, must_revalidate=True). But that also doesn't work.
So how can I properly disable caching for these methods?
Here is some relevant code:
# views.py

def _request_storage(request, **kwargs):
    try:
        return getattr(request, '_registration_storage')
    except AttributeError:
        from .storage import Storage
        storage = Storage(request, 'registration')
        setattr(request, '_registration_storage', storage)
        return storage

...

# Route that gets called by clicking the "register" button
@secure_required
@cache_control(max_age=0, no_cache=True, no_store=True, must_revalidate=True)
# also does not work with @never_cache()
def registration_start(request, **kwargs):
    storage = _request_storage(request)
    storage.clear()
    storage.store_data('is_authenticated', request.user.is_authenticated())
    storage.store_data('user_pk', request.user.pk if request.user.is_authenticated() else None)
    return HttpResponseRedirect(reverse_i18n('registration_package', kwargs={'pk': 6}))


def registration_package(request, pk=None, **kwargs):
    """stores the package_pk in storage"""
    ...


def registration_personal(request, pk=None, **kwargs):
    from .forms import PersonalForm
    storage = _request_storage(request)
    if request.method == 'POST':
        form = PersonalForm(request.POST)
        if form.is_valid():
            storage.store_form('personal_form_data', form)
            return HttpResponseRedirect(reverse_i18n('registration_company'))
    else:
        if request.GET.get('revalidate', False):
            form = storage.retrieve_form('personal_form_data', PersonalForm)
        else:
            form = storage.retrieve_initial_form('personal_form_data', PersonalForm)
    return render_to_response('registration/personal.html', {
        'form': form,
        'step': 'personal',
        'previous': _previous_steps(request),
    }, context_instance=RequestContext(request))

# the other steps are pretty much the same
# the other steps are pretty much the same
# storage.py

class Storage(object):
    def __init__(self, request, prefix):
        self.request = request
        self.prefix = prefix

    def _debug(self):
        from pprint import pprint
        self._init_storage()
        pprint(self.request.session[self.prefix])

    def exists(self):
        return self.prefix in self.request.session

    def has(self, key):
        if self.prefix in self.request.session:
            return key in self.request.session[self.prefix]
        return False

    def _init_storage(self):
        if self.prefix not in self.request.session:
            self.request.session[self.prefix] = {}
            self.request.session.modified = True

    def clear(self):
        self.request.session[self.prefix] = {}
        self.request.session.modified = True

    def store_data(self, key, data):
        self._init_storage()
        self.request.session[self.prefix][key] = data
        self.request.session.modified = True

    def update_data(self, key, data):
        self._init_storage()
        if key in self.request.session[self.prefix]:
            self.request.session[self.prefix][key].update(data)
            self.request.session.modified = True
        else:
            self.store_data(key, data)

    def retrieve_data(self, key, fallback=None):
        self._init_storage()
        return self.request.session[self.prefix].get(key, fallback)

    def store_form(self, key, form):
        self.store_data(key, form.data)

    def retrieve_form_data(self, key):
        return self.retrieve_data(key)

    def retrieve_form(self, key, form_class):
        data = self.retrieve_form_data(key)
        form = form_class(data=data)
        return form

    def retrieve_initial_form(self, key, form_class):
        data = self.retrieve_form_data(key)
        form = form_class(initial=self.convert_form_data_to_initial(data))
        return form

    def convert_form_data_to_initial(self, data):
        result = {}
        if data is None:
            return result
        for key in data:
            try:
                values = data.getlist(key)
                if len(values) > 1:
                    result[key] = values
                else:
                    result[key] = data.get(key)
            except AttributeError:
                result[key] = data.get(key)
        return result

    def retrieve_process_form(self, key, form_class, initial=None):
        if self.request.method == 'POST':
            return form_class(data=self.request.POST)
        else:
            data = self.retrieve_form_data(key)
            initial = self.convert_form_data_to_initial(data) or initial
            return form_class(initial=initial)
I've tested this across browsers, computers and networks. When I clear the cache manually it works again.
(Easy to see with just debugging print() statements which do not get called with a warm cache.)
It can't be too hard to selectively disable caching, right?
Additional question:
Could it be that the @never_cache() decorator just prevents browser caching and has nothing to do with the Redis cache?
I'm new to Python REST APIs and am facing a particular problem. When I send a pathlabid (the primary key) as input, I want the data associated with that key as output. When I run the following code I only get the data corresponding to the first row of the table in the database, even when the id I enter belongs to some other row.
This is views.py:
class pathlabAPI(View):

    @csrf_exempt
    def dispatch(self, *args, **kwargs):
        # don't worry about the CSRF here
        return super(pathlabAPI, self).dispatch(*args, **kwargs)

    def post(self, request):
        post_data = json.loads(request.body)
        Pathlabid = post_data.get('Pathlabid') or ''
        lablist = []
        labdict = {}
        lab = pathlab()
        labs = lab.apply_filter(Pathlabid=Pathlabid)
        if Pathlabid:
            for p in labs:
                labdict["Pathlabid"] = p.Pathlabid
                labdict["name"] = p.Name
                labdict["email_id"] = p.Emailid
                labdict["contact_no"] = p.BasicContact
                labdict["alternate_contact_no"] = p.AlternateContact
                labdict["bank_account_number"] = p.Accountnumber
                labdict["ifsccode"] = p.IFSCcode
                labdict["country"] = p.Country
                labdict["homepickup"] = p.Homepickup
                lablist.append(labdict)
                return HttpResponse(json.dumps(lablist))
        else:
            for p in labs:
                labdict["bank_account_number"] = p.Accountnumber
                lablist.append(labdict)
                return HttpResponse(json.dumps(lablist))
There are a number of issues with the overall approach and code, but to fix the issue you're describing, I agree with the other answer: you need to take the return statement out of the loop. Right now you're returning your list after the first pass through the loop, which is why you always get a list with one element. Here's a fix for that (note that a fresh dict is built for each row so the list entries don't all share the same object; you'll also need from django.http import JsonResponse at the top of your code, and safe=False because the top-level object is a list):
if Pathlabid:
    for p in labs:
        labdict = {}  # build a fresh dict for each row
        labdict["Pathlabid"] = p.Pathlabid
        labdict["name"] = p.Name
        labdict["email_id"] = p.Emailid
        labdict["contact_no"] = p.BasicContact
        labdict["alternate_contact_no"] = p.AlternateContact
        labdict["bank_account_number"] = p.Accountnumber
        labdict["ifsccode"] = p.IFSCcode
        labdict["country"] = p.Country
        labdict["homepickup"] = p.Homepickup
        lablist.append(labdict)
else:
    for p in labs:
        labdict = {}
        labdict["bank_account_number"] = p.Accountnumber
        lablist.append(labdict)
return JsonResponse(lablist, safe=False)
As suggested in the comments, using Django Rest Framework or a similar package would be an improvement. As a general rule, in Django or other ORMs, you want to avoid looping over a queryset like this and adjusting each element. Why not serialize the queryset itself and do the logic that's in this loop in your template or other consumer?
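For illustration, a minimal Django REST Framework sketch of that idea (assuming pathlab is a regular Django model with a default manager; the field names are taken from the code above, and the serializer/view names are just examples):

from rest_framework import serializers, generics

class PathlabSerializer(serializers.ModelSerializer):
    class Meta:
        model = pathlab
        fields = ('Pathlabid', 'Name', 'Emailid', 'BasicContact',
                  'AlternateContact', 'Accountnumber', 'IFSCcode',
                  'Country', 'Homepickup')

class PathlabDetail(generics.RetrieveAPIView):
    # GET on this view returns the serialized row for the given Pathlabid
    queryset = pathlab.objects.all()
    serializer_class = PathlabSerializer
    lookup_field = 'Pathlabid'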
You are returning the response inside the for loop, so the loop breaks on the first entry.
import json

some_list = []
for i in data:
    some_list.append({"key": "value"})

return HttpResponse(json.dumps({"some_list": some_list}), content_type="application/json")
Try the example above to solve your problem.
I wanted to create proper post_create (also post_get and post_put) hooks, similar to the ones I had in the db version of my app.
Unfortunately I can't use has_complete_key.
The problem is quite known: lack of is_saved in a model.
Right now I have implemented it like this:
class NdbStuff(HooksInterface):
    def __init__(self, *args, **kwds):
        super(NdbStuff, self).__init__(*args, **kwds)
        self._is_saved = False

    def _put_async(self, post_hooks=True, **ctx_options):
        """ Implementation of pre/post create hooks. """
        if not self._is_saved:
            self._pre_create_hook()
        fut = super(NdbStuff, self)._put_async(**ctx_options)
        if not self._is_saved:
            fut._immediate_callbacks.insert(
                0,
                (
                    self._post_create_hook,
                    [fut],
                    {},
                )
            )
        self._is_saved = True
        if post_hooks is False:
            fut._immediate_callbacks = []
        return fut

    put_async = _put_async

    @classmethod
    def _post_get_hook(cls, key, future):
        obj = future.get_result()
        if obj is not None:
            obj._is_saved = True
        cls._post_get(key, future)

    def _post_put_hook(self, future):
        if future.state == future.FINISHING:
            self._is_saved = True
        else:
            self._is_saved = False
        self._post_put(future)
Everything except the post_create hook seems to work.
The post_create hook is triggered every time I use put_async without retrieving the object first.
I would really appreciate a clue on how to trigger the post_create_hook only once after the object was created.
I am not sure why you are creating the NDBStuff class.
Anyway, if you are creating an instance of a class and you want to track _is_saved or something similar, use a factory to control creation and the setting of the flag; in this case it makes more sense to track _is_new, for example.
class MyModel(ndb.Model):
    some_prop = ndb.StringProperty()

    def _pre_put_hook(self):
        if getattr(self, '_is_new', None):
            self._pre_create_hook()
        # do something

    def _pre_create_hook(self):
        # do something on first save
        log.info("First put for this object")

    def _post_create_hook(self, future):
        # do something
        pass

    def _post_put_hook(self, future):
        if getattr(self, '_is_new', None):
            self._post_create_hook(future)
            # Get rid of the flag on successful put,
            # in case you make some changes and save again.
            delattr(self, '_is_new')

    @classmethod
    def factory(cls, *args, **kwargs):
        new_obj = cls(*args, **kwargs)
        setattr(new_obj, '_is_new', True)
        return new_obj
Then
myobj = MyModel.factory(someargs)
myobj.put()
myobj.some_prop = 'test'
myobj.put()
This will call _pre_create_hook on the first put, but not on the second.
Always create entities through the factory and the call to _pre_create_hook will always be executed on the first save.
Does that make sense?
I have built an application that has a lot of similar views that should be able to share the same base code. However, each view has unique characteristics at various inflection points within the methods, such that I haven't found a way to structure this to actually reuse any code. Instead I've created a cut-and-paste methodology and tweaked each method individually. This part of the application was some of the first Python code I ever wrote, and I know there must be a better way to do it, but I got locked into doing it this way and "it works", so I can't see a way out.
Here's what the base view template essentially looks like:
def view_entity(request, entity_id=None):
    if request.method == 'POST':
        return _post_entity(request, entity_id)
    else:
        return _get_entity(request, entity_id)


def _get_entity(request, entity_id):
    data = _process_entity(request, entity_id)
    if 'redirect' in data:
        return data['redirect']
    else:
        return _render_entity(request, data['form'])


def _post_entity(request, entity_id):
    data = _process_entity(request, entity_id)
    if 'redirect' in data:
        return data['redirect']
    elif data['form'].is_valid():
        # custom post processing here
        instance = data['form'].save()
        return HttpResponseRedirect(reverse('entity', args=[instance.id]))
    else:
        return _render_entity(request, data['form'])


def _process_entity(request, entity_id):
    data = {}
    if entity_id != 'new':  # READ/UPDATE
        # sometimes there's custom code to retrieve the entity
        e = entity_id and get_object_or_404(Entity.objects, pk=entity_id)
        # sometimes there's custom code here that deauthorizes e
        # sometimes extra values are added to data here (e.g. parent entity)
        if e:
            if request.method == 'POST':
                data['form'] = EntityForm(request.POST, instance=e)
                # sometimes there's a conditional here for CustomEntityForm
            else:
                data['form'] = EntityForm(instance=e)
        else:  # user not authorized for this entity
            return {'redirect': HttpResponseRedirect(reverse('home'))}
        # sometimes there's custom code here for certain entity types
    else:  # CREATE
        if request.method == 'POST':
            data['form'] = EntityForm(request.POST)
        else:
            data['form'] = EntityForm()
    # sometimes extra key/values are added to data here
    return data
I didn't even include all the possible variations, but as you can see, the _process_entity method requires a lot of individual customization based upon the type of entity being processed. This is the primary reason I can't figure out a DRY way to handle this.
Any help is appreciated, thanks!
Use class-based views. You can use inheritance and other class features to make your views more reusable, and the built-in generic views simplify many of the basic tasks. Check the class-based views documentation.
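For a sense of what that looks like, here is a minimal sketch using the generic editing views (assuming the Entity model and EntityForm from your code; the template name and URL name are just examples):

from django.core.urlresolvers import reverse
from django.views.generic import CreateView, UpdateView

class EntityCreateView(CreateView):
    model = Entity
    form_class = EntityForm
    template_name = 'entity/form.html'  # hypothetical template

    def get_success_url(self):
        return reverse('entity', args=[self.object.id])

class EntityUpdateView(UpdateView):
    model = Entity
    form_class = EntityForm
    template_name = 'entity/form.html'

    def get_success_url(self):
        return reverse('entity', args=[self.object.id])

The per-entity authorization checks and tweaks you describe would go into overrides of get_object(), get_form_class() or form_valid() on the relevant subclass.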
So I did end up refactoring the code into a base class that all my views inherit from. I didn't end up refactoring into multiple views (yet), but instead solved the problem of having custom processing methods by inserting hooks within the processing method.
Here's the gist of the base class that inherits from DetailView:
class MyDetailView(DetailView):
    context = {}

    def get(self, request, *args, **kwargs):
        self._process(request, *args, **kwargs)
        if 'redirect' in self.context:
            return HttpResponseRedirect(self.context['redirect'])
        else:
            return self._render(request, *args, **kwargs)

    def post(self, request, *args, **kwargs):
        self._process(request, *args, **kwargs)
        if 'redirect' in self.context:
            return HttpResponseRedirect(self.context['redirect'])
        elif self.context['form'].is_valid():
            self._get_hook('_pre_save')(request, *args, **kwargs)
            return self._save(request, *args, **kwargs)
        else:
            return self._render(request, *args, **kwargs)

    def _process(self, request, *args, **kwargs):
        form = getattr(app.forms, '%sForm' % self.model.__name__)
        if kwargs['pk'] != 'new':  # READ/UPDATE
            self.object = self.get_object(request, *args, **kwargs)
            self._get_hook('_auth')(request, *args, **kwargs)
            if not self.object:  # user not authorized for this entity
                return {'redirect': reverse(
                    '%s_list' % self.model.__name__.lower())}
        self.context['form'] = form(
            data=request.POST if request.method == 'POST' else None,
            instance=self.object if hasattr(self, 'object') else None)
        self._get_hook('_post_process')(request, *args, **kwargs)

    def _get_hook(self, hook_name):
        try:
            return getattr(self, '%s_hook' % hook_name)
        except AttributeError, e:
            def noop(*args, **kwargs):
                pass
            return noop
The key part to note is the _get_hook method and the places within the other methods where I use it. That way, in some complex view I can inject custom code like this:
class ComplexDetailView(MyDetailView):
    def _post_process_hook(self, request, *args, **kwargs):
        # here I can add stuff to self.context using
        # self.model, self.object, request.POST or whatever
        pass
This keeps my custom views small since they inherit the bulk of the functionality but I can add whatever tweaks are necessary for that specific view.
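For completeness, wiring one of these views into the URLconf looks something like this (a sketch; the URL pattern, import paths and names are just examples):

# urls.py (sketch)
from django.conf.urls import url
from app.models import Entity       # example import paths
from app.views import ComplexDetailView

urlpatterns = [
    url(r'^entity/(?P<pk>\d+|new)/$',
        ComplexDetailView.as_view(model=Entity),
        name='entity'),
]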
Objective:
Given something like:
stackoverflow.users['55562'].questions.unanswered()
I want it converted into the following:
http://api.stackoverflow.com/1.1/users/55562/questions/unanswered
I have been able to achieve that, using the following class:
class SO(object):
    def __init__(self, **kwargs):
        self.base_url = kwargs.pop('base_url', []) or 'http://api.stackoverflow.com/1.1'
        self.uriparts = kwargs.pop('uriparts', [])
        for k, v in kwargs.items():
            setattr(self, k, v)

    def __getattr__(self, key):
        self.uriparts.append(key)
        return self.__class__(**self.__dict__)

    def __getitem__(self, key):
        return self.__getattr__(key)

    def __call__(self, **kwargs):
        return "%s/%s" % (self.base_url, "/".join(self.uriparts))


if __name__ == '__main__':
    print SO().abc.mno.ghi.jkl()
    print SO().abc.mno['ghi'].jkl()

#prints the following
http://api.stackoverflow.com/1.1/abc/mno/ghi/jkl
http://api.stackoverflow.com/1.1/abc/mno/ghi/jkl
Now my problem is I can't do something like:
stackoverflow = SO()
user1 = stackoverflow.users['55562']
user2 = stackoverflow.users['55462']
print user1.questions.unanswered
print user2.questions.unanswered
#prints the following
http://api.stackoverflow.com/1.1/users/55562/users/55462/questions/unanswered
http://api.stackoverflow.com/1.1/users/55562/users/55462/questions/unanswered/questions/unanswered
Essentially, user1 and user2 refer to the same SO object, so they can't represent different users.
Any pointers on how to achieve that would be helpful, because this additional level of functionality would make the API far more interesting.
IMHO, when you create the new stackoverflow object, you need to separate the new instance's arguments from the old instance's attributes with a deep copy:
import copy

........

def __getattr__(self, key):
    new_attrs = copy.deepcopy(self.__dict__)
    new_attrs['uriparts'].append(key)
    return self.__class__(**new_attrs)

....
If you want more flexibility on the URI parts, an abstraction is needed for a cleaner design. For example:
class SOURIParts(object):
    def __init__(self, so, uriparts, **kwargs):
        self.so = so
        self.uriparts = uriparts
        for k, v in kwargs.items():
            setattr(self, k, v)

    def __getattr__(self, key):
        return SOURIParts(self.so, self.uriparts + [key])

    def __getitem__(self, key):
        return self.__getattr__(key)

    def __call__(self, **kwargs):
        return "%s/%s" % (self.so.base_url, "/".join(self.uriparts))


class SO(object):
    def __init__(self, base_url='http://api.stackoverflow.com/1.1'):
        self.base_url = base_url

    def __getattr__(self, key):
        # start a new chain of URI parts with the first attribute name
        return SOURIParts(self, [key])

    def __getitem__(self, key):
        return self.__getattr__(key)
I hope this helps.
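A quick check of the intended usage from the question, using the SOURIParts version above (expected output shown as comments):

so = SO()
user1 = so.users['55562']
user2 = so.users['55462']
print user1.questions.unanswered()
# http://api.stackoverflow.com/1.1/users/55562/questions/unanswered
print user2.questions.unanswered()
# http://api.stackoverflow.com/1.1/users/55462/questions/unanswered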
You could override __getslice__ (Python 2.x) or __getitem__ (Python 3.x) and use a memoizing decorator so that if the slice you request (the user id) has already been looked up it uses the cached results; otherwise it retrieves the results and populates the existing SO instance.
However, I think a more OO way to solve the problem is to make SO a pure lookup module that returns Stack Overflow user objects, which would then have the deeper-digging lookups for profile details. But that's just me.