Using: Django 1.10, django-reversion 2.0.8.
My question is how to show a nice list of changes made to a given model instance: the user should be able to quickly see all the changes (new field values) across all revisions. They don't need to see all the fields, only the new values of the changed ones.
So I found that a good tool for storing changes is django-reversion. However, I cannot find a solution to my problem, which, as I mentioned, is to show a nice change-log history for a given model instance.
I found a solution that can compare two revisions, django-reversion-compare, but that is not what I am looking for. Maybe there is a better tool for this?
The task is to quickly show the user what was changed, by whom, and when. The model is simple and doesn't store a lot of data; it does, however, store foreign keys.
I was also looking to do the same, and after reading a few SO posts, the docs, etc., it seemed I had to choose roughly from one of the following three approaches:
1) Fetch the existing model instance before saving the new one. Compare each field. Put the changed fields in reversion.set_comment('(all changes here)'). Continue with saving the model instance.
2) Save a copy of the old fields separately in the model's __init__() and later compare the new fields with them (in the model's save()) to track what changed. Put the changed fields in reversion.set_comment('(all changes here)'). Continue with saving the model instance. (This approach saves a DB lookup; see the sketch after this list.)
3) Generate a diff using django-reversion's low-level API and integrate it with the admin somehow.
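For illustration, here is a minimal sketch of approach (2), assuming a hypothetical Article model registered with django-reversion. Note that reversion.set_comment() only takes effect inside an active revision block (reversion.create_revision() or the revision middleware):

from django.db import models
import reversion

@reversion.register()
class Article(models.Model):  # hypothetical model, not from the question
    TRACKED_FIELDS = ("title", "body")

    title = models.CharField(max_length=255)
    body = models.TextField()

    def __init__(self, *args, **kwargs):
        super(Article, self).__init__(*args, **kwargs)
        # Snapshot the values as loaded so save() can diff against them.
        self._original = dict((f, getattr(self, f)) for f in self.TRACKED_FIELDS)

    def save(self, *args, **kwargs):
        changes = ["%s: %r -> %r" % (f, self._original[f], getattr(self, f))
                   for f in self.TRACKED_FIELDS
                   if getattr(self, f) != self._original[f]]
        if changes:
            # Only effective inside reversion.create_revision().
            reversion.set_comment("; ".join(changes))
        super(Article, self).save(*args, **kwargs)
        # Re-snapshot so further saves diff against the new values.
        self._original = dict((f, getattr(self, f)) for f in self.TRACKED_FIELDS)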
I ended up using django-reversion-compare, which worked great for me, showing the edits wiki-style (it may well use approach (3) above anyway).
django-reversion's developer also confirmed (3) as the better option, since it also avoids a race condition.
If you would like to explore different options, this is a great SO post with lots of good ideas and their pros/cons.
(I am also on Django 1.10)
Related
I am making a CRM, and I ran into one task: I want to make a "Client" model with all possible fields, and give users the opportunity to "enable" only those "Client" fields that they need.
I have little experience and, unfortunately, I have not been able to find a solution for this for a long time.
I would be grateful if someone can show me an example of how this is done (or a link to a repository with a similar method).
The answer is JSONField. Modern DBMSs support JSON fields natively, and Django has had a cross-database JSONField since 3.1 (Postgres-only before that), so you can add the extra fields as attributes of a JSON object saved in a column called, for example, extra. Say you want to add a field called 'mobile2': rather than creating a new column, you can add it to the extra column like this:
obj.extra["mobile2"] = "0xxxxx"
This allows you to extend quickly and give each user different attributes.
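A minimal sketch of that idea (Client and its fields are placeholders; django.db.models.JSONField needs Django 3.1+):

from django.db import models

class Client(models.Model):
    name = models.CharField(max_length=255)
    # Per-user "enabled" extra fields live here instead of fixed columns.
    extra = models.JSONField(default=dict, blank=True)

# Usage:
client = Client.objects.create(name="Acme")
client.extra["mobile2"] = "0xxxxx"
client.save()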
Context:
I maintain legacy Django code. In many cases my code receives multiple model objects, each of which has a ForeignKey or a manually cached property that represents the same data entity. The data entities referred to in this way do not change. However, not all objects I receive have ever accessed those ForeignKey fields or cached properties, so the data may not be present on those objects, though it will be lazy-loaded on first access.
I do not easily have access to/control over the code that declares the models.
I want to find any one of the objects I have which has a primed cache, so that I can avoid hitting the database to retrieve the data I'm after (if none have it, I'll do it if I have to). This is because the fetch of that data is frequent enough that it causes performance issues.
Django 1.6, Python 2.7.
Problem
I can interrogate our manually cached fields and say internal_is_cached(instance, 'fieldname') without running a query. However, I cannot do this with ForeignKey fields.
Say I have a model class in Django, Foo, like so:
class Foo(models.Model):
    bar = models.ForeignKey('BarModel')
Question
If I get an instance of the model Foo from somewhere, but I do not know whether bar has ever been accessed on it, or whether it was eagerly fetched, how do I determine if reading instance.bar will query the database or not?
In other words, I want to externally determine if an arbitrary model has its internal cache primed for a given ForeignKey, with zero other knowledge about the state or source of that model.
What I've Tried
I tried model caching using the Django cache to make an "end run" around the issue. The fetches of the related data are frequent enough that they caused unsustainable load on our caching systems.
I tried various solutions from this question. They work well for modified models, but do not seem to work for models which haven't been mutated; I'm interested in the "lazy load" state of a model, not the "pending modification" state. Many of those solutions are also inapplicable, since they require changing model inheritance or behavior, which I'd like to avoid if possible (politics).
This question looked promising, but it requires control over the model initial-reader process. The model objects my code receives could come from anywhere.
Doing the reverse of the after-the-fact cache priming described in this writeup works for me in testing. However, it relies on the _default_manager internal attribute of model objects, which is known to be an inaccurate reference for some of our (highly customized) model objects in production. Some of them are quite weird, and I'd prefer to stick to documented (or at least stable and not frequently bypassed) APIs if possible.
Thanks @Daniel Roseman for the clarification.
With Django version <= 1.6 (working solution in your case)
You can check if your foo_instance has a _bar_cache attribute:
hasattr(foo_instance, "_bar_cache")
As explained here.
With Django version >= 2.0
The cached fields are now stored in the fields_cache dict on the _state attribute:
foo_instance._state.fields_cache["bar"]
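A small helper that wraps both checks (fk_is_cached is a hypothetical name, and it pokes at undocumented internals, so treat it as best-effort):

def fk_is_cached(instance, field_name):
    # Django >= 2.0 keeps per-instance FK caches on _state.fields_cache.
    fields_cache = getattr(instance._state, "fields_cache", None)
    if fields_cache is not None:
        return field_name in fields_cache
    # Older versions stash the cached object as e.g. "_bar_cache".
    return hasattr(instance, "_%s_cache" % field_name)

# Reading the field is then known to be query-free:
if fk_is_cached(foo_instance, "bar"):
    bar = foo_instance.bar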
I'm pretty much a novice, so I'll try to explain in a way that makes clear what I mean.
I'm coding a simple application in Django to track cash operations, amounts, etc.
So I have an Account model (with an amount field to track how much money is inside) and an Operation model (with an amount field as well).
I've created a model helper called Account.add_operation(amount). Here is my question:
Should I put the code that creates the new Operation inside Account.add_operation(amount), or should I do it in the views?
And should I call the save() method in the models (for example, at the end of Account.add_operation()), or must it be called in the views?
What's the best approach, to have code inside the models or inside the views?
Thanks for your attention and your patience.
Maybe you could use the rule "skinny controllers, fat models" to decide. Well, in Django it would be "skinny views".
To save related objects (in your case, Operation), I'd do it in the save() method or use the pre_save signal.
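For example, a minimal sketch of the signal variant, assuming the Account and Operation models from the question (field names are guesses):

from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=Operation)
def apply_operation(sender, instance, **kwargs):
    # Before a *new* Operation is saved, fold its amount into the account.
    if instance.pk is None:
        instance.account.amount += instance.amount
        instance.account.save()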
Hope this helps
Experienced Django users seem to always err on the side of putting code in models. In part, that's because it's a lot easier to unit test models; they're usually pretty self-contained, whereas views touch both models and templates.
Beyond that, I would just ask yourself whether the code pertains to the model itself or whether it's specific to the way it's being accessed and presented in a given view. I don't entirely understand your example (I think you're going to have to post some code if you want more specific help), but everything you mention sounds to me like it belongs in the model. That is, creating a new Operation sounds like an inherent part of what it means to do something called add_operation()!
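For concreteness, a minimal sketch of that model-centric version (the field names and related_name are assumptions, not taken from your code):

from django.db import models

class Account(models.Model):
    amount = models.DecimalField(max_digits=12, decimal_places=2, default=0)

    def add_operation(self, amount):
        # Creating the Operation is part of what add_operation() means,
        # so it lives in the model; views just call this method.
        operation = Operation.objects.create(account=self, amount=amount)
        self.amount += amount
        self.save()
        return operation

class Operation(models.Model):
    account = models.ForeignKey(Account, related_name="operations",
                                on_delete=models.CASCADE)
    amount = models.DecimalField(max_digits=12, decimal_places=2)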
I was wondering if there's a way to get notified of any changes to objects in a Django database. Right now I just need an email if anybody adds or changes anything, but it would be best if I could hook in a function triggered by any change and decide what to do.
Is there an easy way to do it in Django?
Two ideas come to mind:
Override the predefined model method for saving.
Use a signal like post_save.
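For example, a minimal post_save sketch using mail_admins (in practice you would restrict the sender argument to the models you care about):

from django.core.mail import mail_admins
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save)
def notify_on_change(sender, instance, created, **kwargs):
    # With no sender= filter, this fires for every model in the project.
    action = "added" if created else "changed"
    mail_admins("Object %s" % action,
                "%s instance %r was %s." % (sender.__name__, instance, action))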
Here is a good article that talks about the difference between the two things listed above and when to use them:
Django signals vs. custom save()-method
The article was written near the end of 2007, three days after the release of Django 0.96.1. However, I believe the advice the author gives still applies today.
I have a database model that is being updated based on changes in remote data (via an HTML scraper).
I want to maintain a field called changed: a timestamp denoting the last time the model's values actually changed from what they were previously (note that this is different from auto_now, since auto_now fields are updated every time a model's save() method is called).
Here is my question:
In a model's save method, is there a straightforward way to detect if a model instance's current values are different from the values in the database? Or, are there any alternative methods to easily maintain a changed timestamp?
If you save your instance through a form, you can check form.has_changed().
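For example (a sketch assuming a bound ModelForm and a changed timestamp field on the model):

from django.utils import timezone

if form.is_valid():
    instance = form.save(commit=False)
    if form.has_changed():  # bound data differs from the form's initial data
        instance.changed = timezone.now()
    instance.save()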
http://code.activestate.com/pypm/django-dirtyfields/
Tracks dirty/changed fields on a django model instance.
Sounds to me like what you want is Signals: http://docs.djangoproject.com/en/1.2/topics/signals/
You could use a post_save signal to update a related field in another model to store the previous value. Then on the next go-round you'd have something to compare.
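A minimal sketch of that idea, using a hypothetical Article model and a one-to-one "snapshot" model for the previous values:

from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver

class Article(models.Model):  # stand-in for the scraped model
    title = models.CharField(max_length=255)

class ArticleSnapshot(models.Model):
    # Holds the values as they were at the previous save.
    article = models.OneToOneField(Article, on_delete=models.CASCADE)
    title = models.CharField(max_length=255)

@receiver(post_save, sender=Article)
def store_previous(sender, instance, **kwargs):
    snapshot, _ = ArticleSnapshot.objects.get_or_create(article=instance)
    # Compare instance fields to the snapshot *before* this point
    # (e.g. in pre_save or save()) to know whether anything changed.
    snapshot.title = instance.title
    snapshot.save()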
You might try computing a checksum of the record values when you save them. Then when you read the record later, recompute the checksum and see if it has changed. Perhaps use the crc32 function from Python's zlib standard module. (I'm not sure what kind of performance this would have, so you may want to investigate that.)
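A minimal sketch of the checksum idea (record_checksum is a hypothetical helper; crc32 is fast but not collision-proof, so treat it as a heuristic):

import zlib

def record_checksum(instance, field_names):
    # Build a stable string from the tracked values and hash it.
    payload = u"|".join(repr(getattr(instance, f)) for f in field_names)
    # Mask to normalize the sign across Python 2 and 3.
    return zlib.crc32(payload.encode("utf-8")) & 0xFFFFFFFF

# In save(): compare record_checksum(self, FIELDS) to the stored value
# and update the changed timestamp only when they differ.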
This library also tracks FK lookups.
https://github.com/mmilkin/django_dirty_bits