Prioritizing calculated fields that depend on each other - python

In my Odoo instance I have several calculated fields on the analytic account object. These fields are computed on the fly so the viewer always has the most up-to-date overview.
Some of these fields depend on other fields that are themselves calculated fields. The calculations themselves are fairly simple (field A = field B + field C). Most of the fields also depend on the underlying child ids. For example, field A on the top object is a summary of all field A values of the child ids. Field A on the children is calculated from their own fields B and C combined, as described above.
The situation I currently find myself in is that for some reason the fields seem to be calculated in a random order. I noticed this because when I refresh in rapid succession I get different values for the same record.
Example:
Field B and C are both 10. I expect A to be 20 (B + C), but most of the time it's actually 0 because the calculation for A happens before B and C. Sometimes it's 10, since either B or C snuck in before A could finish. On very rare occasions it's actually 20...
Note:
- I cannot make the fields stored because they depend on account move lines, which are created at an incredible rate, and the database would go absolutely nuts recalculating all records every minute or so.
- I already added @api.depends, but that is only useful for stored fields, where it determines which fields should trigger recomputation, which is not applicable in my situation.
Does anyone know of a solution to this? Or have suggestions on alternative ways of calculating?
[EDIT] Added code
Example code:
@api.multi
@api.depends('child_ids', 'costs_allowed', 'total_cost')
def _compute_production_result(self):
    for rec in self:
        rec_prod_cost = 0.0
        if rec.usage_type in ['contract', 'project']:
            for child in rec.child_ids:
                rec_prod_cost += child.production_result
        elif rec.usage_type in ['cost_control', 'planning']:
            rec_prod_cost = rec.costs_allowed - rec.total_cost
        rec.production_result = rec_prod_cost
As you can see, if we are on a contract or project we need to look at the children (cost_control accounts) for their results and ADD them together. If we are actually on a cost_control account, then we can get the actual values by taking fields B and C and (in this case) subtracting them.
The problem occurs when EITHER the contract records are handled before the cost_control records OR the costs_allowed and total_cost fields are still 0.0 when the cost_control accounts are evaluated.
Mind you: costs_allowed and total_cost are both calculated fields in their own right!

You can do as they did in Invoice: many computed fields depend on many other fields, and a single compute method sets a value for each computed field.
@api.one
@api.depends('X', 'Y', ...)
def _compute_amounts(self):
    self.A = ...
    self.B = ...
    self.C = self.A + self.B
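Adapted to the fields from the question, that pattern might look roughly like this. This is only a sketch: the @api.depends trigger and the _calc_* helpers are hypothetical stand-ins for however costs_allowed and total_cost are really derived.
@api.multi
@api.depends('child_ids')  # hypothetical trigger; adjust to the real dependencies
def _compute_amounts(self):
    for rec in self:
        # _calc_costs_allowed / _calc_total_cost are hypothetical helpers
        # standing in for the real derivations (e.g. from account move lines).
        rec.costs_allowed = rec._calc_costs_allowed()
        rec.total_cost = rec._calc_total_cost()
        # The inputs were assigned just above, so this line never reads stale values.
        rec.production_result = rec.costs_allowed - rec.total_cost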

You may find Python's @property helpful. Rather than just using plain fields, this allows you to define something that looks like a field but is lazily evaluated, i.e. calculated on demand when you 'get' it. This way we can guarantee it's up to date. An example:
import datetime

class Person(object):
    def __init__(self):
        self._born = datetime.datetime.now()

    @property
    def age(self):
        return datetime.datetime.now() - self._born

p = Person()
# do some stuff...

# We can 'get' age just like a field, but it is lazily evaluated,
# i.e. calculated on demand, so it is guaranteed to be up to date.
print(p.age)

So I managed to find a colleague and we figured it out together.
As it turns out, when you define a method that computes a field both from the record's own values and from that same field on its child records, you need to mention the child dependency explicitly in the dependencies.
For example:
@api.multi
@api.depends('a', 'b', 'c')
def _compute_a(self):
    for rec in self:
        if condition:
            rec.a = sum(rec.child_ids.mapped('a'))
        else:
            rec.a = rec.b + rec.c
In this example, the self object contains records (1, 2, 3, 4).
If you include the child dependency but otherwise leave the code the same:
@api.multi
@api.depends('a', 'b', 'c', 'child_ids.a')
def _compute_a(self):
    for rec in self:
        if condition:
            rec.a = sum(rec.child_ids.mapped('a'))
        else:
            rec.a = rec.b + rec.c
then Odoo will run this method 4 times, starting with the lowest/deepest candidate. So self in this case will be (4), then (3), etc.
Too bad this logic seems to be implied and not really described anywhere (as far as I could see).
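Applied to the compute method from the question, the fix is simply to list the child field in the dependencies (a sketch using the field names from the question):
@api.multi
@api.depends('child_ids.production_result', 'costs_allowed', 'total_cost')
def _compute_production_result(self):
    for rec in self:
        rec_prod_cost = 0.0
        if rec.usage_type in ['contract', 'project']:
            # The children are now guaranteed to be computed before their parent.
            rec_prod_cost = sum(rec.child_ids.mapped('production_result'))
        elif rec.usage_type in ['cost_control', 'planning']:
            rec_prod_cost = rec.costs_allowed - rec.total_cost
        rec.production_result = rec_prod_cost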

Related

Should a class have several attributes or one attribute as a dictionary with many keys, Python3

I have a class for calculating temperatures of an object at different positions, based on ambient temperature. I have two ways of implementing it. In BodyA, the temperature of each position is an attribute of the class, while in BodyB there is an attribute pos_t, which is a dict, and the temperature of each position is just a key-value pair in that dict.
class BodyA:
    def __init__(self, ambient_temperature):
        self.amb_t = ambient_temperature
        self.pos1_t = self.amb_t + 1
        self.pos2_t = self.amb_t * 2
        self.pos3_t = self.pos1_t + self.pos2_t - 5

class BodyB:
    def __init__(self, ambient_temperature):
        self.amb_t = ambient_temperature
        self.pos_t = dict()
        self.pos_t['pos1'] = self.amb_t + 1
        self.pos_t['pos2'] = self.amb_t * 2
        self.pos_t['pos3'] = self.pos_t['pos1'] + self.pos_t['pos2'] - 5
In the practical case there are up to 10 positions, and I want to build child classes from it. Some child classes do not have certain positions; for example, pos2 can be missing in some child.
Could you please let me know which design is better in terms of OOP and efficiency. Thanks.
A data structure storing custom identifiers that may or may not exist clearly calls for a dict. As class attributes are also stored in an internal dict, the first approach can be used too, but practical manipulation without explicitly hand-writing the members will require different code. I suspect performance will not matter. If you find it does, maybe a redesign of the data structure that does not use classes at all will do, as object creation time may be relevant then.
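A minimal sketch of the dict-based design with a child class that omits a position (BodyChild is a hypothetical subclass; the other names come from the question):
class BodyB:
    def __init__(self, ambient_temperature):
        self.amb_t = ambient_temperature
        self.pos_t = {'pos1': self.amb_t + 1, 'pos2': self.amb_t * 2}
        self.pos_t['pos3'] = self.pos_t['pos1'] + self.pos_t['pos2'] - 5

class BodyChild(BodyB):
    # Hypothetical child that has no 'pos2': missing positions are just absent keys.
    def __init__(self, ambient_temperature):
        super().__init__(ambient_temperature)
        del self.pos_t['pos2']

b = BodyChild(20.0)
print('pos2' in b.pos_t)  # False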

How to judge if a django model object is exactly a BaseClass instance?

I have a ParentModel model in django:
class ParentModel(models.Model):
    field_a = models.IntegerField()
    field_b = models.IntegerField()

    class Meta:
        db_table = 'table_parent'
And I defined a subclass ChildModel after that:
class ChildModel(ParentModel):
    field_c = models.IntegerField()

    class Meta:
        db_table = 'table_child'
The above will create two tables in my database, which we called table_parent and table_child.
So, now I create two instances:
first_obj = ParentModel.objects.create(...) # id=1
second_obj = ChildModel.objects.create(...) # id=2
And it will create two objects, which totally inserted two rows in table_parent and one row in table_child.
Now I fetch the instances again, but both via ParentModel:
first_obj = ParentModel.objects.get(id=1)  # id=1
second_obj = ParentModel.objects.get(id=2)  # id=2
So, in fact, the second_obj is a ChildModel instance. I want a neat way to judge it, like:
first_obj.is_exact_base() # I want it to be True
second_obj.is_exact_base() # I want it to be False
Moreover, I may have more than one subclass of ParentModel, and I want this to work well in that case too.
My effort:
class ParentModel(models.Model):
    ...
    def is_exact_base(self):
        try:
            child = self.childmodel
            return False
        except:
            return True
This method works, but there is too much redundancy. Is there a better implementation for my problem?
Could you provide a complete minimal working example? I've never used Django and I don't quite see what you are trying to do.
From a Python programmer's perspective (sorry, I'm convinced that I did not understand everything), if you want to know whether objectA is an instance of BaseClass, what you want is:
isinstance(objectA, BaseClass)
If this cannot be applied to your case, then Python has no way to tell the difference. Just like if you do:
def f(a, b):
    a.append(b)

def g(a, b):
    a.append(b)

a = []
f(a, 0)
g(a, 1)

>>> print(a)
[0, 1]
You have absolutely no way of telling which function appended what.
So if you are in this case, you either need to add a new column to your table that contains that information, or the object you are writing should carry that information itself.
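A sketch of the extra-column idea, with a hypothetical real_type field that records the concrete class name when the row is first saved:
from django.db import models

class ParentModel(models.Model):
    field_a = models.IntegerField()
    field_b = models.IntegerField()
    # Hypothetical column recording which concrete class created the row.
    real_type = models.CharField(max_length=50, editable=False)

    def save(self, *args, **kwargs):
        if not self.pk:
            self.real_type = self.__class__.__name__
        super(ParentModel, self).save(*args, **kwargs)

    def is_exact_base(self):
        # True only for rows created directly as ParentModel.
        return self.real_type == 'ParentModel'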

How to avoid circular dependencies when setting Properties?

This is a design principle question for classes dealing with mathematical/physical equations where the user is allowed to set any parameter, from which the remaining ones are then calculated.
In this example I would like to be able to let the frequency be set as well while avoiding circular dependencies.
For example:
from traits.api import HasTraits, Float, Property
from scipy.constants import c, h

class Photon(HasTraits):
    wavelength = Float  # would like to do Property, but that would be circular?
    frequency = Property(depends_on='wavelength')
    energy = Property(depends_on=['wavelength', 'frequency'])

    def _get_frequency(self):
        return c / self.wavelength

    def _get_energy(self):
        return h * self.frequency
I'm also aware of an update trigger timing problem here, because I don't know the sequence in which the updates will be triggered:
Wavelength is being changed
That triggers an update of both dependent entities: frequency and energy
But energy needs frequency to be updated first, so that energy has the value fitting the new wavelength!
(The answer to be accepted should also address this potential timing problem.)
So, what's the best design pattern to get around these interdependency problems?
In the end I want the user to be able to update either wavelength or frequency, and frequency/wavelength and energy shall be updated accordingly.
This kind of problem of course arises in basically all classes that try to deal with equations.
Let the competition begin! ;)
Thanks to Adam Hughes and Warren Weckesser from the Enthought mailing list I realized what I was missing in my understanding.
Properties do not really exist as an attribute. I now look at them as something like a 'virtual' attribute that completely depends on what the writer of the class does at the time a _getter or _setter is called.
So when I want the user to be able to set wavelength AND frequency, I only need to understand that frequency itself does not exist as an attribute; instead, at the time frequency is set, I need to update the 'fundamental' attribute wavelength, so that the next time frequency is required it is calculated again from the new wavelength!
I also need to thank the user sr2222 who made me think about the missing caching. I realized that the dependencies I set up by using the keyword 'depends_on' are only required when using the 'cached_property' Trait. If the cost of calculation is not that high or it's not executed that often, the _getters and _setters take care of everything that one needs and one does not need to use the 'depends_on' keyword.
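For completeness, a sketch of that cached variant (it assumes the same wavelength/frequency setup; CachedPhoton is a hypothetical name, and cached_property plus depends_on are the relevant Traits pieces):
from traits.api import HasTraits, Float, Property, cached_property
from scipy.constants import c

class CachedPhoton(HasTraits):  # hypothetical cached variant
    wavelength = Float(1.0)
    frequency = Property(depends_on='wavelength')

    @cached_property
    def _get_frequency(self):
        # Recomputed only when wavelength changes; otherwise served from the cache.
        return c / self.wavelength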
Here is the streamlined solution I was looking for, which allows me to set either wavelength or frequency without circular loops:
class Photon(HasTraits):
    wavelength = Float
    frequency = Property
    energy = Property

    def _wavelength_default(self):
        return 1.0

    def _get_frequency(self):
        return c / self.wavelength

    def _set_frequency(self, freq):
        self.wavelength = c / freq

    def _get_energy(self):
        return h * self.frequency
One would use this class like this:
photon = Photon(wavelength = 1064)
or
photon = Photon(frequency = 300e6)
to set the initial values and to get the energy now, one just uses it directly:
print(photon.energy)
Please note that the _wavelength_default method takes care of the case where the user initializes the Photon instance without providing an initial value. This method is used only on the first access of wavelength, to determine its value. If I did not do this, the first access of frequency would result in a division by zero.
I would recommend teaching your application what can be derived from what. For example, a typical case is that you have a set of n variables, and any one of them can be derived from the rest. (You can model more complicated cases as well, of course, but I wouldn't do it until you actually run into such cases.)
This can be modeled like this:
import numbers

class UnknownVariable(Exception):
    pass

class DuplicateVariable(Exception):
    pass

# variable_derivations is a dictionary: variable_id -> function
# each function produces this variable's value given all the other variables as kwargs
class SimpleDependency:
    _registry = {}

    def __init__(self, variable_derivations):
        unknown_variable_ids = variable_derivations.keys() - self._registry.keys()
        if unknown_variable_ids:
            raise UnknownVariable(next(iter(unknown_variable_ids)))
        self.variable_derivations = variable_derivations

    def register_variable(self, variable, variable_id):
        if variable_id in self._registry:
            raise DuplicateVariable(variable_id)
        self._registry[variable_id] = variable

    def update(self, updated_variable_id, new_value):
        if updated_variable_id not in self.variable_derivations:
            raise UnknownVariable(updated_variable_id)
        self._registry[updated_variable_id].assign(new_value)
        other_variable_ids = self.variable_derivations.keys() - {updated_variable_id}
        for variable_id in other_variable_ids:
            function = self.variable_derivations[variable_id]
            arguments = {var_id: self._registry[var_id]
                         for var_id in self.variable_derivations.keys() - {variable_id}}
            self._registry[variable_id].assign(function(**arguments))

class FloatVariable(numbers.Real):
    def __init__(self, variable_id, variable_value=0):
        self.variable_id = variable_id
        self.value = variable_value

    def assign(self, value):
        self.value = value

    def __float__(self):
        return self.value
This is just a sketch, I didn't test or think through every possible issue.

Fetching inherited model objects in django

I have a django application with the following model:
Object A is a simple object extending Model with a few fields; let's say two particular ones are a char field called "NAME" and an integer field called "ORDER". A is abstract, meaning there are no A objects in the database, but instead...
Objects B and C are specializations of A, meaning they inherit from A and they add some other fields.
Now suppose I need all the objects whose NAME field starts with the letter "Z", ordered by the ORDER field, but I want all the B- and C-specific fields too for those objects. Now I see 2 approaches:
a) Do the queries individually for B and C objects and fetch two lists, merge them, order manually and work with that.
b) Query A objects for names starting with "Z" ordered by "ORDER" and with the result query the B and C objects to bring all the remaining data.
Both approaches sound highly inefficient: in the first one I have to order the results myself, and in the second one I have to query the database multiple times.
Is there a magical way I'm missing to fetch all B and C objects, ordered in one single method? Or at least a more efficient way to do this than the both mentioned?
Thanks in Advance!
Bruno
If A can be concrete, you can do this all in one query using select_related.
from django.db import connection

q = A.objects.filter(NAME__istartswith='z').order_by('ORDER').select_related('b', 'c')
for obj in q:
    obj = obj.b or obj.c or obj
    print(repr(obj), obj.__dict__)  # (to prove the subclass-specific attributes exist)
print("query count:", len(connection.queries))
This question was answered here.
Use the InheritanceManager from the django-model-utils project.
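A sketch of how that might look (it assumes A is concrete, with the NAME and ORDER fields from the question, and attaches the manager to it):
from django.db import models
from model_utils.managers import InheritanceManager

class A(models.Model):
    NAME = models.CharField(max_length=100)
    ORDER = models.IntegerField()
    objects = InheritanceManager()

# select_subclasses() makes the queryset return B and C instances
# instead of plain A rows, in a single ordered query.
qs = A.objects.filter(NAME__istartswith='z').order_by('ORDER').select_subclasses()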
Querying using your "b" method will allow you to "bring in" all the remaining data without querying your B and C models separately. You can use the "dot lowercase model name" relation.
http://docs.djangoproject.com/en/dev/topics/db/models/#multi-table-inheritance
for object in A.objects.filter(NAME__istartswith='z').order_by('ORDER'):
    if object.b:
        # do something
        pass
    elif object.c:
        # do something
        pass
You may need to try and except DoesNotExist exceptions. I'm a bit rusty with my django. Good Luck.
So long as you order both queries on B and C, it is fairly easy to merge them without having to do an expensive re-sort:
# first define a couple of helper functions
def next_or(iterable, other):
    try:
        return next(iterable), None
    except StopIteration:
        return None, other

def merge(x, y, func=lambda a, b: a <= b):
    ''' merges a pair of sorted iterables '''
    xs = iter(x)
    ys = iter(y)
    a, r = next_or(xs, ys)
    b, r = next_or(ys, xs)
    while r is None:
        if func(a, b):
            yield a
            a, r = next_or(xs, ys)
        else:
            yield b
            b, r = next_or(ys, xs)
    else:
        if a is not None:
            yield a
        else:
            yield b
        for o in r:
            yield o

# now get your objects & then merge them
b_qs = B.objects.filter(NAME__startswith='Z').order_by('ORDER')
c_qs = C.objects.filter(NAME__startswith='Z').order_by('ORDER')
for obj in merge(b_qs, c_qs, lambda a, b: a.ORDER <= b.ORDER):
    print(repr(obj), obj.__dict__)
The advantage of this technique is that it works with an abstract base class.
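For what it's worth, the standard library can do this merge as well; a sketch using heapq.merge (its key argument needs Python 3.5+):
import heapq

b_qs = B.objects.filter(NAME__startswith='Z').order_by('ORDER')
c_qs = C.objects.filter(NAME__startswith='Z').order_by('ORDER')
# heapq.merge lazily merges iterables that are already sorted by the key.
for obj in heapq.merge(b_qs, c_qs, key=lambda o: o.ORDER):
    print(repr(obj), obj.__dict__)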

creating class instances from a list

Using Python... I have a list that contains names. I want to use each item in the list to create an instance of a class. I can't use these items in their current condition (they're strings). Does anyone know how to do this in a loop?
class trap(movevariables):
    def __init__(self):
        movevariables.__init__(self)
        if self.X == 0:
            self.X = float(input('Move Distance(mm) '))
        if self.Vmax == 0:
            self.Vmax = float(input('Max Velocity? (mm/s) '))
        if self.A == 0:
            percentg = float(input('Acceleration as decimal percent of g '))
            self.A = percentg * 9806.65
        self.Xmin = (self.Vmax ** 2) / (2 * self.A)
        self.calc()

    def calc(self):
        if (self.X / 2) > self.Xmin:
            # to reach maximum velocity, the move is a symmetrical trapezoid
            # and the (acceleration time * 2) is used
            self.ta = 2 * (self.Vmax / self.A)
            # used to calculate the total amount of time consumed by acceleration and deceleration
            self.halfta = self.ta / 2.
            self.xa = .5 * self.A * self.halfta ** 2
        else:
            # if the move is not a trapezoid, MaxV is not reached and the acceleration
            # time is set to zero for subsequent calculations
            self.ta = 0
        if (self.X / 2) < self.Xmin:
            self.tva = (self.X / self.A) ** .5
            self.halftva = self.tva / 2
            self.Vtriang = self.A * self.halftva
        else:
            self.tva = 0
        if (self.X / 2) > self.Xmin:
            # calculate the constant-velocity time if you DO get to it
            self.tvc = (self.X - 2 * self.Xmin) / self.Vmax
        else:
            self.tvc = 0
        self.t = self.ta + self.tva + self.tvc
        print(self)
I'm a mechanical engineer. The trap class describes a motion profile that is common throughout the design of our machinery. There are many independent axes (trap classes) in our equipment so I need to distinguish between them by creating unique instances. The trap class inherits from movevariables many getter/setter functions structured as properties. In this way I can edit the variables by using the instance names. I'm thinking that I can initialize many machine axes at once by looping through the list instead of typing each one.
You could use a dict, like:
classes = {"foo": foo, "bar": bar}
then you could do:
myvar = classes[somestring]()
This way you'll have to initialize and keep the dict, but you will have control over which classes can be created.
The getattr approach seems right, a bit more detail:
def forname(modname, classname):
    ''' Returns a class of "classname" from module "modname". '''
    module = __import__(modname)
    classobj = getattr(module, classname)
    return classobj
From a blog post by Ben Snider.
If it is a list of class names in string form you can:
classes = ['foo', 'bar']
for class_name in classes:
    obj = eval(class_name)
and to create an instance you simply do this:
instance = obj(arg1, arg2, arg3)
EDIT
If you want to create several instances of the class trap, here is what to do:
namelist=['lane1', 'lane2']
traps = dict((name, trap()) for name in namelist)
That will create a dictionary that maps each name to the instance.
Then to access each instance by name you do:
traps['lane1'].Vmax
You're probably looking for getattr.
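For example, a minimal sketch, assuming the trap class lives in a module named machine_axes (a hypothetical module name):
import machine_axes  # hypothetical module containing the trap class

cls = getattr(machine_axes, 'trap')  # look the class up by its string name
axis = cls()                         # then instantiate it like any other class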
