In views.py I have:
my_computer = Computer.objects.get(pk=some_value)
The computer object has a field called projects that's a ManyRelatedManager.
Calling
my_projects = my_computer.projects.all()
will set the value of my_projects to a list of three project objects.
What I'm trying to achieve is to set the value of my_computer.projects to the above list of projects instead of the ManyRelatedManager.
I have tried:
my_computer.projects = my_projects
but that doesn't work, although it doesn't raise an error either. The value of my_computer.projects is still the ManyRelatedManager.
Manager objects implement __set__ - they behave as descriptors.
This means you cannot replace the manager by assigning to it (at least not while it is an attribute of another object: __set__ is only invoked in the context of __setattr__ on the owning object, where "owning" refers to composition, not inheritance).
You can assign any list-like (actually, any iterable) value to a manager, provided the iterable yields models of the expected type. However, this means:
When you query my_computer.projects, you will again get a manager object, now holding the objects you assigned.
When you save the object my_computer, only the specified objects will belong to the relationship - objects previously in the relationship will no longer be related to the current object.
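(For what it's worth, on recent Django versions the assignment form is gone entirely: related managers gained an explicit .set() method in Django 1.9, and direct assignment to a many-to-many or reverse relation raises an error from Django 2.0 on, so the equivalent operation there is:)

# Django >= 1.9: replace the relationship's contents explicitly
my_computer.projects.set(my_projects)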
There are three scenarios that could have led you to this issue:
You need to hold a volatile list - data that is not stored in any way, only used temporarily. You have to create a normal attribute in the class:
class Computer(models.Model):
    # normal database fields here
    def __init__(self, *args, **kwargs):
        super(Computer, self).__init__(*args, **kwargs)
        # ENSURE this attribute name does not collide with any field.
        # I'm assuming the Many manager name is projects.
        self.my_projects = []
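With this in place you can stash any snapshot you like on the instance (some_other_project below is a hypothetical Project instance):

my_computer = Computer.objects.get(pk=some_value)
my_computer.my_projects = list(my_computer.projects.all())
# my_projects is now a plain Python list; mutating it never touches the database
my_computer.my_projects.append(some_other_project)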
You need another representation of the exact same relationship - that is, you want a more comfortable way to access the objects than calling the somewhat odd .all(), e.g. to write [k.foo for k in my_computer.my_projects]. You have to create a property like this:
class Computer(models.Model):
    # Normal database fields here.
    # I'm assuming the Many manager name is projects.

    @property
    def my_projects(self):
        # remember: my_projects is another name.
        # it CANNOT collide, so I have another
        # name - cannot use projects as name.
        return list(self.projects.all())

    @my_projects.setter
    def my_projects(self, value):
        # this only abstracts the name, to match
        # the getter.
        self.projects = value
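Usage then looks like this (assuming Project has a name field; project_a and project_b are hypothetical instances):

# reading runs the getter and returns a plain list
names = [p.name for p in my_computer.my_projects]
# assigning runs the setter, which forwards to the real relation
my_computer.my_projects = [project_a, project_b]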
You need another relationship (so it's not volatile data): create ANOTHER relationship in your model, pointing to the same model, using the same through= if applicable, but with a different related_name= (you must explicitly set related_name on at least one of them when a model has multiple relationships to the same model), as sketched below.
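A minimal sketch of that third scenario, assuming a Project model and plain many-to-many fields:

class Computer(models.Model):
    projects = models.ManyToManyField(Project, related_name='computers')
    # a second, independent relationship to the same model; at least one
    # of the two must set related_name to avoid a reverse-accessor clash
    related_projects = models.ManyToManyField(Project, related_name='related_computers')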
You can't do that. Your best bet is to simply use another attribute name.
my_computer.related_projects = list(my_computer.projects.all())
I've extended a default class using _inherits. I am using Odoo v9.
class new_product_uom(models.Model):
    _inherits = {'product.uom': 'uomid'}
    _name = "newproduct.uom"

    uomid = fields.Many2one('product.uom', ondelete='cascade', required=True)
    # declare variables and functions specific to new_product_uom
    sellable = fields.Boolean('Sell products using this UoM?', default=True)
    [...]
If I delete the corresponding record in product.uom, the new_product_uom is deleted.
If I were to delete a new_product_uom record, nothing happens to the corresponding product.uom record.
I'd like for BOTH records to be automatically deleted when either is deleted. Is there a way I can do this? Thanks in advance for the help.
Clarification:
product.uom is a default Odoo model. It holds UoM records (inches, centimeters, etc). I use delegation inheritance to extend this model. See:
https://www.odoo.com/documentation/9.0/howtos/backend.html#model-inheritance
So, when I add a record for newproduct.uom, a record is automatically created under the model product.uom. I can assign the values of the corresponding record in product.uom by addressing them in newproduct.uom.
For my uses, it will be intended as a Parent->child relation, with newproduct.uom being the parent, and the default product.uom being the child. I chose this method of inheritance to allow quicker creation and modification of related values, as well as a separation of functions (rather than overriding the default methods for default operations).
Override unlink in your model (not sure if I have the correct class name). Collect the linked product.uom records, delete the current records, and then delete the linked ones - deleting the product.uom records first would trigger the ondelete='cascade' and remove the current records out from under the super() call.
@api.multi
def unlink(self):
    # grab the delegated product.uom records before they are orphaned
    uoms = self.mapped('uomid')
    res = super(new_product_uom, self).unlink()
    # now that the wrapper records are gone, remove the uom records too
    uoms.unlink()
    return res
I am using factory.LazyAttribute within a SubFactory call to pass in an object created in the factory_parent. This works fine.
But if I pass the object created to a RelatedFactory, LazyAttribute can no longer see the factory_parent and fails.
This works fine:
class OKFactory(factory.DjangoModelFactory):
    class Meta:
        model = Foo
        exclude = ['sub_object']

    sub_object = factory.SubFactory(SubObjectFactory)
    object = factory.SubFactory(
        ObjectFactory,
        sub_object=factory.LazyAttribute(lambda obj: obj.factory_parent.sub_object))
The identical call to LazyAttribute fails here:
class ProblemFactory(OKFactory):
    class Meta:
        model = Foo
        exclude = ['sub_object', 'object']

    sub_object = factory.SubFactory(SubObjectFactory)
    object = factory.SubFactory(
        ObjectFactory,
        sub_object=factory.LazyAttribute(lambda obj: obj.factory_parent.sub_object))
    another_object = factory.RelatedFactory(AnotherObjectFactory, 'foo', object=object)
The identical LazyAttribute call can no longer see factory_parent, and can only access AnotherObject values. LazyAttribute throws the error:
AttributeError: The parameter sub_object is unknown. Evaluated attributes are...[then lists all attributes of AnotherObjectFactory]
Is there a way round this?
I can't just put sub_object=sub_object into the ObjectFactory call, i.e.:
sub_object = factory.SubFactory(SubObjectFactory)
object = factory.SubFactory(ObjectFactory, sub_object=sub_object)
because if I then do:
object2 = factory.SubFactory(ObjectFactory, sub_object=sub_object)
a second sub_object is created, whereas I need both objects to refer to the same sub_object. I have tried SelfAttribute to no avail.
I think you can leverage the ability to override parameters passed in to the RelatedFactory to achieve what you want.
For example, given:
class MyFactory(OKFactory):
    object = factory.SubFactory(MyOtherFactory)
    related = factory.RelatedFactory(YetAnotherFactory)  # We want to pass object in here
If we knew what the value of object was going to be in advance, we could make it work with something like:
object = MyOtherFactory()
thing = MyFactory(object=object, related__param=object)
We can use this same naming convention to pass the object to the RelatedFactory within the main Factory:
class MyFactory(OKFactory):
    class Meta:
        exclude = ['object']

    object = factory.SubFactory(MyOtherFactory)
    related__param = factory.SelfAttribute('object')
    related__otherrelated__param = factory.LazyAttribute(
        lambda myobject: 'admin%d_%d' % (myobject.level, myobject.level - 1))
    related = factory.RelatedFactory(YetAnotherFactory)  # Will be called with {'param': object, 'otherrelated__param': 'admin1_2'}
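With the factory written this way, a plain call is enough; the excluded object is forwarded automatically:

thing = MyFactory()  # YetAnotherFactory is invoked with param set to the MyOtherFactory instance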
I solved this by simply calling factories within @factory.post_generation. Strictly speaking this isn't a solution to the specific problem posed, but I explain below in great detail why it ended up being a better architecture. @rhunwick's solution does genuinely pass a SubFactory(LazyAttribute('')) to RelatedFactory, but restrictions remained that made it unsuitable for my situation.
We move the creation of sub_object and object from ProblemFactory to ObjectWithSubObjectsFactory (and remove the exclude clause), and add the following code to the end of ProblemFactory.
@factory.post_generation
def post(self, create, extracted, **kwargs):
    if not create:
        return  # No IDs, so wouldn't work anyway
    object = ObjectWithSubObjectsFactory()
    sub_object_ids_by_code = dict((sbj.name, sbj.id) for sbj in object.subobject_set.all())
    # self is the `Foo` Django object just created by the `ProblemFactory` that contains this code.
    for another_obj in self.anotherobject_set.all():
        if another_obj.name == 'age_in':
            another_obj.attribute_id = sub_object_ids_by_code['Age']
            another_obj.save()
        elif another_obj.name == 'income_in':
            another_obj.attribute_id = sub_object_ids_by_code['Income']
            another_obj.save()
So it seems RelatedFactory calls are executed before PostGeneration calls.
The naming in this question is easier to understand, so here is the same solution code for that sample problem:
The creation of dataset, column_1 and column_2 are moved into a new factory DatasetAnd2ColumnsFactory, and the code below is then added to the end of FunctionToParameterSettingsFactory.
@factory.post_generation
def post(self, create, extracted, **kwargs):
    if not create:
        return
    dataset = DatasetAnd2ColumnsFactory()
    column_ids_by_name = dict((column.name, column.id) for column in dataset.column_set.all())
    # self is the `FunctionInstantiation` Django object just created by the `FunctionToParameterSettingsFactory` that contains this code.
    for parameter_setting in self.parametersetting_set.all():
        if parameter_setting.name == 'age_in':
            parameter_setting.column_id = column_ids_by_name['Age']
            parameter_setting.save()
        elif parameter_setting.name == 'income_in':
            parameter_setting.column_id = column_ids_by_name['Income']
            parameter_setting.save()
I then extended this approach passing in options to configure the factory, like this:
whatever = WhateverFactory(options__an_option=True, options__another_option=True)
Then this factory code detected the options and generated the test data required (note the method is renamed to options to match the prefix on the parameter names):
@factory.post_generation
def options(self, create, not_used, **kwargs):
    # The standard code as above
    if kwargs.get('an_option', None):
        pass  # code for custom option 'an_option'
    if kwargs.get('another_option', None):
        pass  # code for custom option 'another_option'
I then further extended this. Because my desired models contained self joins, my factory is recursive. So for a call such as:
whatever = WhateverFactory(options__an_option='xyz',
                           options__an_option_for_a_nested_whatever='abc')
Within @factory.post_generation I have:
class Meta:
    model = Whatever

# self is the top level object being generated
@factory.post_generation
def options(self, create, not_used, **kwargs):
    # This generates the nested object
    nested_object = WhateverFactory(
        options__an_option=kwargs.get('an_option_for_a_nested_whatever', None))
    # then join nested_object to self via the self join
    self.nested_whatever_id = nested_object.id
Some notes, which you do not need to read, on why I went with this option rather than @rhunwick's proper solution to my question above. There were two reasons.
The thing that stopped me experimenting with it was that the order of RelatedFactory and post-generation execution is not reliable: apparently unrelated factors affect it, presumably a consequence of lazy evaluation. I had errors where a set of factories would suddenly stop working for no apparent reason. Once it was because I renamed the variables the RelatedFactory declarations were assigned to. This sounds ridiculous, but I tested it to death (and posted here); there is no doubt that renaming the variables reliably switched the sequence of RelatedFactory and post-generation execution. I still assumed this was some oversight on my part until it happened again for some other reason (which I never managed to diagnose).
Secondly, I found the declarative code confusing, inflexible and hard to refactor. It isn't straightforward to pass different configurations at instantiation time so that the same factory can be used for different variations of test data, which meant I had to repeat code. Every working object needs adding to a Factory's Meta.exclude list; that sounds trivial, but when you have pages of code generating data it was a reliable source of errors. As a developer you have to pass over several factories several times to understand the control flow. Generation code gets spread through the declarative body until you have exhausted those tricks, and then the rest goes into post-generation or becomes very convoluted. A common example for me is a triad of interdependent models (e.g. a parent-children category structure, or dataset/attributes/entities) used as a foreign key of another triad of interdependent objects (e.g. models and parameter values referring to other models' parameter values). A few of these kinds of structures, especially if nested, quickly become unmanageable.
I realize it isn't really in the spirit of factory_boy, but putting everything into post-generation solved all these problems. I can pass in parameters, so the same single factory serves all my composite-model test data requirements and no code is repeated. The sequence of creation is immediately visible, obvious and completely reliable, rather than depending on confusing chains of inheritance and overriding and being subject to some bug. The interactions are obvious, so you don't need to digest the whole thing to add some functionality, and the different areas of functionality are grouped in the post-generation if clauses. There's no need to exclude working variables, and you can refer to them for the duration of the factory code. The unit test code is simplified, because the description of the functionality goes into parameter names rather than Factory class names: you create data with a call like WhateverFactory(options__create_xyz=True, options__create_abc=True, ...) rather than WhateverCreateXYZCreateABC...(). This makes for a division of responsibilities that is quite clean to code.
Is it possible to use self as a reference in the __init__ method when the object is not instantiated yet?
What I'm trying to do is :
class MyClass(models.Model):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(*args, **kwargs)
        some_attributes = AnotherClass.objects.filter(foreignkey=self)
The thing is that as the instance of MyClass is not registered in the db yet, I get an exception like "MyClass has no attribute id".
I tried to add
if self.pk:
but it doesn't work. Is there a method like
if self.is_saved_in_db():
#some code
or do I have to create it myself?
EDIT
To be more specific, I'll give an example. I have a generic class which I try to hydrate with attributes from another Model.
class MyClass(models.Model):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(*args, **kwargs)
        self.hydrate()

    def hydrate(self):
        # Retrieving the related objects
        attributes = Information.objects.filter(...)
        for attr in attributes:
            attribute_id = attr.name.lower().replace(" ", "_")
            setattr(self, attribute_id, attr)
By doing so, I can access the attributes with MyClass.my_attribute.
For a small example, if we replace MyClass with Recipe and Information with Ingredient, I can do:
pasta_recipe.pasta
pasta_recipe.tomato
pasta_recipe.onions
It's a simple mapping from a foreign key to an attribute.
By writing it out, I realise that it's a bit useless because I can directly use ForeignKey relationships. I think I'll do that, but for my own culture: is it possible to do the filter with self as an argument before the object is saved to the database?
Thanks!
This is a very strange thing to do. I strongly recommend you do not try to do it.
(That said, the self.pk check is the correct one: you need to provide more details than "it doesn't work".)
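A minimal sketch of that guard, using the hydrate() example from the question (my_class is a hypothetical ForeignKey field name on Information):

def hydrate(self):
    # pk is None until the instance has been saved, so no Information
    # row can reference it yet
    if self.pk is None:
        return
    for attr in Information.objects.filter(my_class=self):
        setattr(self, attr.name.lower().replace(" ", "_"), attr)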
Given a form class (somewhere deep in your giant Django app)...
class ContactForm(forms.Form):
    name = ...
    surname = ...
And considering you want to add another field to this form without extending or modifying the form class itself, why doesn't the following approach work?
ContactForm.another_field = forms.CharField(...)
(My first guess is that the metaclass hackery that Django uses applies only the first time the form class is constructed. If so, would there be a way to redeclare the class to overcome this?)
Some pertinent definitions occur in django/forms/forms.py. They are:
class BaseForm
class Form
class DeclarativeFieldsMetaclass
def get_declared_fields
get_declared_fields is called from DeclarativeFieldsMetaclass and constructs a list of the field instances sorted by their creation counter. It then prepends fields from the base classes to this list and returns the result as an OrderedDict instance with the field names serving as the keys. DeclarativeFieldsMetaclass then sticks this value in the attribute base_fields and calls type to construct the class. It then passes the class to the media_property function in widgets.py and attaches the return value to the media attribute on the new class.
media_property returns a property method that reconstructs the media declarations on every access. My feeling is that it won't be relevant here, but I could be wrong.
At any rate, if you are not declaring a Media attribute (and none of the base classes do), then it only returns a fresh Media instance with no arguments to the constructor, and I think that monkeypatching a new field on should be as simple as manually inserting the field into base_fields.
ContactForm.another_field = forms.CharField(...)
ContactForm.base_fields['another_field'] = ContactForm.another_field
Each form instance then gets a deepcopy of base_fields that becomes form_instance.fields in the __init__ method of BaseForm. HTH.
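A quick sanity check that fresh instances pick up the patched field:

form = ContactForm()
assert 'another_field' in form.fields  # each instance deep-copies base_fields
print(form.as_p())  # renders name, surname and the new field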
I want to make attributes out of GAE Model properties. The reason is for cases like turning the value to uppercase before storing it. For a plain Python class, I would do something like:
class Foo(db.Model):
    def get_attr(self):
        return self.something

    def set_attr(self, value):
        self.something = value.upper() if value is not None else None

    attr = property(get_attr, set_attr)
However, the GAE datastore has its own concept of a Property class. I looked into the documentation and it seems that I could override get_value_for_datastore(model_instance) to achieve my goal. Nevertheless, I don't know what model_instance is or how to extract the corresponding field from it.
Is overriding GAE Property classes the right way to provides getter/setter-like functionality? If so, how to do it?
Added:
One potential issue with overriding get_value_for_datastore that I can think of is that it might not get called until the object is put into the datastore. Hence, reading the attribute before storing the object would yield an incorrect value.
Subclassing GAE's Property class is especially helpful if you want more than one "field" with similar behavior, in one or more models. Don't worry, get_value_for_datastore and make_value_from_datastore are going to get called, on any store and fetch respectively -- so if you need to do anything fancy (including but not limited to uppercasing a string, which isn't actually all that fancy;-), overriding these methods in your subclass is just fine.
Edit: let's see some example code (net of imports and main):
class MyStringProperty(db.StringProperty):
    def get_value_for_datastore(self, model_instance):
        vv = db.StringProperty.get_value_for_datastore(self, model_instance)
        return vv.upper()

class MyModel(db.Model):
    foo = MyStringProperty()

class MainHandler(webapp.RequestHandler):
    def get(self):
        my = MyModel(foo='Hello World')
        k = my.put()
        mm = MyModel.get(k)
        s = mm.foo
        self.response.out.write('The secret word is: %r' % s)
This shows you the string's been uppercased in the datastore -- but if you change the get call to a simple mm = my you'll see the in-memory instance wasn't affected.
But, a db.Property instance itself is a descriptor -- wrapping it into a built-in property (a completely different descriptor) will not work well with the datastore (for example, you can't write GQL queries based on field names that aren't really instances of db.Property but instances of property -- those fields are not in the datastore!).
So if you want to work with both the datastore and for instances of Model that have never actually been to the datastore and back, you'll have to choose two names for what's logically "the same" field -- one is the name of the attribute you'll use on in-memory model instances, and that one can be a built-in property; the other one is the name of the attribute that ends up in the datastore, and that one needs to be an instance of a db.Property subclass and it's this second name that you'll need to use in queries. Of course the methods underlying the first name need to read and write the second name, but you can't just "hide" the latter because that's the name that's going to be in the datastore, and so that's the name that will make sense to queries!
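A minimal sketch of that two-name approach (foo_db is an arbitrary choice for the stored name):

from google.appengine.ext import db

class MyModel(db.Model):
    # the name that actually lives in the datastore; use this one in GQL
    foo_db = db.StringProperty()

    def _get_foo(self):
        return self.foo_db

    def _set_foo(self, value):
        # normalization happens in memory, before any put()
        self.foo_db = value.upper() if value is not None else None

    # the comfortable in-memory name: a plain built-in property
    foo = property(_get_foo, _set_foo)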
What you want is a DerivedProperty. The procedure for writing one is outlined in that post - it's similar to what Alex describes, but by overriding get instead of get_value_for_datastore, you avoid issues with needing to write to the datastore to update it. My aetycoon library has it and other useful properties included.