I saw this code segment in the subscription.py module. It presents a selection and a many2one field together to the user. I searched the OpenERP documentation and other modules, but I never found any details or other samples of this.
Here is the code related to that field:
'doc_source': fields.reference('Source Document', required=True, selection=_get_document_types, size=128),
Here is the function that provides the selection part:
def _get_document_types(self, cr, uid, context=None):
    cr.execute('select m.model, s.name from subscription_document s, ir_model m WHERE s.model = m.id order by s.name')
    return cr.fetchall()
I need to know: can we make our own fields.reference-type fields? Can we use another combination instead of MODEL, NAME?
In the OpenERP framework a fields.reference field is a pseudo-many2one relationship that can target multiple models. That is, it contains the name of the target model in addition to the foreign key, so that each value can belong to a different table. The user interface first presents a drop-down where the user selects the target document model, and then a many2one widget in which they can pick the specific document from that model.
You can of course use it in your own modules, but it will always behave in this manner.
This is typically used for attaching various documents (similarly to attachments except the target is another record rather than a file). It's also used in some internal OpenERP models that need to be attached to different types of record, such as properties (fields.property values) that may belong to any record.
The fields.reference constructor takes 3 main parameters:
'doc': fields.reference('Field Label', selection, size)
where selection contains the list of document models from which values can be selected (e.g. Partners, Products, etc.), in the same form as in a fields.selection declaration. The keys of the selection entries must be the model names (e.g. 'res.partner').
As of OpenERP 7.0 the size parameter should be None, unless you want to specifically restrict the size of the database field where the values will be stored, which is probably a bad idea. Technically, fields.reference values are stored as text in the form model.name,id. You won't be able to use these fields in a regular SQL JOIN, so they won't behave like many2one fields in many cases.
Main API calls
When you programmatically read() a non-null reference value you have to split it on ',' to identify the target model and target ID
When you programmatically write() a non-null reference value you need to pass the 'model.name,id' string.
When you search() for a non-null reference value you need to search for the 'model.name,id' string (e.g. in a search domain)
Finally, when you browse() through a reference value programmatically the framework will automatically dereference it and follow the relationship as with a regular many2one field - this is the main exception to the rule ;-)
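As an illustration of the storage format described above, here is a minimal standalone sketch (plain Python, not the OpenERP API; the helper names are my own) of splitting and building the 'model.name,id' strings used by read(), write(), and search():

```python
def split_reference(ref):
    """Split a reference value like 'res.partner,42' into (model, id)."""
    model, res_id = ref.split(',')
    return model, int(res_id)

def make_reference(model, res_id):
    """Build the 'model.name,id' string expected by write() and search()."""
    return '%s,%d' % (model, res_id)
```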
Related
The Q&A here and here appear to indicate that the default sort order in the dropdown generated by the IS_IN_DB validator is determined by the format attribute of the referenced table. But in the following case, the default sort order is the id of the referenced table:
db.define_table('bank',
    Field('bank_code', 'string',
          unique=True, required=True, label='Bank/FI Code'),
    Field('bank_name', 'string',
          required=True, label='Bank/FI Name'),
    singular="Bank", plural="Banks",
    format='%(bank_name)s')

db.bank.bank_code.requires = IS_UPPER()
db.bank.bank_name.requires = IS_UPPER()
db.define_table('bank_branch',
    Field('bank', 'reference bank', label='Bank/FI'),
    Field('branch_name', 'string', required=True, label='Branch Name'),
    format=lambda r: '%s-%s' % (r.bank.bank_code, r.branch_name))
Even though the dropdown displays the labels returned by the lambda function of the bank_branch table, the options are sorted on its id field. It is advised here to use IS_IN_SET for such situations, but what can explain the normal behaviour of sorting by the format attribute changing when the format is a lambda function?
By default, when the IS_IN_DB validator generates the set of values and associated labels, it does not directly sort by the generated labels. Rather, in the database select, it specifies an ORDER BY clause that includes the fields used to generate the label. If the format attribute of the referenced table is a Python format string, the label fields are extracted from that format string in the order they appear. This has the effect of ordering the final set by the labels in that case.
However, if the format attribute of the referenced table is a function, IS_IN_DB does not know which fields are needed to generate the labels, so it simply selects all fields in the table and orders by all fields (in the order they appear in the table definition). In this case, because db.bank_branch.id is the first field in the table definition (though not defined explicitly), that is the first field in the ORDER BY clause, resulting in the options being ordered by the IDs of the db.bank_branch table.
If you want to force the options to be sorted by the generated labels, you can use the sort argument:
IS_IN_DB(db, 'bank_branch.id', db.bank_branch._format, sort=True)
As an aside, keep in mind that if there are many bank branches, this method of generating labels is somewhat inefficient, as the format function includes a recursive select (i.e., r.bank.bank_code), which does a separate select for every item in the list. An alternative would be to generate your own set of values and labels based on a join query and then use the IS_IN_SET validator (or use IS_IN_DB just for the validation, and specify the form widget and its options separately). Of course, at some point, there may be more branches than would be reasonable to include in a select input, in which case, you can use IS_IN_DB to do the validation but should use an alternative input widget (e.g., an Ajax autocomplete).
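To illustrate the two orderings in plain Python (no web2py; the option data is made up):

```python
# Hypothetical (id, label) options, as IS_IN_DB might build them.
branches = [(3, 'AXIS-Main'), (1, 'HDFC-West'), (2, 'AXIS-North')]

# With a lambda format, the ORDER BY falls back to the id column:
by_id = sorted(branches, key=lambda opt: opt[0])

# With sort=True, the options are sorted by the generated labels instead:
by_label = sorted(branches, key=lambda opt: opt[1])
```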
If I drag and drop a Kanban card from one column to another (e.g., from Analysis to In Progress), how can I detect that a card was moved?
On every drag and drop event, set_record() will be called:
set_record: function(record) {
    var self = this;
    this.id = record.id;
    this.values = {};
    _.each(record, function(v, k) {
        self.values[k] = {
            value: v
        };
    });
    this.record = this.transform_record(record);
},
src: https://github.com/odoo/odoo/blob/8.0/addons/web_kanban/static/src/js/kanban.js#L894
Basically, in your case, if you want drag and drop on your kanban cards, you must set a field to group by, such as a state field or any selection or many2one field, using default_group_by="company_id".
default_group_by is mainly used for grouping kanban cards into stages, similar to a GROUP BY operation on a database table.
default_group_by:
whether the kanban view should be grouped if no grouping is specified via the action or the current search. Should be the name of the field to group by when no grouping is otherwise specified.
Potential Problem :
There is however a potential problem. Columns representing groups without any items will not be included. This means users won’t be able to move items to those absent groups, which is probably not what we intended.
Odoo has an answer for this ready - an optional model attribute called _group_by_full.
_group_by_full :
It should be a dictionary, mapping field names (of the fields you use for grouping) to methods returning information about all available groups for those fields.
class Store(models.Model):
    _name = 'store'

    @api.model
    def company_groups(self, present_ids, domain, **kwargs):
        companies = self.env['res.company'].search([]).name_get()
        return companies, None

    _group_by_full = {
        'company_id': company_groups,
    }

    name = fields.Char()
    company_id = fields.Many2one('res.company')
The code above ensures that when displaying store objects grouped by company_id, all available companies will be represented (and not only those already having stores).
Each method referenced in _group_by_full must return a two-element tuple:
First element :
A list of two element tuples, representing individual groups. Every tuple in the list needs to include the particular group’s value (in our example: id of a particular company) and a user friendly name for the group (in our example: company’s name). That’s why we can use the name_get method, since it returns a list of (object id, object name) tuples.
Second element :
A dictionary mapping the groups’ values to a boolean value, indicating whether the group should be folded in Kanban view. Not including a group in this dictionary has the same meaning as mapping it to False.
For example, this version of the company_groups method would make the group representing the company with id 1 folded in the Kanban view:
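A sketch of such a method (the @api.model decorator is omitted so the function stands alone; the folded id is illustrative):

```python
def company_groups(self, present_ids, domain, **kwargs):
    # Return all companies as (id, name) tuples, plus a dict marking
    # the group for the company with id 1 as folded in the Kanban view.
    companies = self.env['res.company'].search([]).name_get()
    folded = {1: True}
    return companies, folded
```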
You can also look at the crm.lead model in the crm module for a better understanding, as it is a good example of grouping kanban records by the stage_id field.
Just refer to the posts below:
https://www.odoo.com/documentation/8.0/reference/views.html#kanban
http://ludwiktrammer.github.io/odoo/odoo-grouping-kanban-view-empty.html
I hope my answer may be helpful to you :)
I have a table defined in web2py
db.define_table(
    'pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True)
)
This table needs the user and uid combination to be unique. I have looked through the web2py documentation, but there isn't a direct way to define a composite key.
How do we define a composite key in web2py?
It depends on what you are trying to do. By default, web2py automatically creates an auto-incrementing id field to serve as the primary key for each table, and that is the recommended approach whenever possible. If you are dealing with a legacy database with composite primary keys and cannot change the schema, you can specify a primarykey attribute, though with some limitations (as explained here):
db.define_table('pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True),
    primarykey=['user', 'uid'])
Perhaps instead you don't really need a true composite primary key, but you just need some way to ensure only unique pairs of user/uid values are inserted in the table. In that case, you can do so by specifying a properly constructed IS_NOT_IN_DB validator for one of the two fields:
db.define_table('pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True))

db.pairing.uid.requires = IS_NOT_IN_DB(
    db(db.pairing.user == request.vars.user), 'pairing.uid')
That will make sure uid is unique among the set of records where user matches the new value of user being inserted (so the combination of user and uid must be unique). Note, validators (such as IS_NOT_IN_DB) are only applied when values are being inserted via a SQLFORM or using the .validate_and_insert() method, so the above won't work for arbitrary inserts into the table but is primarily intended for user input submissions.
You can also use SQL to set a multi-column unique constraint on the table (which you can do directly in the database or via the web2py .executesql() method). Even with such a constraint, though, you would still want to do some input validation within your application to avoid errors from the database.
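As a standalone illustration of such a constraint (using sqlite3 directly here; in web2py you could issue equivalent DDL through db.executesql()):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE pairing (id INTEGER PRIMARY KEY, user TEXT, uid TEXT)')
# Multi-column unique constraint on (user, uid):
conn.execute('CREATE UNIQUE INDEX pairing_user_uid ON pairing (user, uid)')

conn.execute("INSERT INTO pairing (user, uid) VALUES ('alice', 'u1')")
try:
    # Inserting the same (user, uid) pair again violates the constraint.
    conn.execute("INSERT INTO pairing (user, uid) VALUES ('alice', 'u1')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Even with such a constraint, input validation in the application is still needed to turn the database error into a friendly form message.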
I have been using a computed field to create/simulate a composite key. Taking the example from the above question, one can define the junction table as follows:
from md5 import md5

db.define_table('pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True),
    Field('user_uid_md5',
          length=32,
          unique=True,
          writable=False,
          readable=False,
          compute=lambda row: md5("{0}:{1}".format(row.user, row.uid)).hexdigest()))
The user_uid_md5 field is automatically computed on insert and updates. The value of this field is the md5 hash of a string obtained from the two fields user and uid. This field is also marked as unique. So the database enforces uniqueness here and this works around the limitation pointed out by Anthony. This should also work to emulate composite keys with more than two fields. If you see any holes in this approach, please let me know.
Edit: Slight update to the way the md5 hash is computed to account for the case pointed out by Chen Levy in a comment below.
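The separator is what that edit adds: without it, pairs like ('ab', 'c') and ('a', 'bc') would concatenate to the same string and collide. A standalone sketch of the same hash using hashlib (the md5 module is Python 2 only):

```python
import hashlib

def pair_key(user, uid):
    # Hash "user:uid" so distinct pairs cannot collide via concatenation
    # (assuming the values themselves do not contain ':').
    return hashlib.md5("{0}:{1}".format(user, uid).encode()).hexdigest()
```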
Web2py has several methods for calculated fields, but the documentation states that lazy fields "are not visualized by default in tables" because they don't carry attributes like readable. In fact, they don't seem to be available in SQLFORM.grid even if the field is requested. When I include a lazy field in the field list, I get the error:
AttributeError: 'FieldLazy' object has no attribute 'readable'
db.mytable.myfield = Field.Lazy(lambda row: "calc")
Can I put a lazy field into a grid? What is the recommended way to display a grid that includes calculated fields?
Unfortunately, I don't think there is an easy way to display virtual fields in SQLFORM.grid. What you can do is use the "links" argument and add each virtual field as a link (each dict in the "links" list becomes a separate column in the grid):
links=[dict(header='myfield', body=lambda row: row.myfield)]
Note, in this case, you cannot specify the "fields" argument (i.e., you cannot specify only a subset of the fields for inclusion in the grid) -- this is because the virtual field function needs all the fields in order to work. If you need to hide some of the fields, you can instead set their "readable" attribute to False.
Another option might be computed fields.
I created a new property for my db model in the Google App Engine Datastore.
Old:
class Logo(db.Model):
    name = db.StringProperty()
    image = db.BlobProperty()
New:
class Logo(db.Model):
    name = db.StringProperty()
    image = db.BlobProperty()
    is_approved = db.BooleanProperty(default=False)
How can I query for the Logo records which do not have the is_approved value set?
I tried
logos.filter("is_approved = ", None)
but it didn't work.
In the Data Viewer the new field values are displayed as .
According to the App Engine documentation on Queries and Indexes, there is a distinction between entities that have no value for a property, and those that have a null value for it; and "Entities Without a Filtered Property Are Never Returned by a Query." So it is not possible to write a query for these old records.
A useful article is Updating Your Model's Schema, which says that the only currently-supported way to find entities missing some property is to examine all of them. The article has example code showing how to cycle through a large set of entities and update them.
A practice which helps us is to assign a "version" field to every Kind. Initially, this version is set to 1 on every record. When a need like this comes up (populating a new or existing field across a large dataset), the version field allows iterating through all the records with version = 1, setting either null or another initial value on the new field, bumping the version to 2, and storing the record.
The benefit of the "version" field is that the selection process can keep targeting the lower version number over as many sessions, or as much time, as needed until ALL records are updated with the new field's default value.
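The idea can be sketched with plain dicts standing in for datastore entities (the field names come from the question above; a real implementation would fetch batches via a query on version, as in the schema-update article):

```python
# Fake "datastore": entities as dicts, some still at schema version 1.
entities = [
    {'version': 1, 'name': 'old-logo'},
    {'version': 2, 'name': 'new-logo', 'is_approved': False},
]

def migrate(entities):
    # Select only records still at the old version, set the new field's
    # default value, and bump the version so they are not selected again.
    for entity in entities:
        if entity['version'] == 1:
            entity['is_approved'] = False
            entity['version'] = 2
    return entities

migrate(entities)
```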
Maybe this has changed, but I am able to filter records based on null fields.
When I try the GQL query SELECT * FROM Contact WHERE demo=NULL, it returns only records for which the demo field is missing.
According to the doc http://code.google.com/appengine/docs/python/datastore/gqlreference.html:
"The right-hand side of a comparison can be one of the following (as appropriate for the property's data type): [...] a Boolean literal, as TRUE or FALSE; the NULL literal, which represents the null value (None in Python)."
I'm not sure that "null" is the same as "missing", though: in my case, these fields already existed in my model but were not populated on creation. Maybe Federico you could let us know if the NULL query works in your specific case?