I have a table defined in web2py
db.define_table(
    'pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True)
)
This table needs the combination of user and uid to be unique. I have looked through the web2py documentation, but there isn't a direct way to define a composite key.
How do we define a composite key in web2py?
It depends on what you are trying to do. By default, web2py automatically creates an auto-incrementing id field to serve as the primary key for each table, and that is the recommended approach whenever possible. If you are dealing with a legacy database with composite primary keys and cannot change the schema, you can specify a primarykey attribute, though with some limitations (as explained here):
db.define_table('pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True),
    primarykey=['user', 'uid'])
Perhaps instead you don't really need a true composite primary key, but you just need some way to ensure only unique pairs of user/uid values are inserted in the table. In that case, you can do so by specifying a properly constructed IS_NOT_IN_DB validator for one of the two fields:
db.define_table('pairing',
Field('user', writable=True, readable=True),
Field('uid', writable=True, readable=True))
db.pairing.uid.requires = IS_NOT_IN_DB(db(db.pairing.user == request.vars.user),
                                       'pairing.uid')
That will make sure uid is unique among the set of records where user matches the new value of user being inserted (so the combination of user and uid must be unique). Note, validators (such as IS_NOT_IN_DB) are only applied when values are being inserted via a SQLFORM or using the .validate_and_insert() method, so the above won't work for arbitrary inserts into the table but is primarily intended for user input submissions.
You can also use SQL to set a multi-column unique constraint on the table (which you can do directly in the database or via the web2py .executesql() method). Even with such a constraint, though, you would still want to do some input validation within your application to avoid errors from the database.
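Outside web2py, the multi-column unique constraint itself is ordinary SQL. A minimal sketch using the standard-library sqlite3 module (table and column names mirror the example above; other databases behave similarly) shows the database rejecting a duplicate pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pairing (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user TEXT,
        uid TEXT,
        UNIQUE (user, uid)  -- multi-column unique constraint
    )
""")
conn.execute("INSERT INTO pairing (user, uid) VALUES ('alice', 'u1')")
conn.execute("INSERT INTO pairing (user, uid) VALUES ('alice', 'u2')")  # ok: different pair
try:
    conn.execute("INSERT INTO pairing (user, uid) VALUES ('alice', 'u1')")  # duplicate pair
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

This is why the constraint is best treated as a backstop: the application still validates first, and the database catches anything that slips through.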
I have been using a computed field to create/simulate a composite key. Taking the example from the above question, one can define the junction table as follows:
from hashlib import md5  # the standalone "md5" module is Python 2-only

db.define_table('pairing',
    Field('user', writable=True, readable=True),
    Field('uid', writable=True, readable=True),
    Field('user_uid_md5',
          length=32,
          unique=True,
          writable=False,
          readable=False,
          compute=lambda row: md5("{0}:{1}".format(row.user, row.uid).encode()).hexdigest()))
The user_uid_md5 field is automatically computed on inserts and updates. Its value is the md5 hash of a string built from the two fields user and uid. Because the field is marked unique, the database enforces uniqueness here, which works around the limitation pointed out by Anthony. This should also work to emulate composite keys with more than two fields. If you see any holes in this approach, please let me know.
Edit: Slight update to the way the md5 hash is computed to account for the case pointed out by Chen Levy in a comment below.
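The ':' separator in the computed string matters because plain concatenation is ambiguous across field boundaries. A quick illustration using hashlib (the modern replacement for the Python 2 md5 module); the key helper exists only for this demo:

```python
from hashlib import md5

def key(user, uid, sep=""):
    """Build the composite-key hash for a (user, uid) pair."""
    return md5("{0}{1}{2}".format(user, sep, uid).encode()).hexdigest()

# Without a separator, ('a', 'bc') and ('ab', 'c') both hash "abc" and collide:
assert key("a", "bc") == key("ab", "c")

# With the ':' separator, the two pairs produce distinct keys:
assert key("a", "bc", sep=":") != key("ab", "c", sep=":")
```

(The scheme can still be fooled if the field values themselves may contain ':'; escaping the separator would close that hole.)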
Related
I am trying to update new password after reset to cassandra db. This is the query I have written where both username and password fields are dynamic. Is this right?
def update_db(uname, pwd):
    query = session.prepare('update user.userdetails set "password"=%s where "username" = ? ALLOW FILTERING', pwd)
    session.execute(query, (uname,))

update_db(username, new_pwd)
I am calling this through an API. But it doesn't seem to update.
Alex is absolutely correct in that you need to provide the complete PRIMARY KEY for any write operation. Remove ALLOW FILTERING and your query should work as long as your primary key definition is: PRIMARY KEY (username).
Additionally, it's best practice to parameterize your entire prepared statement, instead of relying on string formatting for password.
query = session.prepare('update user.userdetails set "password"=? where "username"=?')
session.execute(query, [pwd, uname])
Note: If at any point you find yourself needing the ALLOW FILTERING directive, you're doing it wrong.
To update a record you need to provide its primary key(s) completely. It will not work with ALLOW FILTERING: you first need to fetch all the primary keys that you want to update, and then issue individual update commands. See the documentation for a more detailed description of the UPDATE command.
If you really want to specify a default value for some column, why not simply handle it in application code with something like .get('column', 'default-value')?
I am using web2py (python) with sqlite3 database (test flowers database :) ). Here is the declaration of the table:
db.define_table('flower',
    Field('code', type='string', length=4, required=True, unique=True),
    Field('name', type='string', length=100, required=True),
    Field('description', type='string', length=250, required=False),
    Field('price', type='float', required=True),
    Field('photo', 'upload'));
Which translates into correct SQL in sql.log:
CREATE TABLE flower(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    code CHAR(4),
    name CHAR(200),
    description CHAR(250),
    price CHAR(5),
    photo CHAR(512)
);
But when I insert a value of "code" field that's greater than 4 chars, it still inserts. I tried setting to CHAR(10) (simple test, I guess) with the same result.
>>> db.flower.insert(code="123456789999", name="flower2", description="test flower 2", price="5.00")
1L
The same problem applies to every field where I set the length. I also tried validation (although I am not 100% sure I am using it correctly). This is also within the flower model flowers.py, where the table is defined, and it follows the table declaration:
db.flower.code.requires = [ IS_NOT_EMPTY(), IS_LENGTH(4), IS_NOT_IN_DB(db, 'flower.code')]
Documentation on this is here, but I can't find anything about SQLite3 or web2py limiting the length of the string. I would expect to see an error on insert.
I would appreciate some help on this. What did I miss in the documentation? I used Symfony2 with PHP and MySQL before and would expect similar behaviour here.
SQLite is not like other databases. For all (most) practical purposes columns are untyped and INSERTs will always succeed and not lose data or precision (meaning, you can INSERT a text value into a REAL field if you want).
The declared type of the column is used for a system called "type affinity", which is described here: https://www.sqlite.org/datatype3.html.
Once you get used to it, it's kind of fun -- but definitely not what you'd expect!
You have to perform length checking in your code before issuing the INSERT.
As already mentioned, SQLite does not enforce character field length declarations (see https://www.sqlite.org/faq.html#q9). Furthermore, the IS_LENGTH validator is only applied if you do the insert via a SQLFORM submission or via the .validate_and_insert method -- if you just use the .insert method, the validators stored in the requires attribute are not applied, so you will get no error.
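If you want the database itself to reject over-long values despite SQLite's type affinity, one option (separate from anything web2py generates) is a CHECK constraint on the column. A sketch with the standard-library sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE flower (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        code TEXT CHECK (length(code) <= 4),  -- enforced, unlike CHAR(4)
        name TEXT
    )
""")
conn.execute("INSERT INTO flower (code, name) VALUES ('1234', 'ok')")
try:
    conn.execute("INSERT INTO flower (code, name) VALUES ('123456789999', 'too long')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The CHECK constraint fires on every insert and update, regardless of whether the write goes through web2py's validators.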
I am working on a database software for a user with a need to store generic "objects" and associated information. (With a web interface, to abstract the SQL)
First, what I mean by "object", in this situation, is a "thing" that can be defined by an arbitrary number of input fields, for example:
A "Customer" object might be defined by
Name
Phone Number
Email
Company
etc...
These objects are to be designed by the User. Therefore, I need a way to define an arbitrary object in SQL.
The solution I have come up with so far is something along the lines of this:
CREATE TABLE object_name
(
    p_key SERIAL PRIMARY KEY,
    name varchar UNIQUE NOT NULL
);
CREATE TABLE field_type
(
    p_key SERIAL PRIMARY KEY,
    type varchar UNIQUE NOT NULL
);
CREATE TABLE field_name
(
    p_key SERIAL PRIMARY KEY,
    name varchar UNIQUE NOT NULL
);
CREATE TABLE object
(
    object_name_id integer REFERENCES object_name (p_key),
    field_type_id integer REFERENCES field_type (p_key),
    field_name_id integer REFERENCES field_name (p_key),
    PRIMARY KEY (object_name_id, field_type_id, field_name_id)
);
The basic idea is to use a junction table to define the amount/type of fields being used to define the object. (This is just an extremely basic example)
I have a few problems that I am running into with this solution.
The user wants the field of an object to reference another object. For example, in the "Customer" example I showed earlier, the "Company" field would likely be a selection from a list of "Company" objects. This causes an issue, because a situation can be imagined where "Company" would also have a field referencing "Customer". Meaning, there would be circular referencing... which object would be created first?
Is using SQL for defining the objects even a reasonable approach? Or would defining an object be better suited to XML files?
Assuming the objects can be arbitrarily defined, what would be the best way of storing instances of the arbitrary objects?
Any breadcrumbs would be helpful. Sidenote: I was thinking about using Django and postgresql as the framework.
Thank you.
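For the third question (storing instances of arbitrarily defined objects), one common sketch is the entity-attribute-value pattern: an instance table plus a value table keyed by (instance, field). A minimal illustration with the standard-library sqlite3 module; all table and column names beyond those in the question are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE object_name (p_key INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
    CREATE TABLE field_name  (p_key INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);

    -- one row per concrete instance of a user-defined object
    CREATE TABLE object_instance (
        p_key INTEGER PRIMARY KEY,
        object_name_id INTEGER REFERENCES object_name (p_key)
    );

    -- one row per (instance, field) value, stored as text
    CREATE TABLE field_value (
        instance_id INTEGER REFERENCES object_instance (p_key),
        field_name_id INTEGER REFERENCES field_name (p_key),
        value TEXT,
        PRIMARY KEY (instance_id, field_name_id)
    );
""")
conn.execute("INSERT INTO object_name (p_key, name) VALUES (1, 'Customer')")
conn.execute("INSERT INTO field_name (p_key, name) VALUES (1, 'Name')")
conn.execute("INSERT INTO object_instance (p_key, object_name_id) VALUES (1, 1)")
conn.execute("INSERT INTO field_value VALUES (1, 1, 'Alice')")
row = conn.execute("""
    SELECT o.name, f.name, v.value
    FROM field_value v
    JOIN object_instance i ON v.instance_id = i.p_key
    JOIN object_name o ON i.object_name_id = o.p_key
    JOIN field_name f ON v.field_name_id = f.p_key
""").fetchone()
print(row)  # ('Customer', 'Name', 'Alice')
```

A reference-typed field could then store the target instance's p_key in the value column, which also sidesteps the circular-reference problem: both instances can be created first and the references filled in afterwards.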
I saw this code segment in the subscription.py class. It gives selection and many2one fields together for users. I searched the OpenERP documentation and other modules as well, but I never found any details or other samples for this.
Here is its view.
Here is the code related to that field:
'doc_source': fields.reference('Source Document', required=True, selection=_get_document_types, size=128),
Here is the function code for the selection part:
def _get_document_types(self, cr, uid, context=None):
    cr.execute('select m.model, s.name from subscription_document s, ir_model m WHERE s.model = m.id order by s.name')
    return cr.fetchall()
I need to know: can we make our own fields.reference-type fields?
Can we use another combination instead of model, name?
In the OpenERP framework a fields.reference field is a pseudo-many2one relationship that can target multiple models. That is, it contains the name of the target model in addition to the foreign key, so that each value can belong to a different table. The user interface first presents a drop-down where the user selects the target document model, and then a many2one widget in which they can pick the specific document from that model.
You can of course use it in your own modules, but it will always behave in this manner.
This is typically used for attaching various documents (similarly to attachments except the target is another record rather than a file). It's also used in some internal OpenERP models that need to be attached to different types of record, such as properties (fields.property values) that may belong to any record.
The fields.reference constructor takes 3 main parameters:
'doc': fields.reference('Field Label', selection, size)
where selection contains the list of document models from which values can be selected (e.g. Partners, Products, etc.), in the same form as in a fields.selection declaration. The keys of the selection values must be the model names (e.g. 'res.partner').
As of OpenERP 7.0 the size parameter should be None, unless you want to specifically restrict the size of the database field where the values will be stored, which is probably a bad idea. Technically, fields.reference values are stored as text in the form model.name,id. You won't be able to use these fields in a regular SQL JOIN, so they won't behave like many2one fields in many cases.
Main API calls
When you programmatically read() a non-null reference value, you have to split it on ',' to identify the target model and target ID.
When you programmatically write() a non-null reference value, you need to pass the 'model.name,id' string.
When you search() for a non-null reference value, you need to search for the 'model.name,id' string (e.g. in a search domain).
Finally, when you browse() through a reference value programmatically the framework will automatically dereference it and follow the relationship as with a regular many2one field - this is the main exception to the rule ;-)
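The read/write/search conventions above are plain string handling. A small sketch (the 'model.name,id' storage format is as described; the helper names are invented for illustration):

```python
def parse_reference(value):
    """Split a fields.reference value like 'res.partner,42' into (model, id)."""
    model, res_id = value.split(",", 1)
    return model, int(res_id)

def make_reference(model, res_id):
    """Build the string form expected when writing a reference value."""
    return "{0},{1}".format(model, res_id)

assert parse_reference("res.partner,42") == ("res.partner", 42)
assert make_reference("res.partner", 42) == "res.partner,42"
```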
I created a new property for my db model in the Google App Engine Datastore.
Old:
class Logo(db.Model):
name = db.StringProperty()
image = db.BlobProperty()
New:
class Logo(db.Model):
name = db.StringProperty()
image = db.BlobProperty()
is_approved = db.BooleanProperty(default=False)
How do I query for the Logo records which do not have the 'is_approved' value set?
I tried
logos.filter("is_approved = ", None)
but it didn't work.
In the Data Viewer the new field values are displayed as .
According to the App Engine documentation on Queries and Indexes, there is a distinction between entities that have no value for a property, and those that have a null value for it; and "Entities Without a Filtered Property Are Never Returned by a Query." So it is not possible to write a query for these old records.
A useful article is Updating Your Model's Schema, which says that the only currently-supported way to find entities missing some property is to examine all of them. The article has example code showing how to cycle through a large set of entities and update them.
A practice which helps us is to assign a "version" field on every Kind. This version is initially set to 1 on every record. If a need like this comes up (to populate a new or existing field in a large dataset), the version field allows iterating through all the records with version = 1, setting a null or other initial value on the new field, bumping the version to 2, and storing the record, thereby populating the field with a default value.
The benefit to the "version" field is that the selection process can continue to select against that lower version number (initially set to 1) over as many sessions or as much time is needed until ALL records are updated with the new field default value.
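The version-field migration loop can be sketched in plain Python, with dicts standing in for datastore entities and a list comprehension standing in for the version query (the real code would use a datastore query and batched puts; everything here is illustrative):

```python
def migrate(entities, batch_size=2):
    """Upgrade all version-1 entities to version 2, populating the new field."""
    while True:
        # stand-in for: query entities WHERE version = 1, LIMIT batch_size
        batch = [e for e in entities if e["version"] == 1][:batch_size]
        if not batch:
            break  # every record has been upgraded
        for e in batch:
            e.setdefault("is_approved", False)  # populate the new field
            e["version"] = 2                    # bump so it is not selected again
        # stand-in for: put the batch back to the datastore

entities = [{"version": 1}, {"version": 1}, {"version": 1}]
migrate(entities)
assert all(e["version"] == 2 and e["is_approved"] is False for e in entities)
```

Because progress is tracked in the records themselves, the loop can be stopped and resumed across as many sessions as the dataset requires.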
Maybe this has changed, but I am able to filter records based on null fields.
When I try the GQL query SELECT * FROM Contact WHERE demo=NULL, it returns only records for which the demo field is missing.
According to the doc http://code.google.com/appengine/docs/python/datastore/gqlreference.html:
The right-hand side of a comparison can be one of the following (as appropriate for the property's data type): [...] a Boolean literal, as TRUE or FALSE; the NULL literal, which represents the null value (None in Python).
I'm not sure that "null" is the same as "missing", though: in my case, these fields already existed in my model but were not populated on creation. Maybe Federico you could let us know if the NULL query works in your specific case?