Auto_increment custom Primary Key in Peewee model - python

I want the primary key id field to be a BIGINT:
class Tweets(Model):
    id = BigIntegerField(primary_key=True)
    ...
But it needs to be auto-incremented, and I can't find a way to do that in the Peewee docs.
Please suggest whether it's possible.
Update: I'm using a MySQL database.

Peewee automatically generates an integer id column that serves as the primary key and has the auto_increment property. This is true for any table you create with Peewee.
It is very likely that IntegerField is enough for your needs; BigIntegerField is rarely useful. Will you really need numbers bigger than 2147483647? Will you insert more than two billion rows?
See: http://dev.mysql.com/doc/refman/5.5/en/integer-types.html
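A quick sketch of the default behaviour described above; an in-memory SQLite database is used here purely for illustration (MySQL gets AUTO_INCREMENT the same way):

from peewee import Model, SqliteDatabase

db = SqliteDatabase(':memory:')

class Note(Model):
    class Meta:
        database = db

db.create_tables([Note])
# Peewee added an implicit auto-incrementing primary key column named "id".
print(Note._meta.primary_key.name)  # -> 'id'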

Peewee, as of 3.1, includes a BigAutoField, an auto-incrementing integer field that uses 64-bit integer storage. It should do the trick:
http://docs.peewee-orm.com/en/latest/peewee/api.html#BigAutoField
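A minimal sketch of how that might look for the model in the question, assuming Peewee 3.1+; the connection parameters are placeholders:

from peewee import Model, BigAutoField, MySQLDatabase

# Placeholder credentials for illustration.
db = MySQLDatabase('my_db', user='user', password='secret')

class Tweets(Model):
    # 64-bit auto-incrementing primary key (BIGINT AUTO_INCREMENT in MySQL).
    id = BigAutoField()

    class Meta:
        database = db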

I think the most convenient approach is to use SQL constraints:
import peewee

class MyModel(peewee.Model):
    id = peewee.BigIntegerField(primary_key=True, unique=True,
                                constraints=[peewee.SQL('AUTO_INCREMENT')])

Looks like this should help.
After creating the database object (and before creating tables), do:
db.register_fields({'primary_key': 'BIGINT AUTOINCREMENT'})
After that, when you write

class Tweets(Model):
    id = PrimaryKeyField()
    ...

    class Meta:
        database = db

the field will appear in MySQL as a BIGINT with auto-increment.

Related

How to set the start of auto increment in flask-sqlalchemy [duplicate]

The autoincrement argument in SQLAlchemy seems to accept only True and False, but I want to set a pre-defined starting value, aid = 1001, so that autoincrement yields aid = 1002 on the next insert.
In SQL, this can be changed like so:
ALTER TABLE article AUTO_INCREMENT = 1001;
I'm using MySQL and I have tried the following, but it doesn't work:
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Article(Base):
    __tablename__ = 'article'
    aid = Column(INTEGER(unsigned=True, zerofill=True),
                 autoincrement=1001, primary_key=True)
So, how can I get that? Thanks in advance!
You can achieve this by using DDLEvents. This will allow you to run additional SQL statements just after CREATE TABLE runs. Look at the examples in the link, but I am guessing your code will look similar to this:
from sqlalchemy import event
from sqlalchemy import DDL

event.listen(
    Article.__table__,
    "after_create",
    DDL("ALTER TABLE %(table)s AUTO_INCREMENT = 1001;")
)
According to the docs:
autoincrement –
This flag may be set to False to indicate an integer primary key column that should not be considered to be the “autoincrement” column, that is the integer primary key column which generates values implicitly upon INSERT and whose value is usually returned via the DBAPI cursor.lastrowid attribute. It defaults to True to satisfy the common use case of a table with a single integer primary key column.
So, autoincrement is only a flag to let SQLAlchemy know whether it's the primary key you want to increment.
What you're trying to do is to create a custom autoincrement sequence.
So, your example, I think, should look something like:
from sqlalchemy import Column
from sqlalchemy.dialects.mysql import INTEGER
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import Sequence

Base = declarative_base()

class Article(Base):
    __tablename__ = 'article'
    aid = Column(INTEGER(unsigned=True, zerofill=True),
                 Sequence('article_aid_seq', start=1001, increment=1),
                 primary_key=True)
Note, I don't know whether you're using PostgreSQL or not, so if you are, you should make note of the following:
The Sequence object also implements special functionality to accommodate Postgresql’s SERIAL datatype. The SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if a Table object defines a Sequence on its primary key column so that it works with Oracle and Firebird, the Sequence would get in the way of the “implicit” sequence that PG would normally use. For this use case, add the flag optional=True to the Sequence object - this indicates that the Sequence should only be used if the database provides no other option for generating primary key identifiers.
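To make that concrete, a minimal sketch of the same column with the optional=True flag added (same names as the example above):

from sqlalchemy import Column, Integer, Sequence

# optional=True: use the explicit sequence only on backends (Oracle, Firebird)
# that need it; let PostgreSQL's implicit SERIAL sequence take over otherwise.
aid = Column(Integer,
             Sequence('article_aid_seq', start=1001, optional=True),
             primary_key=True)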
I couldn't get the other answers to work with MySQL and flask-migrate, so I did the following inside a migration file.
from app import db
db.engine.execute("ALTER TABLE myDB.myTable AUTO_INCREMENT = 2000;")
Be warned that if you regenerate your migration files, this will get overwritten.
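For context, a sketch of where such a statement might live in an Alembic-style migration file as generated by Flask-Migrate; op is Alembic's operations proxy, used here as an alternative to calling db.engine.execute directly, and the revision identifiers are omitted:

from alembic import op

def upgrade():
    # Runs once when the migration is applied.
    op.execute("ALTER TABLE myTable AUTO_INCREMENT = 2000;")

def downgrade():
    # There is no clean way to restore the previous counter; left as a no-op.
    pass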
I know this is an old question, but I recently had to figure this out and none of the available answers were quite what I needed. The solution I found relies on Sequence in SQLAlchemy. For whatever reason, I could not get it to work when I called the Sequence constructor within the Column constructor as referenced above. As a note, I am using PostgreSQL.
For your example I would write it as follows:
import os

from sqlalchemy import Column, Integer, Sequence, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

def connection():
    # Credentials are read from environment variables.
    engine = create_engine(
        f"postgresql://postgres:{os.getenv('PGPASSWORD')}@localhost:{os.getenv('PGPORT')}/test"
    )
    return engine

engine = connection()

class Article(Base):
    __tablename__ = 'article'
    seq = Sequence('article_aid_seq', start=1001)
    aid = Column('aid', Integer, seq, server_default=seq.next_value(), primary_key=True)

Base.metadata.create_all(engine)
This then can be called in PostgreSQL with:
insert into article (aid) values (DEFAULT);
select * from article;
aid
------
1001
(1 row)
Hope this helps someone; it took me a while.
You can do it using the mysql_auto_increment table create option. There are mysql_engine and mysql_default_charset options too, which might also be handy:
from sqlalchemy import Table, Column, MetaData
from sqlalchemy.dialects.mysql import INTEGER

metadata = MetaData()

article = Table(
    'article', metadata,
    Column('aid', INTEGER(unsigned=True, zerofill=True), primary_key=True),
    mysql_engine='InnoDB',
    mysql_default_charset='utf8',
    mysql_auto_increment='1001',
)
The above will generate:
CREATE TABLE article (
    aid INTEGER UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (aid)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8
If your database supports Identity columns*, the starting value can be set like this:
import sqlalchemy as sa

tbl = sa.Table(
    't10494033',
    sa.MetaData(),
    sa.Column('id', sa.Integer, sa.Identity(start=200, always=True), primary_key=True),
)
Resulting in this DDL output:
CREATE TABLE t10494033 (
    id INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 200),
    PRIMARY KEY (id)
)
Identity(..) is ignored if the backend does not support it.
* PostgreSQL 10+, Oracle 12+ and MSSQL, according to the linked documentation above.

Django Migration Error with MySQL: BLOB/TEXT column 'id' used in key specification without a key length

We have a Django model that uses a BinaryField for the ID.

# Create your models here.
class Company(models.Model):
    id = models.BinaryField(max_length=16, primary_key=True)
    name = models.CharField(max_length=12)

    class Meta:
        db_table = "company"

We use a MySQL database and get an error when migrating.
File "/home/cuongtran/Downloads/sample/venv/lib/python3.5/site-packages/MySQLdb/connections.py", line 270, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1170, "BLOB/TEXT column 'id' used in key specification without a key length")
Do you have any solution? We need to use MySQL and want to use a BinaryField for the ID.
Thank you!
I think you cannot achieve this. Based on the Django documentation, it looks like the use of binary fields is discouraged:
A field to store raw binary data. It only supports bytes assignment.
Be aware that this field has limited functionality. For example, it is
not possible to filter a queryset on a BinaryField value. It is also
not possible to include a BinaryField in a ModelForm.
Abusing BinaryField
Although you might think about storing files in the database, consider
that it is bad design in 99% of the cases. This field is not a
replacement for proper static files handling.
And based on a Django bug, it is most likely impossible to achieve a unique-value restriction on a binary field. The bug is marked as wontfix. I say most likely impossible because I did not find evidence confirming that a binary field is stored as a BLOB column, but the error does allude to it.
Description
When I used a field like this:
text = models.TextField(maxlength=2048, unique=True)
it results in the following sql error when the admin app goes to make the table
_mysql_exceptions.OperationalError: (1170, "BLOB/TEXT column 'text' used in key specification without a key length")
After a bit of investigation, it turns out that mysql refuses to use unique with the column unless it is only for an indexed part of the text field:
CREATE TABLE `quotes` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `text` longtext NOT NULL, `submitTS` datetime NOT NULL, `submitIP` char(15) NOT NULL, `approved` bool NOT NULL, unique (text(1000)));
Of course 1000 is just an arbitrary number I chose, it happens to be the maximum my database would allow. Not entirely sure how this can be fixed, but I figured it was worth mentioning.
MySQL restricts a primary key on a BLOB/TEXT column to the first N characters. When you generate a migration file with Django's makemigrations command, BinaryField is mapped to longblob, a BLOB column in MySQL, with no key length specified.
This means your Django model definition:
class Company(models.Model):
    id = models.BinaryField(max_length=16, primary_key=True)
    name = models.CharField(max_length=12)

    class Meta:
        db_table = "company"
will be converted to a SQL expression that causes this error (you can check the detailed SQL with the sqlmigrate command):
CREATE TABLE `company` (`id` longblob NOT NULL PRIMARY KEY,
`name` varchar(12) NOT NULL);
while the correct SQL expression for MySQL should look like this:
CREATE TABLE `company` (`id` longblob NOT NULL,
`name` varchar(12) NOT NULL);
ALTER TABLE `company` ADD PRIMARY KEY (id(16));
where PRIMARY KEY (id(16)) takes the key length from your id field and is used to build the table's primary key index.
So the easiest solution is the one described in the accepted answer: avoid using BinaryField as a primary key in Django. If you really need a BinaryField (BLOB column) as the primary key and are sure the id will NOT exceed the specified size (in your case, 16 bytes), you can manually add raw SQL to your migration file.
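If you take the raw-SQL route, here is a minimal sketch of a hand-edited migration using Django's migrations.RunSQL; the app label and dependency below are hypothetical:

from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical previous migration
    ]

    operations = [
        migrations.RunSQL(
            "ALTER TABLE `company` ADD PRIMARY KEY (id(16));",
            reverse_sql="ALTER TABLE `company` DROP PRIMARY KEY;",
        ),
    ]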

Django - Raw query must include the primary key

There is a similar question here - Raw query must include the primary key
However I'm running off of a legacy DB and therefore can't figure out what the issue is with the Primary Key.
This is my raw query:
trg = Trgjob.objects.db_manager('AdmiralDEV').raw("""
    SELECT jobdep_id, jm.jobmst_id, jobdep_type, (jm1.jobmst_prntname + '\' + jm1.jobmst_name) AS jobdep_jobmst,
           jobdep_operator, jobdep_status, jobdep_joblogic, jobdep_ingroup, jobdep_dateoffset, jobdep_instoffset,
           jobdep_canignore, jobdep_filename, jobdep_filetype, jobdep_fileextent, nodmst_id, varmst_id, jobdep_value
    FROM Jobdep jd
    INNER JOIN Jobmst jm ON jd.jobmst_id = jm.jobmst_id
    INNER JOIN Jobmst jm1 ON jd.jobdep_jobmst = jm1.jobmst_id
    WHERE jm.jobmst_id = 9878""")
Run directly against the database this works fine, but in Django I get the following failure:
Raw query must include the primary key
The primary key on this model is jobdep_id, as seen in models.py:

class Jobdep(models.Model):
    jobdep_id = models.IntegerField(primary_key=True)
Try writing the query as:
"SELECT jobdep_id AS id ..."
Maybe that helps.
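Applied to the question's query, that suggestion would look like this (abridged, untested sketch):

trg = Trgjob.objects.db_manager('AdmiralDEV').raw("""
    SELECT jobdep_id AS id, jm.jobmst_id, jobdep_type
    FROM Jobdep jd
    INNER JOIN Jobmst jm ON jd.jobmst_id = jm.jobmst_id
    WHERE jm.jobmst_id = 9878""")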
If you use Manager.raw() you are required to include the primary key (https://docs.djangoproject.com/en/2.2/topics/db/sql/#performing-raw-sql-queries):
There is only one field that you can’t leave out - the primary key
field. Django uses the primary key to identify model instances, so it
must always be included in a raw query. An InvalidQuery exception will
be raised if you forget to include the primary key.
But you can execute custom SQL directly to avoid this; see the Django documentation:
https://docs.djangoproject.com/en/2.2/topics/db/sql/#executing-custom-sql-directly
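A minimal sketch of that approach, reusing the connection alias and table names from the question:

from django.db import connections

# Bypass Manager.raw() and fetch plain tuples instead of model instances.
with connections['AdmiralDEV'].cursor() as cursor:
    cursor.execute(
        "SELECT jobdep_id, jobdep_type FROM Jobdep WHERE jobmst_id = %s",
        [9878],
    )
    rows = cursor.fetchall()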
The issue was indeed my models.py; I had to update it as follows:

class Jobdep(models.Model):
    jobdep_id = models.IntegerField(db_column='jobdep_id', primary_key=True)

Is there a standard way to store a database schema outside a Python app

I am working on a small database application in Python (currently targeting 2.5 and 2.6) using sqlite3.
It would be helpful to be able to provide a series of functions that could set up the database and validate that it matches the current schema. Before I reinvent the wheel, I thought I'd look around for libraries that provide something similar. I'd love to have something akin to RoR's migrations. xml2ddl doesn't appear to be meant as a library (although it could be used that way), and more importantly it doesn't support sqlite3. I'm also worried about the need to move to Python 3 one day, given the lack of recent attention to xml2ddl.
Are there other tools around that people are using to handle this?
You can find the schema of a sqlite3 table this way:
import sqlite3

db = sqlite3.connect(':memory:')
c = db.cursor()
c.execute('create table foo (bar integer, baz timestamp)')
c.execute("select sql from sqlite_master where type = 'table' and name = 'foo'")
r = c.fetchone()
print(r)
# (u'CREATE TABLE foo (bar integer, baz timestamp)',)
Take a look at SQLAlchemy Migrate. I see no problem using it as a migration tool only, but comparing a configuration to the current database state is still experimental.
I use this to keep schemas in sync.
Keep in mind that it adds a metadata table to keep track of the versions.
South is the closest thing I know to RoR migrations. But just as you need Rails for those migrations, you need Django to use South.
Not sure if it is standard, but I just saved all my schema queries in a txt file like so (tables_creation.txt):
CREATE TABLE "Jobs" (
"Salary" TEXT,
"NumEmployees" TEXT,
"Location" TEXT,
"Description" TEXT,
"AppSubmitted" INTEGER,
"JobID" INTEGER NOT NULL UNIQUE,
PRIMARY KEY("JobID")
);
CREATE TABLE "Questions" (
"Question" TEXT NOT NULL,
"QuestionID" INTEGER NOT NULL UNIQUE,
PRIMARY KEY("QuestionID" AUTOINCREMENT)
);
CREATE TABLE "FreeResponseQuestions" (
"Answer" TEXT,
"FreeResponseQuestionID" INTEGER NOT NULL UNIQUE,
PRIMARY KEY("FreeResponseQuestionID"),
FOREIGN KEY("FreeResponseQuestionID") REFERENCES "Questions"("QuestionID")
);
...
Then I used this function, taking advantage of the fact that each query is delimited by two newline characters:
def create_db_schema(self):
    # The schema file delimits each statement with a blank line.
    with open("./tables_creation.txt", "r") as db_schema:
        sql_qs = db_schema.read().split('\n\n')
    c = self.conn.cursor()
    for sql_q in sql_qs:
        c.execute(sql_q)

Can you achieve a case insensitive 'unique' constraint in Sqlite3 (with Django)?

So let's say I'm using Python 2.5's built-in default sqlite3 and I have a Django model class with the following code:
class SomeEntity(models.Model):
    some_field = models.CharField(max_length=50, db_index=True, unique=True)
I've got the admin interface set up and everything appears to be working fine, except that I can create two SomeEntity records, one with some_field='some value' and one with some_field='Some Value', because the unique constraint on some_field appears to be case sensitive.
Is there some way to force sqlite to perform a case insensitive comparison when checking for uniqueness?
I can't seem to find an option for this in Django's docs and I'm wondering if there's something that I can do directly to sqlite to get it to behave the way I want. :-)
Yes, this can easily be done by adding a unique index to the table with the following command:
CREATE UNIQUE INDEX uidxName ON mytable (myfield COLLATE NOCASE)
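If you prefer to issue that from Python, here is a sketch using the sqlite3 module; the table name myapp_someentity is a guess at Django's default table for the model above, and the database path is illustrative:

import sqlite3

con = sqlite3.connect('db.sqlite3')
# Same statement as above, with COLLATE NOCASE making the uniqueness
# check case-insensitive for ASCII letters.
con.execute('CREATE UNIQUE INDEX uidxName '
            'ON myapp_someentity (some_field COLLATE NOCASE)')
con.commit()
con.close()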
If you need case insensitivity for non-ASCII letters, you will need to register your own collation, with commands similar to the following.
The following example shows a custom collation that sorts “the wrong way”:
import sqlite3

def collate_reverse(string1, string2):
    return -cmp(string1, string2)

con = sqlite3.connect(":memory:")
con.create_collation("reverse", collate_reverse)
cur = con.cursor()
cur.execute("create table test(x)")
cur.executemany("insert into test(x) values (?)", [("a",), ("b",)])
cur.execute("select x from test order by x collate reverse")
for row in cur:
    print row
con.close()
Additional Python documentation for sqlite3 is shown here.
Perhaps you can create and use a custom model field: it would be a subclass of CharField, but providing a db_type method that returns "text collate nocase".
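A minimal sketch of that idea, assuming SQLite as the backend; the class name is illustrative:

from django.db import models

class CaseInsensitiveCharField(models.CharField):
    def db_type(self, connection):
        # Make SQLite compare and enforce uniqueness case-insensitively.
        return 'text collate nocase'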
For anyone in 2021: with Django 4.0's UniqueConstraint expressions, you can add a Meta class to your model like this:
from django.db.models.functions import Lower

class Meta:
    constraints = [
        models.UniqueConstraint(
            Lower('<field name>'),
            name='<constraint name>',
        ),
    ]
