How to have Django insert a sequence value into an Oracle table?

How can I instruct Django to call nextval on a sequence for a given model's field?
I realize that I can make a trigger in the DB:
CREATE TRIGGER foo_trg
BEFORE INSERT ON foo FOR EACH ROW
BEGIN
SELECT foo_id_seq.NEXTVAL INTO :new.foo_id FROM dual;
END;
However, I'm curious whether Django can do it via configuration, like MySQL's autoincrement.
I didn't see anything specified in the Django Oracle notes.

Oracle 12c introduced two new features that are comparable to MySQL's autoincrement.
Default values using sequence
CREATE SEQUENCE seq;
CREATE TABLE t (
x number default seq.nextval, /* no need for a trigger to autoincrement */
y varchar2(10)
);
http://docs.oracle.com/database/121/SQLRF/pseudocolumns002.htm#SQLRF50946
Identity columns
CREATE TABLE t (
x number generated by default on null as identity,
y varchar2(10)
);
http://docs.oracle.com/database/121/SQLRF/statements_7002.htm#SQLRF01402
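For the Django side of the question: as far as I know, when Django itself creates the table (via migrate/syncdb) for a model with the default AutoField primary key, the Oracle backend takes care of the sequence and trigger (or, on newer Django/Oracle combinations, an identity column) for you, so no manual NEXTVAL call is needed. A minimal sketch, with an illustrative model name:
from django.db import models

class Foo(models.Model):
    # The implicit "id = models.AutoField(primary_key=True)" is what the
    # Oracle backend backs with a sequence/trigger (or an identity column
    # on newer versions) when it creates the table.
    label = models.CharField(max_length=80)

# foo = Foo.objects.create(label="first")  # foo.id is filled in after the INSERT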

Related

Unable to insert a row in SQL Server table using Python SQLAlchemy (PK not set as IDENTITY) [duplicate]

I have a Python Flask SQLAlchemy app that fetches data from a third-party SQL Server database.
There is a table with two columns into which I need to insert rows:
TABLE [dbo].[TableName](
[Id] [bigint] NOT NULL,
[Desc] [varchar](150) NOT NULL,
CONSTRAINT [PK_Id] PRIMARY KEY CLUSTERED ...
The primary key is not set as IDENTITY
Using SQLAlchemy ORM, if I try to add a new row without an explicit value for Id field, I have this error:
sqlalchemy.exc.IntegrityError: (pyodbc.IntegrityError) ('23000', "[23000] ...
The column does not allow NULL values (translated text)
If I specify an explicit Id value, another error occurs:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] ...*
It is not possible to find the object "dbo.TableName", because it does not exist or you do not have permission (translated text)
This error is followed by the sentence:
[SQL: SET IDENTITY_INSERT dbo.[TableName] ON]
I'm supposing SQLAlchemy is trying to execute this command, but as Id is not set as IDENTITY, there's no need for that.
Using SQL Server Management Studio, with the same user of pyodbc connection, I'm able to insert new records, choosing whatever value for Id.
I would appreciate any hint.
Your INSERT will fail because a value must be defined for the primary key column of a table, either explicitly in your INSERT or implicitly by way of an IDENTITY property.
This requirement is due to the nature of primary keys and cannot be subverted. Further, you are unable to insert a NULL because the table definition explicitly disallows NULLs in that column.
You must provide a value in your INSERT statement explicitly due to the combination of design factors present.
Based on the documentation (https://docs-sqlalchemy.readthedocs.io/ko/latest/dialects/mssql.html#:~:text=The%20SQLAlchemy%20dialect%20will%20detect%20when%20an%20INSERT,OFF%20subsequent%20to%20the%20execution.%20Given%20this%20example%3A), it appears that SQLAlchemy may be assuming that column is an IDENTITY column and is attempting to toggle IDENTITY_INSERT on. As it is not an identity column, it encounters an exception.
In your table metadata, check that you have autoincrement=False set for the Id column.
Edit to add: According to comments on an answer to a related question (Prevent SQLAlchemy from automatically setting IDENTITY_INSERT), it appears that SQLAlchemy assumes all integer-valued primary keys are auto-incrementing identity columns, meaning you need to explicitly override that assumption as described above.
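A minimal sketch of that override, assuming a declarative model for dbo.TableName (column details are guessed from the DDL above, not the OP's real code):
from sqlalchemy import BigInteger, Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class TableName(Base):
    __tablename__ = "TableName"
    __table_args__ = {"schema": "dbo"}

    # autoincrement=False tells SQLAlchemy this integer PK is NOT an
    # IDENTITY column, so it will not wrap the INSERT in SET IDENTITY_INSERT.
    Id = Column(BigInteger, primary_key=True, autoincrement=False)
    Desc = Column("Desc", String(150), nullable=False)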

How to avoid explicit casting NULL during INSERT in Postgresql

I am writing Python scripts to synchronize tables from an MSSQL database to a PostgreSQL DB. The original author tends to use super wide tables with a lot of regional consecutive NULL holes in them.
For insertion speed, I serialize the records in bulk to a string of the following form before execute()
INSERT INTO A( {col_list} )
SELECT * FROM ( VALUES (row_1), (row_2),...) B( {col_list} )
During row serialization, it's not possible to determine the data type of a NULL (None) value in Python. This makes the job complicated. All NULL values in timestamp columns, integer columns etc. need an explicit cast to the proper type, or Postgres complains about it.
Currently I am checking the DB API connection.description property and comparing the column type_code for every column, adding type casts like ::timestamp as needed.
But this feels cumbersome, and it's extra work: the driver already converted the data from text to the proper Python data types, and now I have to redo it for the columns with all those Nones.
Is there any better way to work around this with elegance and simplicity?
If you don't need the SELECT, go with #Nick's answer.
If you need it (like with a CTE to use the input rows multiple times), there are workarounds depending on the details of your use case.
Example, when working with complete rows:
INSERT INTO A -- complete rows
SELECT * FROM (
VALUES ((NULL::A).*), (row_1), (row_2), ...
) B
OFFSET 1;
{col_list} is optional noise in this particular case, since we need to provide complete rows anyway.
Detailed explanation:
Casting NULL type when updating multiple rows
Instead of inserting from a SELECT, you can attach a VALUES clause directly to the INSERT, i.e.:
INSERT INTO A ({col_list})
VALUES (row_1), (row_2), ...
When you insert from a query, Postgres examines the query in isolation when trying to infer the column types, and then tries to coerce them to match the target table (only to find out that it can't).
When you insert directly from a VALUES list, it knows about the target table when performing the type inference, and can then assume that any untyped NULL matches the corresponding column.
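If you are on psycopg2, one way to get that behaviour without building the SQL strings yourself is execute_values(), which expands the rows into a VALUES list attached directly to the INSERT. A hedged sketch with made-up table and column names:
# Sketch only: "conn" is assumed to be an open psycopg2 connection, and the
# table/columns are illustrative. None is sent as an untyped NULL, which
# Postgres coerces to each target column's type because the VALUES list is
# attached directly to the INSERT.
from psycopg2.extras import execute_values

rows = [
    (1, None, "first"),               # NULL timestamp
    (2, "2020-01-01 00:00:00", None), # NULL text
]

with conn.cursor() as cur:
    execute_values(
        cur,
        "INSERT INTO a (id, created_at, label) VALUES %s",
        rows,
    )
conn.commit()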
You could try to create json from data and then rowset from json using json_populate_record(..).
postgres=# create table js_test (id int4, dat timestamp, val text);
CREATE TABLE
postgres=# insert into js_test
postgres-# select (json_populate_record(null::js_test,
postgres(# json_object(array['id', 'dat', 'val'], array['5', null, 'test']))).*;
INSERT 0 1
postgres=# select * from js_test;
id | dat | val
----+-----+------
5 | | test
You can use json_populate_recordset(..) to do the same with multiple rows in one go. You just pass a json value that is a JSON array of objects. Make sure it isn't a PostgreSQL array of json values.
So this is OK: '[{"id":1,"dat":null,"val":6},{"id":3,"val":"tst"}]'::json
This is not: array['{"id":1,"dat":null,"val":6}'::json,'{"id":3,"val":"tst"}'::json]
select *
from json_populate_recordset(null::js_test,
'[{"id":1,"dat":null,"val":6},{"id":3,"val":"tst"}]')

How do I get the “id” after INSERT into a postgres database with Python?

I am doing an INSERT into a postgres DB, which has an auto increment field called id.
How do I get the value of id after doing the insert in Python? There are good references on this site to this for MySQL databases (i.e. using cursor.lastrowid, or connection.insert_id()) but these don't seem to work with my postgres DB.
The id field is using a sequence to auto increment. So the id field is not null, default, nextval('table_id_seq')
Thanks
Please try this:
cursor.execute("INSERT INTO .... RETURNING id")
id_of_new_row = cursor.fetchone()[0]
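A slightly fuller sketch of the same idea with psycopg2 (table and column names are made up):
# RETURNING hands the generated id straight back to the client, so there is
# no race with other sessions inserting at the same time.
cursor.execute(
    "INSERT INTO phonebook (firstname, lastname) VALUES (%s, %s) RETURNING id",
    ("Ada", "Lovelace"),
)
id_of_new_row = cursor.fetchone()[0]
cursor.connection.commit()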

How do I copy Unique constraint in Oracle with SQLAlchemy?

I have a table (on which I have no control) that I must copy. The target schema can be the same as the original one, so all indexes and constraints have to be defined without a name, implicitly.
I'm using Python 3.4.3 with SQLAlchemy 1.0.8 and cx_oracle 5.2.
The table is like this:
CREATE TABLE "MY_TABLE"
( "ITEMID" NUMBER(*,0) NOT NULL ENABLE,
"LABEL" NVARCHAR2(80) NOT NULL ENABLE,
"FIRSTCHILDID" NUMBER(*,0) NOT NULL ENABLE,
"LASTCHILDID" NUMBER(*,0) NOT NULL ENABLE,
"DEFAULTPARENTID" NUMBER(*,0) NOT NULL ENABLE,
"PICTUREID" NUMBER(6,0) NOT NULL ENABLE,
"SECURITYID" NUMBER(*,0) NOT NULL ENABLE,
PRIMARY KEY ("ITEMID")
UNIQUE ("LABEL"));
The code I'm using is at https://gist.github.com/toyg/9fb541ff3dbc8c175329 but the core of it is this (smeta and dmeta are source and target Metadata, bound):
table = Table(table_name, smeta, autoload=True)
target_name = prefix + str(table.name)
target_table = table.tometadata(dmeta, name=target_name)
for constraint in target_table.constraints:
    constraint.name = None
target_table.metadata.create_all(dengine)
It fails with this error:
sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError)
ORA-00955: name is already used by an existing object
[SQL: b'CREATE UNIQUE INDEX sys_c009016 ON "TMP_MY_TABLE" (label)']
This is because SQLAlchemy is trying to create the Unique index after creating the table, when it's already too late: CREATE INDEX requires a name, so SA uses the same name as the existing one, and it fails.
I tried setting the index name to None before creation, to give SA a hint, but that results in errors because it expects a string there at all times.
Is there any way to tell SA to just append the bloody UNIQUE clause to the table DDL right away?
"UNIQUE INDEX" means that the Index construct is used. Its DDL is not emitted within the CREATE TABLE. It sounds like you are looking for a UniqueConstraint construct. It seems likely that in this case, Oracle returns reflected information about what you first created as a UniqueConstraint object as an Index object with unique=True (these constructs are "different", but on many backends they are synonymous and/or mixed and matched and sometimes even mirrored, it's totally confusing).
At the end of the day, if you want the UNIQUE keyword as an inline constraint you need to use the UniqueConstraint object, and you'd need to remove this Index from the table - you might be able to get away with table.indexes.remove(index). The Index object wouldn't be in table.constraints. You probably want to do your "copy" of the table in a more programmatic way rather than using tometadata(). Look perhaps into using the inspection interface directly and just build the Table you want from that.
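A hedged sketch of that suggestion, built on the snippet from the question (smeta, dmeta, dengine and prefix are assumed to be defined as in the gist); it swaps each reflected unique Index for an unnamed inline UniqueConstraint before create_all():
# Sketch only: follows the idea above, not tested against the OP's schema.
from sqlalchemy import Table, UniqueConstraint

table = Table(table_name, smeta, autoload=True)
target_table = table.tometadata(dmeta, name=prefix + str(table.name))

for index in list(target_table.indexes):
    if index.unique:
        # Drop the standalone CREATE UNIQUE INDEX ...
        target_table.indexes.remove(index)
        # ... and emit UNIQUE (...) inline in the CREATE TABLE instead.
        target_table.append_constraint(
            UniqueConstraint(*[c.name for c in index.columns])
        )

for constraint in target_table.constraints:
    constraint.name = None  # let Oracle generate the names

target_table.metadata.create_all(dengine)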

Set SQLAlchemy to use PostgreSQL SERIAL for identity generation

Background:
The application I am currently developing is in transition from SQLite3 to PostgreSQL. All the data has been successfully migrated, using the .dump from the current database, changing all the tables of the type
CREATE TABLE foo (
id INTEGER NOT NULL,
bar INTEGER,
...
PRIMARY KEY (id),
FOREIGN KEY(bar) REFERENCES foobar (id),
...
);
to
CREATE TABLE foo (
id SERIAL NOT NULL,
bar INTEGER,
...
PRIMARY KEY (id),
FOREIGN KEY(bar) REFERENCES foobar (id) DEFERRABLE,
...
);
and SET CONSTRAINTS ALL DEFERRED;.
Since I am using SQLAlchemy I was expecting things to work smoothly from then on, after of course changing the engine. But the problem seems to be with the autoincrement of the primary key to a unique value on INSERT.
The table, say foo, that I am currently having trouble with has 7500+ rows, but the sequence foo_id_seq's current value is set at 5 (because I have tried the inserts five times now, all of which have failed).
Question:
So now my question is: without explicitly supplying the id in the INSERT statement, how can I make Postgres automatically assign a unique value to the id field of foo? Or, more specifically, have the sequence return a unique value for it?
Sugar:
Achieve all that through the SQLAlchemy interface.
Environment details:
Python 2.6
SQLAlchemy 0.8.2
PostgreSQL 9.2
psycopg2 - 2.5.1 (dt dec pq3 ext)
PS: If anybody finds a more appropriate title for this question please edit it.
Your PRIMARY KEY should be defined to use a SEQUENCE as a DEFAULT, either via the SERIAL convenience pseudo-type:
CREATE TABLE blah (
id serial primary key,
...
);
or an explicit SEQUENCE:
CREATE SEQUENCE blah_id_seq;
CREATE TABLE blah (
id integer primary key default nextval('blah_id_seq'),
...
);
ALTER SEQUENCE blah_id_seq OWNED BY blah.id;
This is discussed in the SQLAlchemy documentation.
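For the SQLAlchemy side, a minimal sketch (model and connection details are illustrative): an Integer primary key column is treated as auto-incrementing by default, which the PostgreSQL dialect renders as SERIAL when SQLAlchemy creates the table, and you can also name the sequence explicitly with Sequence:
# Hedged sketch using the pre-1.4 declarative import, to match the
# SQLAlchemy 0.8-era environment in the question.
from sqlalchemy import Column, Integer, Sequence, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    # Rendered as "id SERIAL NOT NULL" plus PRIMARY KEY on PostgreSQL.
    id = Column(Integer, primary_key=True)
    # Or, to control the sequence name explicitly:
    # id = Column(Integer, Sequence("foo_id_seq"), primary_key=True)
    bar = Column(Integer)

engine = create_engine("postgresql+psycopg2://user:password@localhost/dbname")
Base.metadata.create_all(engine)  # only needed when SQLAlchemy creates the schema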
You can add this to an existing table:
CREATE SEQUENCE blah_id_seq OWNED BY blah.id;
ALTER TABLE blah ALTER COLUMN id SET DEFAULT nextval('blah_id_seq');
if you prefer to restore the dump first and then add the sequences manually.
If there's existing data you've loaded directly into the tables with COPY or similar, you need to set the sequence starting point:
SELECT setval('blah_id_seq', max(id)+1) FROM blah;
I'd say the issue is likely to do with your developing in SQLite, then doing a dump and restoring that dump to PostgreSQL. SQLAlchemy expects to create the schema itself with the appropriate defaults and sequences.
What I recommend you do instead is to get SQLAlchemy to create a new, empty database. Dump the data for each table from the SQLite DB to CSV, then COPY that data into the PostgreSQL tables. Finally, update the sequences with setval so they generate the appropriate values.
One way or the other, you will need to make sure that the appropriate sequences are created. You can do it by SERIAL pseudo-column types, or by manual SEQUENCE creation and DEFAULT setting, but you must do it. Otherwise there's no way to assign a generated ID to the table in an efficient, concurrency-safe way.
Use
alter sequence foo_id_seq restart with 7600
should give you 7600 the next time you call the sequence.
http://www.postgresql.org/docs/current/static/sql-altersequence.html
Subsequent calls will continue from there. Just make sure that you restart it with a value greater than the last id.
