I am trying to write some data to a table in a database which I am creating.
However, I am facing an integrity error like:
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) PRIMARY KEY must be unique
My question is how to avoid these errors, as I will run the script a couple of times.
Basically you are creating an object with a primary key that already exists, which SQLite does not accept. Verify it by querying the db with something like
select * from airport where id = 6256
If the query returns a result, you need to change the id of the airport you are saving. Since the column is autoincremented, you don't need to specify an id at all; the DBMS will assign the next free id in that table.
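For example, with SQLAlchemy you would simply leave the id out when constructing the object. A minimal sketch, assuming an Airport model whose id column is an autoincrementing integer primary key (the model, field names and session here are placeholders, not your actual code):
airport = Airport(name='Example Airport')  # note: no id=... passed in
session.add(airport)
session.commit()   # SQLite assigns the next free id during the flush
print(airport.id)  # now populated by the database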
Related
I am trying to build a composite primary key for my tables. They should also have a self-incrementing id. My problem is that when I use a composite primary key, the ID becomes NULL (as seen in the pictures):
here it works as it should, but there is no composite key;
here the id is NULL no matter what.
I tried different syntaxes and also keywords like NOT NULL and AUTOINCREMENT, but nothing seems to work.
Here is the code without composite key
mystr = "CREATE TABLE IF NOT EXISTS KM%s(id INTEGER PRIMARY KEY, date TEXT, client INTEGER)"%(month.replace('-',"))
print(mystr)
c.execute(mystr) #create a table
conn.commit()
Here is the code with COMPOSITE KEY
mystr = "CREATE TABLE IF NOT EXISTS KM%s(id INTEGER, date TEXT, client INTEGER, primary key (id, client)"%(month.replace('-',"))
print(mystr)
c.execute(mystr) #create a table
conn.commit()
I was sure that I'd used autoincremented integer columns in the past which were not primary keys, but it certainly doesn't work today with SQLite.
I must echo what @forpas has already said in the comments: you just can't do that.
The solution would be to add the UNIQUE constraint to id and generate your ID programmatically as you go. You do not need to track your current maximum ID because you can simply ask SQLite what the max is:
SELECT MAX(id) FROM KM<month>;
Increment that value by 1 and include it in your INSERT INTO statement.
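A minimal sketch of that approach with the sqlite3 module, assuming the generated table name came out as KM0523 and that id only carries a UNIQUE constraint (both assumptions, adjust to your schema):
cur = conn.cursor()
cur.execute("SELECT COALESCE(MAX(id), 0) FROM KM0523")  # 0 if the table is still empty
next_id = cur.fetchone()[0] + 1
cur.execute("INSERT INTO KM0523 (id, date, client) VALUES (?, ?, ?)",
            (next_id, '2023-05-12', 42))
conn.commit()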
I'd like to offer a couple of tips:
Using two plain integers as your composite key is a bad idea if you ever combine them into a single value. Take the combined value 1315 for example: is that client 315 with an ID of 1, client 15 with an ID of 13, or client 5 with an ID of 131? Keys you search on do not always have to be globally unique, but gluing bare integers together like this generally does not work well.
The second tip is not to create a new database table for each month. A very good rule is that identically structured tables should be combined into a single table. In this case you would keep the existing date column and filter by month when querying, so everything stays in one table rather than one table per month, as sketched below.
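A rough sketch of that single-table layout, assuming dates are stored as ISO yyyy-mm-dd strings (the table name KM and the sample month are placeholders):
c.execute("CREATE TABLE IF NOT EXISTS KM(id INTEGER PRIMARY KEY, date TEXT, client INTEGER)")
# later: select one month's worth of rows instead of one month's table
c.execute("SELECT * FROM KM WHERE strftime('%Y-%m', date) = ?", ('2023-05',))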
I am creating a database from different CSV files. After doing this, I have tried to define the primary key table by table, but I got an error:
c.execute("ALTER TABLE patient_data ADD PRIMARY KEY (ID);").fetchall()
OperationalError: near "PRIMARY": syntax error
Maybe the best way to avoid this error is to define the primary key when the table is created, but I don't know how to do that. I have been working with Python for a few years, but this is my first approach to SQL.
This is the code I use to import a CSV into a table:
c.execute('''DROP TABLE IF EXISTS patient_data''')
c.execute(''' CREATE TABLE patient_data (ID, NHS_Number,Full_Name,Gender, Birthdate, Ethnicity, Postcode)''')
patients_admitted.to_sql('patient_data', conn, if_exists='append', index = False)
c.execute('''SELECT * FROM patient_data''').fetchall()
This is too long for a comment.
If your table does not have data, just re-create it with the primary key definition.
If your table does have data, you cannot add a new primary key column in one statement. Why not? The value filled in for the existing rows is either NULL or a constant default, and neither is allowed as a primary key, which must be non-null and unique.
And finally, SQLite does not allow you to add a primary key to an existing table. The solution is to copy the data to another table, recreate the table with the structure you want, and then copy the data back in.
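Sketching both cases against the question's patient_data table (the untyped columns are kept as in the original; this is only an illustration, not tested against your data):
# empty table: declare the key when you create it, before to_sql appends the CSV rows
c.execute('''CREATE TABLE IF NOT EXISTS patient_data
             (ID INTEGER PRIMARY KEY, NHS_Number, Full_Name, Gender, Birthdate, Ethnicity, Postcode)''')
# table with data: copy out, recreate with the key, copy back
c.execute("ALTER TABLE patient_data RENAME TO patient_data_old")
c.execute('''CREATE TABLE patient_data
             (ID INTEGER PRIMARY KEY, NHS_Number, Full_Name, Gender, Birthdate, Ethnicity, Postcode)''')
c.execute("INSERT INTO patient_data SELECT * FROM patient_data_old")
c.execute("DROP TABLE patient_data_old")
conn.commit()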
I have a project built in Django and it uses a Postgres database.
This database was populated from CSV files. So when I want to insert a new object I get a "duplicate key" error, because an object with id = 1 already exists.
The code:
user = User(name= "Foo")
user.save()
The table users has the PK on the id.
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
If I get the table's details in psql I got:
 Column |  Type   |                     Modifiers
--------+---------+----------------------------------------------------
 id     | integer | not null default nextval('users_id_seq'::regclass)
Additionally, if I inspect user.__dict__ after creating the user variable and before saving it, I get 'id': None.
How can I save the user with an id that is not being used?
You most likely inserted your Users from the CSV with the id value set explicitly. When this happens the Postgres sequence is not updated, so when you try to add a new user the sequence generates a value that is already in use.
Check this other question for reference: postgres autoincrement not updated on explicit id inserts.
The solution is what the answer to that question says: update your sequence manually.
You can fix it by setting users_id_seq manually.
SELECT setval('users_id_seq', (SELECT MAX(id) from "users"));
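If you would rather run that from the project instead of psql, here is a minimal sketch using Django's connection (the sequence and table names are taken from the psql output above):
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT setval('users_id_seq', (SELECT MAX(id) FROM users))")
Django's manage.py sqlsequencereset <app_label> command prints equivalent statements for every table in an app, which you can then run against the database.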
Unless you have name as a primary key for the table, the above insert should work. If you do have name as the primary key, remove it and try again.
In PostgreSQL you can specify id as serial and mark it as the primary key. Then whenever you insert a record, it will get the next value in the sequence,
i.e. id serial NOT NULL and
CONSTRAINT primkey PRIMARY KEY (id).
As you said, the table is pre-populated from CSV, so when you insert from Python code the new row will automatically go at the end of the table and there will be no duplicate values.
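Putting those two fragments together, a minimal sketch with psycopg2 (the connection parameters and the extra name column are assumptions):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
cur = conn.cursor()
cur.execute("""CREATE TABLE users (
                   id serial NOT NULL,
                   name text,
                   CONSTRAINT primkey PRIMARY KEY (id))""")
conn.commit()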
I'm working with sqlite3 on Python 2.7 and I am facing a problem with a many-to-many relationship. I have a table from which I am fetching its primary key like this:
current.execute("SELECT ExtensionID FROM tblExtensionLookup where ExtensionName = ?",[ext])
and then I am fetching another primary key from another table:
current.execute("SELECT HostID FROM tblHostLookup where HostName = ?",[host])
Now I have a third table with these two keys as foreign keys, and I insert into it like this:
current.execute("INSERT INTO tblExtensionHistory VALUES(?,?)",[Hid,Eid])
The problem is that the last insertion is not working; it keeps giving errors. Here is what I have tried:
First I thought it was because the mapping table has an autoincrement primary id which I didn't provide, but isn't it supposed to fill itself in since it's auto-incremented? I went ahead and tried passing Null, None and 0 anyway, but nothing works.
Secondly I thought maybe I'm not actually getting the values from the tables above, so I tried printing them out; they show up, so that part works.
Any suggestions as to what I am doing wrong here?
EDIT :
When I don't provide the primary key I get this error:
The table has three columns but you provided only two values
and when I do provide it as None, Null or 0 it says:
Parameter 0 is not supported probably because of unsupported type
I tried implementing it the way @abarnet suggested, but it still keeps saying parameter 0 is not supported:
import sqlite3

connection = sqlite3.connect('WebInfrastructureScan.db')
with connection:
    current = connection.cursor()
    current.execute("SELECT ExtensionID FROM tblExtensionLookup where ExtensionName = ?",[ext])
    Eid = current.fetchone()
    print Eid
    current.execute("SELECT HostID FROM tblHostLookup where HostName = ?",[host])
    Hid = current.fetchone()
    print Hid
    current.execute("INSERT INTO tblExtensionHistory(HostID,ExtensionID) VALUES(?,?)",[Hid,Eid])
EDIT 2 :
The database schema is :
table 1:
CREATE TABLE tblHostLookup (
HostID INTEGER PRIMARY KEY AUTOINCREMENT,
HostName TEXT);
table2:
CREATE TABLE tblExtensionLookup (
ExtensionID INTEGER PRIMARY KEY AUTOINCREMENT,
ExtensionName TEXT);
table3:
CREATE TABLE tblExtensionHistory (
ExtensionHistoryID INTEGER PRIMARY KEY AUTOINCREMENT,
HostID INTEGER,
ExtensionID INTEGER,
FOREIGN KEY(HostID) REFERENCES tblHostLookup(HostID),
FOREIGN KEY(ExtensionID) REFERENCES tblExtensionLookup(ExtensionID));
It's hard to be sure without full details, but I think I can guess the problem.
If you use the INSERT statement without column names, the values must exactly match the columns as given in the schema. You can't skip over any of them.*
The right way to fix this is to just use the column names in your INSERT statement. Something like:
current.execute("INSERT INTO tblExtensionHistory (HostID, ExtensionID) VALUES (?,?)",
[Hid, Eid])
Now you can skip any columns you want (as long as they're autoincrement, nullable, or otherwise skippable, of course), or provide them in any order you want.
For your second problem, you're trying to pass in rows as if they were single values. You can't do that. From your code:
Eid = current.fetchone()
This will return a whole row, something like:
(3,)
And then you try to bind that to the ExtensionID column, which gives you an error.
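A minimal sketch of the fix, keeping the question's variable names: unpack the single column from each fetched row before binding it (fetchone() can also return None when there is no match, which real code would need to handle):
Eid = current.fetchone()[0]   # e.g. 3 instead of (3,)
current.execute("SELECT HostID FROM tblHostLookup where HostName = ?", [host])
Hid = current.fetchone()[0]
current.execute("INSERT INTO tblExtensionHistory (HostID, ExtensionID) VALUES (?,?)",
                [Hid, Eid])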
In the future, you may want to write and debug the SQL statements in the sqlite3 command-line tool and/or your favorite GUI database manager (there's a simple extension for Firefox if you don't want anything fancy) and get them right, before you try getting the Python right.
* This is not true with all databases. For example, in MSJET/Access, you must skip over autoincrement columns. See the SQLite documentation for how SQLite interprets INSERT with no column names, or similar documentation for other databases.
I am trying to add an 'id' primary key column to an already existing MySQL table using alembic. I tried the following...
op.add_column('mytable', sa.Column('id', sa.Integer(), nullable=False))
op.alter_column('mytable', 'id', autoincrement=True, existing_type=sa.Integer(), existing_server_default=False, existing_nullable=False)
but got the following error
sqlalchemy.exc.OperationalError: (OperationalError) (1075, 'Incorrect table definition; there can be only one auto column and it must be defined as a key') 'ALTER TABLE mytable CHANGE id id INTEGER NOT NULL AUTO_INCREMENT' ()
It looks like the SQL statement generated by alembic did not add PRIMARY KEY at the end of the ALTER statement. Could I have missed some settings?
Thanks in advance!
I spent some time digging through the alembic source code, and this doesn't seem to be supported. You can specify primary keys when creating a table, but not when adding columns. In fact, it specifically checks and won't let you (link to source):
# from alembic.operations.toimpl.add_column, line 132
for constraint in t.constraints:
    if not isinstance(constraint, sa_schema.PrimaryKeyConstraint):
        operations.impl.add_constraint(constraint)
I looked around, and adding a primary key to an existing table may result in unspecified behavior - primary keys aren't supposed to be null, so your engine may or may not create primary keys for existing rows. See this SO discussion for more info: Insert auto increment primary key to existing table
I'd just run the alter query directly, and create primary keys if you need to.
op.execute("ALTER TABLE mytable ADD id INT PRIMARY KEY AUTO_INCREMENT;")
If you really need cross-engine compatibility, the big hammer would be to (1) create a new table identical to the old one with a primary key, (2) migrate all your data, (3) delete the old table and (4) rename the new table, roughly as sketched below.
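A rough sketch of that copy-and-swap approach as an alembic upgrade step (the table and column names are placeholders, not your schema):
def upgrade():
    op.create_table(
        'mytable_new',
        sa.Column('id', sa.Integer(), primary_key=True, autoincrement=True),
        sa.Column('data', sa.String(length=255)),
    )
    op.execute("INSERT INTO mytable_new (data) SELECT data FROM mytable")
    op.drop_table('mytable')
    op.rename_table('mytable_new', 'mytable')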
Hope that helps.
You have to remove the primary key that is in the table and then create a new one that includes all columns that you want as the primary key.
e.g. in psql use \d <table name> to show the schema, then check the primary key constraint:
Indexes:
"enrollments_pkey" PRIMARY KEY, btree (se_crs_id, se_std_id)
Then use this information in alembic:
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('enrollments', sa.Column(
        'se_semester', sa.String(length=30), nullable=False))
    op.drop_constraint('enrollments_pkey', 'enrollments', type_='primary')
    op.create_primary_key('enrollments_pkey', 'enrollments', [
        'se_std_id', 'se_crs_id', 'se_semester'])
After running \d enrollments again, the output should now show:
Indexes:
"enrollments_pkey" PRIMARY KEY, btree (se_std_id, se_crs_id, se_semester)
This solution worked fine for me.