I'm trying to add a new entry using the Django admin panel.
The problem is that I've already populated my DB with 200 records, and if I try to add a new entry from the admin I get a duplicate key error whose key value keeps increasing each time I retry the process.
error:
duplicate key value violates unique constraint "app_entry_pkey"
admin.py:
from django.contrib import admin
from .models import Entry

admin.site.register(Entry)
model:
class Entry(models.Model):
    title = models.CharField(max_length=255)
    url = models.TextField(max_length=255)
    img = models.CharField(max_length=255)

    def __unicode__(self):
        return self.title
If you created the database table using Django, then most likely your auto_increment value was not updated when you imported the data outside of Django.
It may also be that when you imported the data you did not give the 200 records each their own unique primary key. I think that (some versions of) SQLite will sometimes allow that in mass imports.
MySQL
For example, I’m looking at a MySQL table in Sequel Pro and see that it has an “auto_increment” value of 144. This means that the next primary key value will be 144.
You can see this value for your table (in MySQL) using:
SHOW TABLE STATUS FROM databaseName WHERE Name = 'app_entry';
Replace "databaseName" with the name of your Django database; note that Django names tables <app>_<model>, so your entry model's table is probably app_entry. Other database software will likely have different syntax.
You can set the next auto_increment value (in MySQL) using:
ALTER TABLE databaseName.app_entry AUTO_INCREMENT = ###;
Again, replace databaseName with the name of your database; as before, the syntax may vary depending on the database software you're using.
If this doesn’t help, you may find it useful to show the table’s status and copy that into your question. This might also be useful in tracking down the issue:
SHOW CREATE TABLE databaseName.app_entry;
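If you would rather do this from Django itself, here is a minimal sketch (assuming MySQL and Django's default table name app_entry; adjust both to your schema) that bumps AUTO_INCREMENT past the highest existing id:
from django.db import connection

# One-off fix: set AUTO_INCREMENT to MAX(id) + 1.
# "app_entry" is an assumption based on Django's <app>_<model> naming.
with connection.cursor() as cursor:
    cursor.execute("SELECT MAX(id) FROM app_entry")
    max_id = cursor.fetchone()[0] or 0
    # DDL statements cannot take query parameters; max_id is an int we just read.
    cursor.execute("ALTER TABLE app_entry AUTO_INCREMENT = %d" % (max_id + 1))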
Postgres
In Postgres, the auto-increment counter is a separate object called a sequence. For a Django model it is usually named <table>_id_seq, e.g. app_entry_id_seq here; note that the app_entry_pkey in your error message is the primary-key constraint, not the sequence. You can get the sequence's current value using something like:
SELECT last_value FROM app_entry_id_seq;
And you will likely set it to a new value with something like:
ALTER SEQUENCE app_entry_id_seq RESTART WITH ###;
or
SELECT setval('app_entry_id_seq', ###);
Note, though, that I do not have a Postgres database handy to test these on. You may also find the following commands useful:
SELECT MAX(id) FROM app_entry;
SELECT nextval('app_entry_id_seq');
The latter should generally be larger than the former. Note that "id" is the name of the primary-key column in your "entry" model's table; it may be different in your table, and the sequence name changes with it. See http://www.postgresql.org/docs/8.1/static/functions-sequence.html for more information.
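You can also set the sequence directly from the data in one statement (assuming the table is not empty), or let Django generate the appropriate reset SQL with its sqlsequencereset management command (replace yourappname with your app's label):
SELECT setval('app_entry_id_seq', (SELECT MAX(id) FROM app_entry));
./manage.py sqlsequencereset yourappname | ./manage.py dbshell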
Related
I'm rebuilding a personal project (a family tree) in Django, and I'm working on migrating the actual data from the old, awkward database to my new model/schema; both are Postgres databases. I've defined them in the DATABASES setting in settings.py as 'default' and 'source'.
I've made my models and I'd like to copy the records from the old database into their corresponding table in the new database, but I'm not quite understanding how to set up the models for it to work, since the Django code uses the models/ORM to access/update/create objects, and I only have models reflecting the new schema, not the old ones.
In a coincidental case where I have a table with the exact same schema in the old and new database, I have a management command that can grab the old records from the source using my new ImagePerson model (ported_data = ImagePerson.objects.using('source').all()), since the expected fields are the same. Then I save objects for them in the 'default' database with (obj, created_bool) = ImagePerson.objects.using('default').get_or_create(field=fieldvalue, etc), and it works just like I need it to.
However, when I have a table where the old version is missing fields that my new model/table has, I can't use the new model to access those records (which makes sense). Am I supposed to also make some kind of legacy version of each model for use in the migration? I saw a tutorial mention running ./manage.py inspectdb --database=source > models.py, but doing so didn't seem to add anything to my file (and it would seem weird to keep temporary/legacy models in there anyway). What's the right way to access the old-formatted records? Is the ORM the right tool?
To give a specific example, I have a Notes table to hold a memory about a specific person or about a specific family. The old table used a 'type' field (1 was for person note, 2 was for family note), and a ref_id that would be the id for the person or family the note applies to. The new table instead has a person_id field and a family_id field.
I'd like my management command to be able to pull all the records from the source table, then, if type=1, look up the person whose id equals the ref_id field and save a new object in the new database pointing at that person. I can grab the rows using the new Note model against the old database like this: ported_notes = Note.objects.using('source').all(), but then if I try to access any field (like print(note_row.body)), I get an error that the result object is missing the person_id column:
django.db.utils.ProgrammingError: column notes.person_id does not exist
What's the right way to approach this?
Creating models for your old schema definitely doesn't seem like the right approach.
One solution would be to write a data migration, in which you use raw SQL to fetch your old data and then use the ORM to write it to your new tables/models:
from django.db import migrations, connections


def transfer_data(apps, schema_editor):
    ModelForNewDB = apps.get_model('yourappname', 'ModelForNewDB')

    # Fetch your old data
    with connections['my_old_db'].cursor() as cursor:
        cursor.execute('select * from some_table')
        data = cursor.fetchall()

    # Write it to your new models
    for datum in data:
        # do something with the data / add any
        # additional values needed.
        ModelForNewDB.objects.create(...)


class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(transfer_data),
    ]
Then simply run your migrations. One thing to note, however: if you have ForeignKeys between tables, you will need to be careful about the order of your migrations. This can be done by editing the dependencies. You may even have to add a migration that allows null values for some foreign keys, and then add another one afterwards to correct them.
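Applied to the Notes example from the question, the migration function might look like the sketch below. Treat it as a sketch only: the model name Note, the old table and column names (notes, type, ref_id, body), and the assumption that person/family ids were carried over unchanged into the new database all need to be checked against your actual schema:
from django.db import migrations, connections


def transfer_notes(apps, schema_editor):
    Note = apps.get_model('yourappname', 'Note')

    # Read the old rows with raw SQL, since no model matches the old schema.
    with connections['source'].cursor() as cursor:
        cursor.execute('SELECT type, ref_id, body FROM notes')
        rows = cursor.fetchall()

    for note_type, ref_id, body in rows:
        if note_type == 1:
            # 1 = person note in the old schema; assumes person ids are
            # identical in both databases.
            Note.objects.create(person_id=ref_id, body=body)
        else:
            # 2 = family note
            Note.objects.create(family_id=ref_id, body=body)


class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(transfer_notes),
    ]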
I have been working on an offline version of my Django web app and have frequently deleted model instances for a certain ModelX.
I have done this from the admin page and have experienced no issues. The model only has two fields, name and order, and no relationships to other models.
New instances are given the next available pk which makes sense, and when I have deleted all instances, adding a new instance yields a pk=1, which I expect.
Moving the code online to my actual database, I noticed that this is not the case. I needed to change the model instances, so I deleted them all, but to my surprise the primary keys kept on incrementing without resetting back to 1.
Going into the database using the Django API, I have checked that the old instances are gone; but adding new instances yields a primary key that picks up where the last deleted instance left off, instead of 1.
Wondering if anyone knows what might be the issue here.
I wouldn't call it an issue. This is the default behaviour for many database systems. Basically, the auto-increment counter for a table is persistent, and deleting entries does not affect it. The actual value of the primary key does not affect performance or anything; it only has aesthetic value (if you ever reach the roughly 2-billion limit of a 32-bit integer key, you'll most likely have other problems to worry about).
If you really want to reset the counter, you can drop the app's tables and let Django recreate them:
python manage.py sqlclear <app_name> | python manage.py dbshell
(sqlclear only prints the DROP statements, so run syncdb or migrate afterwards to recreate the tables.)
Or, if you need to keep the data from other tables in the app, you can manually reset the counter:
python manage.py dbshell
mysql> ALTER TABLE <table_name> AUTO_INCREMENT = 1;
The most probable reason you see different behaviour in your offline and online apps is how the counter is persisted. In MySQL's InnoDB (before version 8.0), the auto-increment value is only stored in memory, not on disk; it is recalculated as MAX(<column>) + 1 each time the database server is restarted, so if the table is empty the counter is completely reset on a restart. Restarts are probably very frequent in your offline environment and close to none in your online environment. (If your offline database is SQLite, the effect is similar: Django's default integer primary key maps onto SQLite's rowid, which is assigned as MAX(rowid) + 1, so emptying the table makes the next id 1 again.)
As others have stated, this is entirely the responsibility of the database.
But you should realize that this is desirable behaviour. An ID uniquely identifies an entity in your database, so it should only ever refer to one row. If that row is subsequently deleted, there's no reason to want a new row to re-use that ID: doing so would create confusion between the now-deleted entity that used to have the ID and the newly-created one that reused it.
Did you actually drop them from your database or did you delete them using Django? Django won't change AUTO_INCREMENT for your table just by deleting rows from it, so if you want to reset your primary keys, you might have to go into your db and:
ALTER TABLE <my-table> AUTO_INCREMENT = 1;
(This assumes you're using MySQL or similar).
There is no issue; that's the way databases work. Django doesn't have anything to do with generating ids: it just tells the database to insert a row and gets the id back in response. The id starts at 1 for each table and increments every time you insert a row. Deleting rows doesn't cause the id to go back. You usually shouldn't be concerned with that; all you need to know is that each row has a unique id.
You can of course change the counter that generates the id for your table with a database command and that depends on the specific database system you're using.
If you are using SQLite you can reset the primary key with the following shell commands:
DELETE FROM your_table;
DELETE FROM sqlite_sequence WHERE name='your_table';
Another solution for Postgres databases is to do it from a GUI admin tool: select your table, look for the 'Sequences' dropdown in its settings, and adjust the sequence's value there.
I'm not sure when this was added, but the following will delete all data from all tables and reset the auto-increment counters to 1 (sqlflush prints the SQL, which is then executed by psql):
./manage.py sqlflush | psql DATABASE_NAME
Can we do a loosely coupled data access layer design in Python?
Let's say, in a scenario, I have an Oracle table with a column named ACTIVITY_ID whose datatype is Number(10). If this column is a foreign key in many tables, then to hold this column's data, can I create something like an ACTID class (like a Java object) that can be used across the code whenever I want to manipulate/hold ACTIVITY_ID column data, so that I can maintain consistency of business object columns? Is there any such possibility in Python?
Try Django
As I understand it, Python does not natively have any database functionality. There are many different libraries/frameworks that can be used to provide database functionality to Python; I recommend taking a look at Django. With Django, you create a class for each database table, and Django hides a LOT of the details, including allowing the use of multiple database engines such as MySQL and PostgreSQL. Django handles foreign-key relationships very well. Every table normally has a primary key, by default an auto-incremented id field. If you add a field like activity = models.ForeignKey(Activity), then you now have a foreign key column activity_id in one table referencing the primary key column id in the Activity table. The admin page will take care of cascading deletion of records if you delete an Activity record, and in general things "just work" the way you might expect them to.
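A minimal sketch of what that looks like (the Task model and its field names are illustrative assumptions, not from the question; on_delete is required in modern Django):
from django.db import models


class Activity(models.Model):
    name = models.CharField(max_length=100)


class Task(models.Model):
    # Creates an activity_id column referencing Activity's primary key "id".
    activity = models.ForeignKey(Activity, on_delete=models.CASCADE)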
I have little to no experience with databases, and I'm wondering how I would go about storing certain parts of an object.
Let's say I have an object like the following, and steps can be of arbitrary length. How would I store these steps, or the list of steps, in an SQL database?
class Error:
    name = ""   # name of error
    steps = []  # steps to take to attempt to solve error
For your example you would create a table called Errors with metadata about the error, such as an error_id as the primary key, a name, a date created, etc. Then you'd create another table called Steps with its own id, let's say step_id, and any fields related to the step. The important part is that you'd add a field on the Steps table that relates back to the Error the steps belong to (again call that field error_id), and make that field a foreign key so the database enforces the constraint.
If you want to store your Python objects in a database (or any other language objects in a database) the place to start is a good ORM (Object-Relational Mapper). For example Django has a built-in ORM. This link has a comparison of some Python Object-Relational mappers.
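To make the two suggestions concrete, here is a rough Django ORM version of the Errors/Steps design described above (all model and field names are illustrative assumptions):
from django.db import models


class Error(models.Model):
    name = models.CharField(max_length=255)
    created = models.DateTimeField(auto_now_add=True)


class Step(models.Model):
    # Foreign key back to the owning Error; the database enforces it.
    error = models.ForeignKey(Error, on_delete=models.CASCADE, related_name='steps')
    order = models.PositiveIntegerField()
    description = models.TextField()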
My tables are specified as models in Python, which are then made into MySQL tables by Django. In Python:
class my_table(models.Model):
    is_correct = models.IntegerField(default=0)
If I do an insert into this table then I want it to automatically insert 0 for the column is_correct unless I specify a different value. I also want this to happen if I am using raw sql or if I am inserting from a MySQL stored procedure.
The default argument only seems to work from within Python. It's not translated into anything that MySQL sees. Is such a thing possible?
Yes, the default argument works at the Django level only. If you want the default enforced in MySQL itself, run an ALTER TABLE ... query against the table manually, for example:
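A hedged version of that query (assuming the app is named myapp, so Django's default table name is myapp_my_table):
ALTER TABLE myapp_my_table ALTER COLUMN is_correct SET DEFAULT 0;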
You can also try to extend models.IntegerField and override db_type, but that would have to be done for every field...