My tables are specified as models in Python, which are then made into MySQL tables by Django. In Python:
class my_table(models.Model):
    is_correct = models.IntegerField(default=0)
If I do an insert into this table then I want it to automatically insert 0 for the column is_correct unless I specify a different value. I also want this to happen if I am using raw sql or if I am inserting from a MySQL stored procedure.
The default argument only seems to work from within Python. It's not translated into anything that MySQL sees. Is such a thing possible?
Yes, the default argument only works at the Django level; it is not written into the MySQL schema. If you want the default in MySQL itself, perform an ALTER TABLE ... query manually.
You could also subclass models.IntegerField and override db_type, but that would have to be done for every field...
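To illustrate what a database-level default buys you, here is a sketch with stdlib sqlite3 (the MySQL equivalent of the CREATE statement's default clause would be something like ALTER TABLE my_table ALTER COLUMN is_correct SET DEFAULT 0). Once the default lives in the table definition, any client gets it, not just the Django ORM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The default is part of the schema, so raw SQL and stored
# procedures see it too, not only Django.
conn.execute(
    "CREATE TABLE my_table ("
    "id INTEGER PRIMARY KEY, "
    "is_correct INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO my_table DEFAULT VALUES")           # column omitted
conn.execute("INSERT INTO my_table (is_correct) VALUES (1)")  # explicit value
print(conn.execute("SELECT is_correct FROM my_table ORDER BY id").fetchall())
# [(0,), (1,)]
```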
I'm rebuilding a personal project in Django, (a family tree), and I'm working on migrating the actual data from the old awkward database to my new model/schema, both Postgres databases. I've defined them in the DATABASES list on settings.py as 'default' and 'source'.
I've made my models and I'd like to copy the records from the old database into their corresponding table in the new database, but I'm not quite understanding how to set up the models for it to work, since the Django code uses the models/ORM to access/update/create objects, and I only have models reflecting the new schema, not the old ones.
In a coincidental case where I have a table with the exact same schema in the old and new database, I have a management command that can grab the old records from the source using my new ImagePerson model (ported_data = ImagePerson.objects.using('source').all()), since the expected fields are the same. Then I save objects for them in the 'default' database: (obj, created_bool) = ImagePerson.objects.using('default').get_or_create(field=fieldvalue, etc), and it works just like I need it to.
However when I have a table where the old version is missing fields that my new model/table have, I can't use the model to access those records (which makes sense). Am I supposed to also make some kind of legacy version of each model for use in the migration? I saw a tutorial mention running ./manage.py inspectdb --database=source > models.py, but doing so didn't seem to add anything else to my file (and it would seem weird to save temporary/legacy models in there anyway). What's the right way to access the old-formatted records? Is the ORM right?
To give a specific example, I have a Notes table to hold a memory about a specific person or about a specific family. The old table used a 'type' field (1 was for person note, 2 was for family note), and a ref_id that would be the id for the person or family the note applies to. The new table instead has a person_id field and a family_id field.
I'd like my management command to pull all the records from the source table, then, if type=1, look up the person whose id equals the ref_id field and save a new object in the new database for that person. I can grab the records using the new Note model against the old database like this: ported_notes = Note.objects.using('source').all(), but if I then try to access any field (like print(note_row.body)), I get an error that the result object is missing the person_id column:
django.db.utils.ProgrammingError: column notes.person_id does not exist
What's the right way to approach this?
Creating models for your old schema definitely doesn't seem like the right approach.
One solution would be to write a data migration, where you could use raw SQL to fetch your old data, and then use the ORM to write it to your new tables/models:
from django.db import migrations, connections

def transfer_data(apps, schema_editor):
    ModelForNewDB = apps.get_model('yourappname', 'ModelForNewDB')
    # Fetch your old data
    with connections['my_old_db'].cursor() as cursor:
        cursor.execute('SELECT * FROM some_table')
        data = cursor.fetchall()
    # Write it to your new models
    for datum in data:
        # do something with the data / add any
        # additional values needed.
        ModelForNewDB.objects.create(...)

class Migration(migrations.Migration):
    dependencies = [
        ('yourappname', '0001_initial'),
    ]
    operations = [
        migrations.RunPython(transfer_data),
    ]
Then simply run your migrations. One thing to note however:
If you have foreign keys etc. between tables, you will need to be careful about how you order the migrations. This can be done by editing your dependencies. You may even have to add a migration that allows null values for some foreign keys, and then add another one afterwards to correct this.
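Applied to the Notes example from the question, the per-row mapping step inside transfer_data could look like this. The field names and type codes (1 = person note, 2 = family note) come from the question; everything else is a hypothetical pure-Python sketch, with the actual ORM write left out:

```python
# Map an old (type, ref_id) note row onto the new person_id/family_id
# columns. type 1 = person note, type 2 = family note, per the old schema.
def map_note_row(row: dict) -> dict:
    new_row = {"body": row["body"], "person_id": None, "family_id": None}
    if row["type"] == 1:
        new_row["person_id"] = row["ref_id"]
    elif row["type"] == 2:
        new_row["family_id"] = row["ref_id"]
    else:
        raise ValueError(f"unknown note type: {row['type']}")
    return new_row

old_rows = [
    {"type": 1, "ref_id": 42, "body": "a note about a person"},
    {"type": 2, "ref_id": 7, "body": "a note about a family"},
]
print([map_note_row(r) for r in old_rows])
```

Inside the migration you would then call something like Note.objects.create(**map_note_row(row)) for each fetched row.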
Django provides the model field argument default (https://docs.djangoproject.com/en/dev/ref/models/fields/#default), but as far as I know it is only applied when a new object is created through Django.
If we insert/create records with raw queries (using django.db.connection.cursor), we get an exception because Field 'xyz' doesn't have a default value.
How can I declare a database-level default value for a column in the model, the way db_index works?
I hope you guys understand my question.
There is an open ticket 470 to include default values in the SQL schema.
Until this feature has been added to Django, you'll have to manually run alter table statements yourself or write a migration to run them if you want a default value in the SQL schema.
Note that Django allows callables as defaults, so even if this feature is added to Django, it won't be possible to have all defaults in the database.
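Until the ticket lands, the manual route is simple; here is a sketch with stdlib sqlite3 of adding a column with a database-level default (the table/column names are hypothetical; in MySQL the equivalent is ALTER TABLE ... ADD COLUMN ... DEFAULT ..., or ALTER TABLE ... ALTER COLUMN ... SET DEFAULT for an existing column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE things (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO things DEFAULT VALUES")

# Add the column with a DB-level default: existing rows are backfilled,
# and raw inserts that omit the column no longer fail.
conn.execute("ALTER TABLE things ADD COLUMN xyz INTEGER NOT NULL DEFAULT 0")
conn.execute("INSERT INTO things DEFAULT VALUES")
print(conn.execute("SELECT id, xyz FROM things ORDER BY id").fetchall())
# [(1, 0), (2, 0)]
```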
I'm trying to add a new entry by using the admin panel in Django
The problem is that I've already populated my DB with 200 records, and if I try to add a new entry from the admin I get a duplicate key error, with a key value that keeps increasing each time I retry:
error:
duplicate key value violates unique constraint "app_entry_pkey"
admin.py:
admin.site.register(Entry)
model:
class Entry(models.Model):
    title = models.CharField(max_length=255)
    url = models.TextField(max_length=255)
    img = models.CharField(max_length=255)

    def __unicode__(self):
        return self.title
If you created the database table using Django, then most likely your auto_increment value was not updated when you imported the data outside of Django.
It may also be that when you imported the data you did not give the 200 records each their own unique primary key. I think that (some versions of) SQLite will sometimes allow that in mass imports.
MySQL
For example, I’m looking at a MySQL table in Sequel Pro and see that it has an “auto_increment” value of 144. This means that the next primary key value will be 144.
You can see this value for your table (in MySQL) using:
SHOW TABLE STATUS FROM databaseName WHERE name = 'entry'
Replacing “databaseName” with the name of your Django database. Other database software will likely have different syntax.
You can set the next auto_increment value (in MySQL) using:
ALTER TABLE databaseName.entry AUTO_INCREMENT = ###
Again replacing databaseName with the name of your database; and as before, the syntax may vary depending on the database software you’re using.
If this doesn’t help, you may find it useful to show the table’s status and copy that into your question. This might also be useful in tracking down the issue:
SHOW CREATE TABLE databaseName.entry
Postgres
In Postgres, the auto increment counter is called a sequence, and for a serial id column it is typically named after the table and column (e.g. app_entry_id_seq), not after the app_entry_pkey constraint in your error message. You can get its current value using something like:
SELECT last_value FROM app_entry_id_seq;
And you will likely set it to a new value with something like:
ALTER SEQUENCE app_entry_id_seq RESTART WITH ###
or
SELECT setval('app_entry_id_seq', ###)
Note, though, that I do not have a Postgres database handy to test these on. You may also find the following commands useful:
SELECT MAX(id) FROM app_entry
SELECT nextval('app_entry_id_seq')
The latter should generally be larger than the former, and note that “id” is the name of the primary key column in your “entry” model’s table; it may be different in your table. See http://www.postgresql.org/docs/8.1/static/functions-sequence.html for more information.
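The failure mode itself is easy to simulate in plain Python, without any database. In this sketch a dict stands in for the entry table and itertools.count for the auto-increment counter/sequence; the point is only to show why a bulk import with explicit primary keys leaves the counter stale:

```python
from itertools import count

table = {}      # id -> row, standing in for the entry table
seq = count(1)  # standing in for the auto-increment counter / sequence

# Bulk import with explicit primary keys, bypassing the counter --
# this is what leaves the counter stale.
for pk in range(1, 201):
    table[pk] = {"title": f"row {pk}"}

def insert(row):
    pk = next(seq)
    if pk in table:
        raise ValueError(f"duplicate key value violates unique constraint (id={pk})")
    table[pk] = row
    return pk

try:
    insert({"title": "from admin"})  # the admin asks the counter, gets 1
except ValueError as exc:
    print(exc)

# The fix: restart the counter past MAX(id), which is what
# ALTER SEQUENCE ... RESTART / setval() does.
seq = count(max(table) + 1)
print(insert({"title": "from admin"}))  # 201
```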
We're trying to enable a SQL query front-end to our Web application, which is WSGI and uses Python, with SQLAlchemy (core, not ORM) to query a PostgreSQL database. We have several data layer functions set up to assist in query construction, and we are now trying to set something up that allows this type of query:
select id from <table_name> where ... limit ...
In the front end, we have a text box which lets the user type in the where clause and the limit clause, so that the data can be queried flexibly and dynamically from the front end; that is, we want to enable ad hoc querying. So, the only thing we know up front is:
select id from <table_name>
And the user will type in, for example:
where date > <some_date>
where location is not null order by location desc limit 10
using the same back end function. The select, column, and table should be managed by the data layer (i.e. it knows what they are, and the user should not need to know them). However, I'm not aware of any way to get SQLAlchemy to parse both the where clause and the limit clause automatically. What we have right now is a function that returns the table name and the name of the id column, which we use to build a text query that is passed to SQLAlchemy as the input to a text() call.
Is there any way I can do this with SQLAlchemy, or some other library? Or is there a better pattern of which I should be aware, which does not involve parsing the SQL while still allowing this functionality from the front-end?
Thanks a lot! All suggestions will be greatly appreciated.
I'm not sure I follow, but general SQLAlchemy usage looks like:
results = db.session.query(User).filter(User.name == "Bob").order_by(User.age.desc()).limit(10)
That will query the User table and return the ten oldest users named "Bob".
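For the core/text() pattern described in the question, you generally don't need to parse the tail at all: keep the fixed "select id from <table>" prefix in the data layer and append the user-supplied clause, handing the whole string to text()/execute. A stdlib sqlite3 sketch of that composition (table and column names are hypothetical; note that concatenating user input like this is an SQL injection risk, so in practice gate it behind a read-only database role or a validating grammar):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (id INTEGER PRIMARY KEY, location TEXT);
INSERT INTO item VALUES (1, 'a'), (2, NULL), (3, 'c');
""")

def run_adhoc(tail: str):
    # The data layer owns SELECT / column / table; the user supplies
    # only the where/order/limit tail, exactly as in the question.
    sql = f"SELECT id FROM item {tail}"
    return [row[0] for row in conn.execute(sql)]

print(run_adhoc("WHERE location IS NOT NULL ORDER BY id DESC LIMIT 10"))
# [3, 1]
```

With SQLAlchemy core the composed string would simply be wrapped in text() and passed to the engine/connection's execute().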
I added a url column in my table, and now sqlalchemy is saying 'unknown column url'.
Why isn't it updating the table?
There must be a setting when I create the session?
I am doing:
Session = sessionmaker(bind=engine)
Is there something I am missing?
I want it to add any column that I've added to my Table definition in my Python code but that is missing from the database table.
I'm not sure SQLAlchemy supports schema migration that well (at least it didn't the last time I used it).
A couple of options.
Don't manually specify your tables. Use the autoload feature to have SQLAlchemy read the columns from your database automatically. This will require tests to make sure it works, but generally you get the idea. DRY.
Try SQLAlchemy migrate.
Manually update the table after you change the model specification.
In cases when you just add new columns, you can safely use metadata.create_all(bind=engine) to update your schema from your class definitions.
However, this will NOT modify existing columns, nor remove columns from the DB schema if you remove them from the SQLAlchemy definition.