In general, I know how to execute custom SQL on syncdb. I'm using Django 1.7, which has migrate, but due to some special fields my app is not yet ready for it, so Django falls back to syncdb. At least it now uses sqlparse to properly split the contents of the file into single statements.
However, with sqlparse installed, Django sends the whole file at once and I get
Failed to install custom SQL for myApp.myModel model: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //\n\nDROP PROCEDURE IF EXISTS myProcedure //\n\nCREA' at line 1")
When I uninstall sqlparse, Django does not respect the DELIMITER statement and thus produces chunks of code that don't make sense at all.
I tried the trick of putting a "-- comment" after every line with a semicolon, but it does not work: the comments are removed.
As mentioned in a comment here:
The delimiter is used only by the mysql client (not by the API, driver, etc.).
So, it won't work.
I encountered a similar problem when using db.execute(sql_file) with South (for my Django migration), where my sql_file had:
DROP PROCEDURE IF EXISTS ....
DELIMITER //
CREATE PROCEDURE ....
BEGIN
....
END
...
I removed the DROP and DELIMITER bits and added a label to the procedure creation, and it works for me:
CREATE PROCEDURE xxxx
label: BEGIN
    ....
    ;
END label
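Since the DELIMITER keyword is client-side only, the whole CREATE PROCEDURE can also be sent in one piece over the API. Here is a minimal sketch using Django's own connection; the procedure name and body are placeholders, not the original code:

from django.db import connection

# Placeholder procedure; name and body are illustrative only.
CREATE_PROC = """
CREATE PROCEDURE myProcedure()
label: BEGIN
    SELECT 1;
END label
"""

with connection.cursor() as cursor:
    cursor.execute('DROP PROCEDURE IF EXISTS myProcedure')
    # Sent as a single statement over the API, so no DELIMITER is needed
    # and the semicolon inside BEGIN ... END is parsed by the server.
    cursor.execute(CREATE_PROC)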
Related
I am trying to automate a complete application schema rebuild in Django: basically, drop all the old tables, delete the migration and cache file(s), delete the migration history, and then run makemigrations and migrate. I understand this will have some people scratching their heads, but I like to configuration-control my test data and load what I need from CSV files, and "starting from scratch" is much simpler with this model.
I'm hopeful the automation will work against all the different Django-supported databases, so I'm trying to come up with an approach that's generic (albeit with some gaps and brute force).
As noted above, the first step is to drop all the existing tables. Under the assumption that most model changes aren't deletes, I'm doing some class introspection to identify all the models and then attempting to discern if the related tables exist:
from django.db import connection, models
from django.db.utils import OperationalError

# get_all_app_models() is my own helper that collects the app's model classes.
app_models = get_all_app_models(models.Model, app_label)

# Loop through and remove models that no longer exist by doing a count,
# which should work no matter the SQL DBMS.
with connection.cursor() as cursor:
    for app_model in app_models[:]:
        try:
            cursor.execute(f'select count(*) from {app_model._meta.db_table}')
        except OperationalError:
            app_models.remove(app_model)
<lots more code, doing cursor and other stuff that works OK>
Once the above completes, app_models contains the models whose tables remain, and the code then works through dropping them (which itself isn't trivial to do generically).
The processing is contained in a Django view/form, and once it completes it attempts to render a simple "okey dokey" page. The problem is that an exception is thrown during the render saying "no such table: xxx", and it refers to the execute statement. There is no other call-stack context displayed.
The table mentioned was indeed referenced in the execute; in fact, it was the last one in the original app_models. Somehow Django seems to retain error information after the cursor is closed (the 'with' cleanup) and to act on it during the render processing. I tried executing a "good" SQL statement to "clear" the error, but no luck. I tried to del the cursor, also no luck.
Update 3/25: After many hours of trial and error, I have found that if I place the above code (along with the rest of the processing) in a separate function and call it from the view/form function, I no longer get the spurious call-back to the execute statement. I researched SQLite quite a bit, and it seems that OperationalError is handled differently than (say) DataError. I tried other approaches to clearing the error (e.g. closing the connection) to no avail. I suspect the web server is doing something tricky with the stack and is treating the OperationalError incorrectly, and the function nesting "hides" it. Cheers.
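A minimal sketch of that workaround, with illustrative names rather than the actual code:

from django.shortcuts import render

def rebuild_schema():
    """All cursor work happens in its own frame, so the OperationalError
    swallowed during table probing can no longer surface during rendering."""
    # ... the probing loop above, plus the drop/rebuild steps ...

def rebuild_view(request):
    rebuild_schema()
    return render(request, 'rebuild_done.html')  # template name is a placeholder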
I am attempting to execute a raw SQL insert statement in SQLAlchemy. SQLAlchemy throws no errors when the constructed insert statement is executed, but the rows do not appear in the database.
As far as I can tell, it isn't a syntax error (see no. 2 below), it isn't an engine error since the ORM can execute an equivalent write properly (see no. 1), and it's finding the table it's supposed to write to (see no. 3). I think it's a problem with a transaction not being committed, and I have attempted to address this (see no. 4), but this hasn't solved the issue. Is it possible to create a nested transaction, and what would start the 'first' one, so to speak?
Thank you for any answers.
Some background:
1. I know that the ORM facilitates this and have used this feature, and it works, but it is too slow for our application. We decided to try raw SQL for this particular write function, given how often it's called, and to use the ORM for everything else. An equivalent method using the ORM works perfectly, and the same engine is used for both, so it can't be an engine problem, right?
2. I've issued an example of the SQL that the raw-SQL method constructs directly against the database, and it ran fine, so I don't think it's a syntax error.
3. It's communicating with the database properly and can find the table, since any syntax errors in table and column names throw a programmatic error, so it's not just throwing stuff into the 'void', so to speak.
4. My first thought after reading around was that it was a transaction error, with a transaction being created and never closed, so I constructed the execute statement as follows to ensure a transaction was properly created and committed:
with self.Engine.connect() as connection:
    connection.execute(Insert_Statement)
    connection.commit
The so-called 'Insert_Statement' has been converted to text using the SQLAlchemy text function. I don't quite understand why it won't execute if I pass the constructed string directly to execute, but I mention it in case it's relevant.
Other things that may be relevant:
Python 3 is running on one EC2 instance and the Postgres database on another. The table in question is a TimescaleDB hypertable taking real-time data, hence the need for very fast writes, but that's probably not relevant.
I'm currently using pg8000 as the dialect, for no particular reason other than that psycopg2 was throwing errors when I tried to execute an equivalent method using the ORM.
Just so this question is answered in case anyone else ends up here:
The issue was a failure to call commit as a method, as @snakecharmerb pointed out. Gord Thompson also provided an alternate method using 'begin', which commits automatically, rather than 'connect', which gives a 'commit as you go' style transaction.
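For completeness, here is a sketch of both corrected variants in SQLAlchemy 1.4+/2.0 style; the connection URL and table are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql+pg8000://user:pass@host/db')  # placeholder URL
insert_statement = text('INSERT INTO readings (ts, value) VALUES (:ts, :value)')  # illustrative table

# 'Commit as you go': commit must actually be *called* as a method.
with engine.connect() as connection:
    connection.execute(insert_statement, {'ts': '2021-01-01T00:00:00', 'value': 1.0})
    connection.commit()  # the missing parentheses were the bug

# 'Begin once': the block commits on success and rolls back on error.
with engine.begin() as connection:
    connection.execute(insert_statement, {'ts': '2021-01-01T00:00:01', 'value': 2.0})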
I'm going to post a question and then post the resolution to the problem. I found the solution after being stuck for some time, so I thought it might be valuable for other people using the inspectdb command to generate models in Django from a MySQL database, using MySQL Connector for Python 3.3.
command:
C:\myproj\myproj> manage.py inspectdb --database=mydb > models.py
Resulting Error (Traceback omitted for brevity):
django.db.utils.IntegrityError: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''decimal' at line 1
So the solution is very quick. Please note that the credit goes to Marcin Miklaszewski, who posted the bug and the related solution here: http://bugs.mysql.com/bug.php?id=72478
However, I thought that it would be easier to find here on StackOverflow.
There is an error at line 65 of the file (assuming you have Python installed in C:\Python33):
C:\Python33\lib\site-packages\mysql\connector\django\introspection.py
Replace line 65:
"table_schema = DATABASE() AND data_type='decimal", [table_name])"
with:
"table_schema = DATABASE() AND data_type='decimal'", [table_name])"
(note the missing apostrophe after the word decimal in the first version).
Now your inspectdb command will run correctly.
I dropped the database that I had previously created for Django using:
dropdb <database>
but when I go to the psql prompt and type \d, I still see the relations there:
How do I remove everything from Postgres so that I can start everything from scratch?
Most likely, somewhere along the line, you created your objects in the template1 database (or, in older versions, the postgres database), and every time you create a new db it has all those objects in it. You can either drop the template1 / postgres database and recreate it, or connect to it and drop all those objects by hand.
Chances are that you never created the tables in the correct schema in the first place. Either that, or your dropdb failed to complete.
Try to drop the database again and see what it says. If that appears to work, then go into postgres, type \l, and put the output here.
I'm trying to save a Stripe (the billing service) company id (around 200 characters or so) to my database in Django.
The specific error is:
database error: value too long for type character varying(4)
How can I enable Django to allow for longer values?
I saw:
value too long for type character varying(N)
and:
Django fixture fails, stating "DatabaseError: value too long for type character varying(50)"
My database is already encoded for UTF-8, according to my webhost.
EDIT: I see that one answer recommends making the column wider. Does that involve modifying the PostgreSQL database?
My specific system is Webfaction, a CentOS shared machine, with Django running on PostgreSQL. I would really appreciate a conceptual overview of what's going on and how I can fix it.
Yes, make the column wider. The error message is quite clear: your 200 characters are too big to fit in a varchar(4).
First, update your model field's max_length attribute from 4 to a number you expect will be long enough to contain the data you're feeding it (a model sketch follows the list of options below).
Next, you have to update the database column itself, as Django will not automatically alter existing columns.
Here are a few options:
1: Drop the database and run syncdb again. Warning: you will lose all your data.
2: Manually update the column via SQL:
Type python manage.py dbshell to get into your database shell, then run:
ALTER TABLE my_table ALTER COLUMN my_column TYPE VARCHAR(200)
3: Learn and use a database migration tool like Django South, which will help keep your database in sync with your model code.
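As a hedged illustration of step 1, the model change might look like this; the model and field names are assumptions, not taken from the question:

# models.py -- model and field names are assumptions
from django.db import models

class Customer(models.Model):
    stripe_company_id = models.CharField(max_length=255)  # was max_length=4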
Using Django 1.11 and Postgres 9.6, I ran into this for no apparent reason.
I set max_length=255 in the migration file and executed:
manage.py migrate
Then I set the correct length on the model's max_length, ran another makemigrations, and then ran migrate again.
Just in case someone runs into this error with Django 3 and Postgres:
Step 1: Go to your migrations folder.
Step 2: Navigate to your most recent migration file.
Step 3: Check the operations list in the migration file.
Step 4: Update the field's (or the table column's) max_length to a higher number that accommodates your data (a sketch of such a migration follows).
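As a hedged sketch of what that migration operation might look like (the app, model, and field names are placeholders):

# migrations/0002_widen_stripe_id.py -- all names are placeholders
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [('billing', '0001_initial')]
    operations = [
        migrations.AlterField(
            model_name='customer',
            name='stripe_company_id',
            field=models.CharField(max_length=255),
        ),
    ]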