Hello, I get the following error on certain stored procedures: "Use multi=True when executing multiple statements". We're using Python 3.7, Flask, and mysql-connector. I encountered this problem on a stored procedure previously and was able to get rid of it by changing a select statement from something like "select column1, column2 from table" to a "select *". I've also run into the same error with another stored procedure that uses a cross join, but have been unable to solve it.
These stored procedures run fine from Workbench, but they have issues when called through the API, which uses the mysql-connector callproc method. They were also working fine in the API back when I wrote the API code; this problem seems to have appeared recently. Below is a generic code snippet.
cursor = cnxn.cursor()
cursor.callproc('storedprocedure', args)  # error line
I've tried setting multi=True on the cursor, but that doesn't do anything. I was wondering if anybody had any ideas on what might cause this issue.
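For what it's worth, here is a minimal sketch of how result sets from callproc are normally consumed with mysql-connector (the connection details, procedure name, and arguments are placeholders). Fetching every pending result set before reusing the cursor sometimes avoids errors about leftover or multiple statements:

import mysql.connector

cnxn = mysql.connector.connect(user='user', password='password', database='db')  # placeholder credentials
cursor = cnxn.cursor()
args = ()  # placeholder procedure arguments
cursor.callproc('storedprocedure', args)
# callproc exposes the procedure's result sets through stored_results();
# read them all so nothing is left pending on the connection
for result in cursor.stored_results():
    rows = result.fetchall()
cursor.close()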
I am attempting to execute a raw SQL insert statement in SQLAlchemy. SQLAlchemy throws no errors when the constructed insert statement is executed, but the rows do not appear in the database.
As far as I can tell, it isn't a syntax error (see point 2 below), it isn't an engine error since the ORM can execute an equivalent write properly (see point 1), and it's finding the table it's supposed to write to (see point 3). I think it's a problem with a transaction not being committed and have attempted to address this (see point 4), but that hasn't solved the issue. Is it possible to create a nested transaction, and what would start the 'first' one, so to speak?
Thank you for any answers.
Some background:
1. I know that the ORM facilitates this and have used that feature, and it works, but it is too slow for our application. We decided to try raw SQL for this particular write function because of how often it's called, and to use the ORM for everything else. An equivalent method using the ORM works perfectly, and the same engine is used for both, so it can't be an engine problem, right?
2. I've issued an example of the SQL that the raw-SQL method constructs directly to the database, and it reads in fine, so I don't think it's a syntax error.
3. It's communicating with the database properly and can find the table, since any syntax errors in table or column names throw a programmatic error, so it's not just throwing data into the 'void', so to speak.
4. My first thought after reading around was that it was a transaction error, that a transaction was being created and not closed, so I constructed the execute statement as follows to ensure a transaction was properly created and committed.
with self.Engine.connect() as connection:
    connection.execute(Insert_Statement)
    connection.commit
The so-called 'Insert_Statement' has been converted to text using the SQLAlchemy text() function. I don't quite understand why it won't execute if I pass the constructed string directly to execute, but I mention it in case it's relevant.
Other things that may be relevant:
Python 3 is running on one EC2 instance and the Postgres database on another. The table in question is a TimescaleDB hypertable taking real-time data, hence the need for very fast writes, but that's probably not relevant.
I'm currently using pg8000 as the dialect, for no particular reason other than that psycopg2 was throwing errors when trying to execute an equivalent method using the ORM.
Just so this question is answered in case anyone else ends up here:
The issue was a failure to call commit as a method, as @snakecharmerb pointed out. Gord Thompson also provided an alternative using engine.begin(), which commits automatically, rather than engine.connect(), which is a 'commit as you go' style of transaction.
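For completeness, a minimal sketch of both styles, assuming SQLAlchemy 1.4+ used in the 2.0 style (the connection URL, table, and columns are placeholders):

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+pg8000://user:password@host/dbname")  # placeholder URL
insert_statement = text("INSERT INTO readings (ts, value) VALUES (:ts, :value)")  # placeholder table

# 'commit as you go': commit must be called as a method on the connection
with engine.connect() as connection:
    connection.execute(insert_statement, {"ts": "2021-01-01 00:00:00", "value": 1.0})
    connection.commit()  # note the parentheses

# 'begin once': the block opens a transaction and commits it automatically on success
with engine.begin() as connection:
    connection.execute(insert_statement, {"ts": "2021-01-01 00:00:01", "value": 2.0})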
I am trying to run a large query set up at work, possibly a stored procedure. I am unfamiliar with large queries, but I think that is the issue. I can use pyodbc to connect and run a simple "SELECT * FROM db;" and it works fine, and if I run the full query in MSSMS it works fine, but not when I copy the large query into the query variable.
I have found a few articles that say to add "SET NOCOUNT ON;". I tried that and it didn't work either.
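For reference, a rough sketch of how that advice is usually combined with pyodbc; the connection string and the large_query variable are placeholders, and nextset() is used to step past statements that return no rows:

import pyodbc

conn = pyodbc.connect(connection_string)  # placeholder connection string
cursor = conn.cursor()

# SET NOCOUNT ON stops "rows affected" messages from being returned as extra result sets
cursor.execute("SET NOCOUNT ON; " + large_query)  # large_query holds the script copied from SSMS

# skip past any statements that produced no rows until an actual result set is found
while cursor.description is None and cursor.nextset():
    pass
if cursor.description is not None:
    rows = cursor.fetchall()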
The Python error is:
"No results. Previous SQL is not a query"
Once again, the query works fine in MSSMS, so any guidance would be appreciated.
Thanks.
I've recently migrated a MySQL database from local to Google Cloud Platform in preparation for deployment. When I first did this, I encountered:
MySQLdb._exceptions.OperationalError: (1055, "Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'testing.Event.EventID' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by")
Annoying, but no problem: a quick search revealed it could be turned off in the flags section of the GCP console, and that seemed OK as I wasn't too worried about the risks of turning it off. This worked, or so I thought. The same issue continues to appear days after setting the "TRADITIONAL" flag on my GCP SQL instance.
Even when I run the query:
SELECT @@sql_mode;
The result I get is:
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
This does not contain the only_full_group_by setting, and nonetheless I receive the error: "this is incompatible with sql_mode=only_full_group_by".
Is there some reason this error would continue to appear even though only_full_group_by is not in the sql_mode value that the error message says causes it?
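In case it helps anyone diagnosing the same thing, here is a small sketch (connection details are placeholders) that checks both values from the same MySQLdb driver the application uses. The global value reflects the instance flag, while an already-open connection, a connection-pool setting, or an ORM option can still set only_full_group_by for the session:

import MySQLdb  # the same driver that raised the OperationalError

conn = MySQLdb.connect(host="host", user="user", passwd="password", db="testing")  # placeholders
cur = conn.cursor()
# the global value reflects the instance flag; the session value is what queries actually run under
cur.execute("SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode")
print(cur.fetchone())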
What is the difference between how the two work? I get a result when executing the command below:
spark.sql("select * from MetadataTable").show()
But when I try to run cursor.execute("select * from MetadataTable"), it shows me the error:
"metadatatable" does not exist
What should I do to access the table "metadatatable" with cursor.execute?
At a glance, spark.sql is the Spark way to use SQL to work with your data.
cursor.execute does not appear to be Spark code. Perhaps it is Python code for interacting with a database.
You could share the documentation if this seems wrong, but reviewing that documentation should explain what it is.
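To illustrate the distinction, a rough sketch (the path and table names are placeholders): spark.sql resolves MetadataTable against Spark's own catalog, for example a temporary view, while a DB-API cursor such as one from pyodbc or pyhive talks to an external database, which only knows the tables that actually exist there:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

# register a DataFrame as a temporary view so spark.sql can see it in Spark's catalog
df = spark.read.parquet("/path/to/metadata")  # placeholder path
df.createOrReplaceTempView("MetadataTable")
spark.sql("select * from MetadataTable").show()

# a DB-API cursor queries a separate database entirely; the same name only works
# if a table called metadatatable actually exists in that database
# cursor.execute("select * from MetadataTable")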
I am trying to fetch CLOB data from an Oracle server, and the connection is made through an SSH tunnel.
When I tried to run the following code:
# assumes a prior cursor.execute() selecting an id column and a CLOB column
(id, clob) = cursor.fetchone()
print('one fetched')
clob_data = clob.read()
print(clob_data)
The execution freezes.
Can someone help me figure out what's wrong here? I have referred to the cx_Oracle docs and the example code is just the same.
It is possible that there is a round trip taking place that is not being handled properly by the cx_Oracle driver. Please create an issue here (https://github.com/oracle/python-cx_Oracle/issues) with a few more details such as platform, Python version, Oracle database/client version, etc.
You can probably work around the issue, however, by simply returning the CLOBs as strings as can be seen in this sample: https://github.com/oracle/python-cx_Oracle/blob/master/samples/ReturnLobsAsStrings.py.
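For anyone who can't follow the link, a minimal sketch of the approach that sample takes (connection string and query are placeholders): an output type handler tells cx_Oracle to fetch CLOB columns directly as strings, so no separate LOB reads are needed:

import cx_Oracle

def output_type_handler(cursor, name, default_type, size, precision, scale):
    # fetch CLOBs as long strings and BLOBs as long raw values instead of LOB locators
    if default_type == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize=cursor.arraysize)
    if default_type == cx_Oracle.BLOB:
        return cursor.var(cx_Oracle.LONG_BINARY, arraysize=cursor.arraysize)

connection = cx_Oracle.connect("user/password@localhost:1521/service")  # placeholder DSN (via the SSH tunnel)
connection.outputtypehandler = output_type_handler
cursor = connection.cursor()
cursor.execute("SELECT id, clob_col FROM some_table")  # placeholder query
for row_id, clob_data in cursor:
    print(row_id, clob_data)  # clob_data is already a str, no .read() needed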