I am using pymssql to execute an MS SQL stored procedure from Python. When I try to execute a stored procedure, it does not seem to get executed: the code completes without any error, but upon verifying I can see the procedure was not actually run. What baffles me is that ordinary queries like SELECT work fine. What might be missing here? I have tried the two approaches below. The stored procedure does not take any parameters or arguments.
cursor.execute("""exec procedurename""")
and
cursor.callproc('procedurename',())
EDIT: The procedure loads a table with some of the latest data. When I execute the proc locally, it loads the table with the latest data, but the latest data is not loaded when the proc is run from Python using pymssql.
Thanks to AlwaysLearning for providing the crucial clue to fixing the issue: I added connection.commit() after the procedure call and it started working!
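For reference, a minimal sketch of the working version, with hypothetical connection details and the parameterless procedure from the question:

import pymssql

connection = pymssql.connect(server='myserver', user='user',
                             password='pass', database='mydb')  # hypothetical details
cursor = connection.cursor()
cursor.callproc('procedurename', ())
connection.commit()  # without this, the procedure's changes are never committed
connection.close()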
I have made a stored procedure in MySQL which accepts several arguments and does its thing.
I have no problem executing the following queries in MySQL:
CALL my_pr(var1, var2, var3); CALL my_pr(var4, var5, var6);
But when I try to execute them (or any other two statements at once) via Python, I get the following error:
Commands out of sync; you can't run this command now
When I execute them one by one, however, everything works smoothly.
I am adding each statement to a list and then executing them via:
for stm in sql_stms:
    mycursor.execute(stm)
mydb.commit()
In some code above, each stm is set to be either a single query or a multi-statement query. My sql_stms contains several INSERT, SELECT and DELETE queries, plus tens (or sometimes hundreds) of calls to a stored procedure.
My goal is to speed up the running process; currently the slowest part of my code is submitting queries to SQL, so I believe that submitting multiple queries at once will be somewhat faster.
Any ideas and suggestions are welcomed.
The cursor is probably not expecting more than one result set. With mysql-connector-python, multi-statement execution is enabled per call: pass multi=True to execute() and iterate over the result sets it returns:
for result in mycursor.execute(stm, multi=True):
    if result.with_rows:
        result.fetchall()  # every statement's result set must be consumed
The interface is not designed to easily get two "result sets" at once.
There is very little advantage in trying to run two statements together. Simply run them one at a time.
You can, on the other hand, build a third SP that makes those two CALLs. But, again, why bother?
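For what it's worth, a minimal sketch of the one-at-a-time approach, assuming mysql-connector-python (implied by the multi=True suggestion above) and hypothetical connection details and arguments:

import mysql.connector

mydb = mysql.connector.connect(host='localhost', user='user',
                               password='pass', database='test')
mycursor = mydb.cursor()
# one CALL per argument tuple instead of batching them into one string
for args in [(1, 'a', 10), (2, 'b', 20)]:
    mycursor.callproc('my_pr', args)
mydb.commit()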
I'm after a way of querying Impala through Python which enables you to keep a connection open and pass queries to it.
I can connect quite happily to Impala using this sort of code:
import subprocess
sql = 'some sort of sql statement;'
cmds = ['impala-shell','-k','-B','-i','impala.company.corp','-q', sql]
out,err = subprocess.Popen(cmds, stderr=subprocess.PIPE, stdout=subprocess.PIPE).communicate()
print(out.decode())
print(err.decode())
I can also switch out the -q and sql for -f and a file with sql statements as per the documentation here.
When I run this for multiple SQL statements, the name node it uses is the same for all the queries, and it will stop if there is a failure in the code (unless I use the option to continue); this is all expected.
What I'm trying to get to is a setup where I can run a query or two, check the results using some Python logic, and then continue if they meet my criteria.
I have tried splitting up my code into individual queries using sqlparse and running them one by one. This works well in isolation, but consider a statement like drop table if exists x; followed by create table x (blah string);. If x did actually exist, the second statement can run on a different node that the metadata change from the drop hasn't reached yet, and it fails with a table x already exists or similar error.
I'd think that, as well as getting around this metadata issue, it would make more sense to keep a connection to Impala open while I run all the statements, but I'm struggling to work this out.
Does anyone have any code that has this functionality?
You may want to look at impyla, the Impala/Hive Python client, if you haven't done so already.
As for the second part of your question, using Impala's SYNC_DDL option will guarantee that DDL changes are propagated across impalads before the next DDL statement is executed.
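A minimal sketch with impyla, reusing the host from the impala-shell example above and assuming Kerberos auth (to match the -k flag); the port and auth details will vary by cluster:

from impala.dbapi import connect

conn = connect(host='impala.company.corp', port=21050,
               auth_mechanism='GSSAPI')  # GSSAPI ~ Kerberos, like impala-shell -k
cursor = conn.cursor()
cursor.execute('SET SYNC_DDL=1')  # wait for DDL changes to reach all impalads
cursor.execute('drop table if exists x')
cursor.execute('create table x (blah string)')
cursor.execute('select count(*) from x')
print(cursor.fetchall())
conn.close()

Because every statement here goes through one open connection, and SYNC_DDL makes DDL visible cluster-wide before returning, the drop/create race described above shouldn't occur.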
I have been working extensively with peewee and PostgreSQL for months. Suddenly this started happening: if I run any query command and get an error, then all subsequent commands return peewee.InternalError: current transaction is aborted, commands ignored until end of transaction block.
I thought this behavior started when I upgraded peewee from 3.5.2 to 3.7.2, but I have since downgraded and the behavior continues. This has definitely not always happened.
In the simplest case, I have a database table with exactly one record. I try to create a new record with the same id and I get an IntegrityError as expected. If I then try to run any other query commands on that database, I get the InternalError as above.
This does not happen with an sqlite database.
I have reinstalled peewee and psycopg2, to no avail.
What am I missing?
Try setting autorollback=True in the Database class. You can follow the docs here.
Your problem is already described in this issue.
While it's fine to use autorollback, it's much better to explicitly manage your transactions so that, where an integrity error might occur, you catch the error and explicitly roll back. For instance, if you have a user signup page and there's a unique constraint on the username, you might wrap the insert in a try/except and roll back upon failure.
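A minimal sketch of that pattern, with a hypothetical User model and database name; peewee's atomic() block rolls the transaction back automatically when the exception propagates, so the connection is not left in the aborted state:

from peewee import PostgresqlDatabase, Model, CharField, IntegrityError

db = PostgresqlDatabase('mydb')  # hypothetical database name

class User(Model):
    username = CharField(unique=True)

    class Meta:
        database = db

try:
    with db.atomic():  # rolled back automatically if the block raises
        User.create(username='alice')
except IntegrityError:
    print('username already taken')  # safe to run further queries now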
I have an existing system where an oracle database is populated with metadata by a series of Python files. There are around 500, and the current method of running them one at a time is taking around an hour to complete.
To cut down on this runtime, I've tried threading the individual files, running them concurrently, but I've been getting the error
pyodbc.IntegrityError: ('23000', '[23000] [Oracle][ODBC][Ora]ORA-00001: unique constraint (DB.PK_TABLE_NAME) violated\n (1) (SQLExecDirectW)')
with a traceback to the following call:
File "C:\file.py", line 295, in ExecuteSql
cursor.execute(Sql)
Can anyone shed any light on this for me? This doesn't seem to happen if a file which has thrown the error is then run individually, which leads me to suspect an access issue where two files are trying to write to the DB at once. I hope this is not the case, as that would likely veto this approach entirely.
I eventually realised that the issue was coming from the way the SQL submitted to the database was being constructed.
The ID for the table was being generated by a "GetNext()" function, which got the current max ID from the table and incremented it by one. This failed when multiple files ran at the same time and tried to use the same generated ID.
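A sketch of the race using the max-plus-one pattern described above, and, as one possible fix, an Oracle sequence (the sequence name is hypothetical; a sequence hands out values atomically, so concurrent writers cannot collide):

# the racy pattern: two threads can read the same MAX(id)
# and both try to insert with max_id + 1
def get_next(cursor, table):
    cursor.execute("SELECT MAX(id) FROM " + table)
    (max_id,) = cursor.fetchone()
    return (max_id or 0) + 1

# one possible fix, assuming a sequence created with:
#   CREATE SEQUENCE table_name_seq;
def get_next_from_sequence(cursor):
    cursor.execute("SELECT table_name_seq.NEXTVAL FROM dual")
    (next_id,) = cursor.fetchone()
    return next_id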
I'm connecting to MySQL with the MySQLdb module. I don't want to use Python's time functions: I want to know how long the query ran within MySQL, i.e. the number I see after I've run a query within MySQL directly.
I do see a thread where this is addressed as something one could eventually dig down to, but I was hoping that since MySQL reports that number, the Python connection would have picked it up somewhere.
Maybe this will help:
SET profiling = 1;
-- run your query here
SHOW PROFILES;
See here: http://dev.mysql.com/doc/refman/5.7/en/show-profile.html
Because the above commands will be removed in a future version, the Performance Schema can be used instead: http://dev.mysql.com/doc/refman/5.7/en/performance-schema.html and, for more detail on query profiling with it, http://dev.mysql.com/doc/refman/5.7/en/performance-schema-query-profiling.html.
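Since profiling is per-session, the SET and the SHOW PROFILES must travel over the same connection as the query being timed. A minimal sketch with MySQLdb, using hypothetical connection details and table name:

import MySQLdb

db = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='test')
cursor = db.cursor()
cursor.execute('SET profiling = 1')
cursor.execute('SELECT COUNT(*) FROM mytable')  # the query to time
cursor.fetchall()
cursor.execute('SHOW PROFILES')
for query_id, duration, query in cursor.fetchall():
    print(query_id, duration, query)  # duration is the server-side time in seconds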