"Associated statement is not prepared" caused by pypyodbc? - python

Error message: ('HY007', '[HY007] [ODBC SQL Server Driver] Associated statement is not prepared')
Based on other posts, I downloaded a newer ODBC driver to better diagnose this error, but it is still throwing the same error.
What is the actual error here, and what is the way around it?
import requests
import pandas as pd
import pypyodbc
import matplotlib.pyplot as plt

conn1 = pypyodbc.connect("Driver={SQL Server};Server=DESKTOP-KOOxxx;"
                         "Database=Horsesxx;Trusted_Connection=yes;",
                         autocommit=True)
mycursor = conn1.cursor()
mycursor.execute("Drop table #temptable "
                 "SELECT * into #temptable FROM (SELECT HorseName, DayCalender FROM horses WHERE Place = 1) AS T1 "
                 "Inner Join (SELECT runnerName, day, WIN_ODDS_BSP FROM betfairdata) AS T3 "
                 "ON T1.HorseName = T3.runnerName AND T1.DayCalender = T3.day "
                 "SELECT WIN_ODDS_BSP FROM #temptable")
conn1.commit()
This statement works when run directly in SQL Server, yet not from Visual Studio.
The statement also works if I drop the temp-table components.
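A common cause of "Associated statement is not prepared" is that the earlier statements in the batch (the DROP and SELECT INTO) emit row-count messages before the final SELECT's result set, which confuses the driver. A minimal sketch of a frequently suggested workaround, assuming the mycursor object and the tables from the question, is to prefix the batch with SET NOCOUNT ON:

# Sketch only: SET NOCOUNT ON suppresses the row-count messages from
# DROP TABLE and SELECT INTO, so the driver only sees the final result set.
sql = ("SET NOCOUNT ON; "
       "DROP TABLE #temptable; "  # assumes #temptable already exists in this session
       "SELECT * INTO #temptable FROM "
       "(SELECT HorseName, DayCalender FROM horses WHERE Place = 1) AS T1 "
       "INNER JOIN "
       "(SELECT runnerName, day, WIN_ODDS_BSP FROM betfairdata) AS T3 "
       "ON T1.HorseName = T3.runnerName AND T1.DayCalender = T3.day; "
       "SELECT WIN_ODDS_BSP FROM #temptable;")
mycursor.execute(sql)
rows = mycursor.fetchall()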

Related

Problem when I try to import big database into SQL Azure with Python

I have a pretty weird problem: I am trying to extract data from a SQL database in Azure with Python.
Within this database there are several tables (I explain this because you are going to see a "for" loop in the code).
I can import some tables without a problem; others (the ones that take the longest, I suppose because of their size) fail.
Not only does it throw an error ([1] 25847 killed /usr/bin/python3), but it kicks me out of the console entirely.
Does anyone know why? Is there an easier way to calculate the size of the database without importing the entire database with pd.read_sql()?
code:
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)
cursor = cnxn.cursor()
query = "SELECT * FROM INFORMATION_SCHEMA.TABLES"
df = pd.read_sql(query, cnxn)
df

DataConContenido = pd.DataFrame({'Nombre': [], 'TieneCon?': [], 'Size': []})
for tablas in df['TABLE_NAME']:
    cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)
    cursor = cnxn.cursor()
    query = "SELECT * FROM " + tablas
    print("vamos con " + str(tablas))
    try:
        df = pd.read_sql(query, cnxn)
        size = df.shape
        if size[0] > 0:
            DataConContenido = DataConContenido.append(dict(zip(['Nombre', 'TieneCon?', 'Size'], [tablas, True, size])), ignore_index=True)
        else:
            DataConContenido = DataConContenido.append(dict(zip(['Nombre', 'TieneCon?', 'Size'], [tablas, False, size])), ignore_index=True)
    except:
        pass
Could it be that the connection drops because the query takes so long, and that is why I get the error named above?
I think the process is getting killed at the line below:
DataConContenido = DataConContenido.append(dict(zip(['Nombre', 'TieneCon?', 'Size'], [tablas, True, size])), ignore_index=True)
You could confirm this by adding a print statement just above it:
print("Querying Completed...")
You are getting killed most likely because your process crossed a limit on the amount of system resources you are allowed to use, typically memory. Loading an entire large table into a DataFrame with SELECT * is exactly that kind of operation.
If possible, query and append in batches rather than doing it in one shot.
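A minimal sketch of that batching idea, assuming the same cnxn and tablas variables as above: pandas' read_sql accepts a chunksize parameter that turns the result into an iterator of DataFrames, so you never hold more than one chunk in memory, and a COUNT(*) query answers the size question without transferring any rows at all:

# Sketch only: chunked reads keep memory bounded; COUNT(*) gets the row
# count without pulling the whole table across the wire.
row_count = pd.read_sql("SELECT COUNT(*) AS n FROM " + tablas, cnxn)['n'].iloc[0]

total = 0
for chunk in pd.read_sql("SELECT * FROM " + tablas, cnxn, chunksize=50_000):
    total += len(chunk)  # process each chunk here instead of keeping it all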

Pyhive Presto insert select * from not running

I can use PyHive to connect to Presto and select data back just fine. I am trying to use PyHive to run "insert into x select * from y" on Presto and it is not running. I am sure I am missing something simple.
from pyhive import presto
import requests
from requests.auth import HTTPBasicAuth
import pandas as pd
req_kw = {'auth': HTTPBasicAuth(user, pw),'verify':False}
conn = presto.connect(host=ht,port=prt,protocol='https',catalog='hive',username=user,requests_kwargs=req_kw)
cursor = conn.cursor()
query='select count(1) from dim.date_dim '
cursor.execute(query)
print(cursor.fetchall())
query='insert into flowersc.date_dim select * from dim.date_dim'
cursor.execute(query)
query='select count(1) from flowersc.date_dim '
cursor.execute(query)
print(cursor.fetchall())
No errors occur, but the results show no data was loaded:
[(16624,)]
[(0,)]
Any help is greatly appreciated.
You need to check (fetch) the result of the INSERT:
query='insert into flowersc.date_dim select * from dim.date_dim'
cursor.execute(query).next() # added .next()
This is needed due to a change in Presto in May 2018 (https://github.com/prestosql/presto/commit/568449b8d058ed8281cc5277bb53902fd044cad7). It is also good practice to verify query results, i.e. to check that your INSERT statement succeeded.
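An equivalent way to force the INSERT to complete, sketched here with the same conn object as above, is to consume the result through the standard DB-API fetch methods rather than calling .next() on the return value:

# Sketch: fetching the INSERT's result row forces Presto to run the
# statement to completion before the next query is issued.
cursor = conn.cursor()
cursor.execute('insert into flowersc.date_dim select * from dim.date_dim')
cursor.fetchone()  # consume the result so the insert actually finishes

cursor.execute('select count(1) from flowersc.date_dim')
print(cursor.fetchall())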

Error while trying to execute the query in Denodo using Python SQLAlchemy

I'm trying to get a table from Denodo using Python and the sqlalchemy library. Here is my code:
from sqlalchemy import create_engine
import os
sql = """SELECT * FROM test_table LIMIT 10 """
engine = create_engine('mssql+pyodbc://DenodoODBC', encoding='utf-8')
con = engine.connect().connection
cursor = con.cursor()
cursor.execute(sql)
df = cursor.fetchall()
cursor.close()
con.close()
When I try to run it for the first time, I get the following error:
DBAPIError: (pyodbc.Error) (' \x10#', "[ \x10#] ERROR: Function 'schema_name' with arity 0 not found\njava.sql.SQLException: Function 'schema_name' with arity 0 not found;\nError while executing the query (7) (SQLExecDirectW)")
[SQL: SELECT schema_name()]
I think the problem might be with create_engine, because when I try to run the code a second time without creating the engine again, everything is fine.
I hope somebody can explain to me what is going on. Thanks :)
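The traceback gives a hint: the failing statement, SELECT schema_name(), is issued by SQLAlchemy's mssql dialect on first connect to discover the default schema, and Denodo does not implement that function. A minimal sketch of one way to sidestep the dialect probe, assuming the same DenodoODBC DSN from the connection URL, is to connect with pyodbc directly:

import pyodbc
import pandas as pd

# Sketch only: plain pyodbc skips SQLAlchemy's mssql dialect
# initialization (and its SELECT schema_name() probe).
con = pyodbc.connect('DSN=DenodoODBC')
df = pd.read_sql("SELECT * FROM test_table LIMIT 10", con)
con.close()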

"No results. Previous SQL was not a query" when trying to query DeltaDNA with Python

I'm currently trying to query a DeltaDNA database. Their Direct SQL Access guide states that any PostgreSQL-ODBC-compliant tool should be able to connect without issue. Following the guide, I set up an ODBC data source in Windows.
I have tried adding SET NOCOUNT ON, changed various formats of the connection string, and changed the table name to (account).(system).(tablename), all to no avail. The simple query works in Excel, and I have cross-referenced how Excel formats everything as well, so it is all the stranger that I get the "not a query" problem.
import pyodbc
conn_str = 'DSN=name'
query1 = 'select eventName from table_name limit 5'
conn = pyodbc.connect(conn_str)
conn.setdecoding(pyodbc.SQL_CHAR,encoding='utf-8')
query1_cursor = conn.cursor().execute(query1)
row = query1_cursor.fetchone()
print(row)
The result is: ProgrammingError: No results. Previous SQL was not a query.
Try it like this:
import pyodbc
conn_str = 'DSN=name'
query1 = 'select eventName from table_name limit 5'
conn = pyodbc.connect(conn_str)
conn.setdecoding(pyodbc.SQL_CHAR,encoding='utf-8')
query1_cursor = conn.cursor()
query1_cursor.execute(query1)
row = query1_cursor.fetchone()
print(row)
You can't do the cursor declaration and the execution on the same line, since then your query1_cursor variable will point to a cursor object which hasn't executed any query.

Error during saving query data into dataframe

I am trying to access a SQLite database, test.db, run the simple query "SELECT * FROM TABLE", and save the result in a DataFrame. The code itself seems fine, as I searched and found similar code that seems to work for others.
NOTE: I am running the code in a Jupyter notebook.
import sqlite3
import pandas as pd
con = sqlite3.connect('test.db')
myFrames = pd.read_sql_query("SELECT * FROM TABLE", con)
I get the error:
OperationalError: near "TABLE": syntax error
(lots of lines in between)
DatabaseError: Execution failed on sql 'SELECT * FROM TABLE': near "TABLE": syntax error
Also, this piece prints out rows just fine, so the connection is working:
conn = sqlite3.connect("test.db")
cur = conn.cursor()
for row in cur.execute("SELECT * FROM test_rank"):
    print(row)
TABLE is a reserved keyword in SQL. Replace it with the real name of the table.
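For example, using the test_rank table that the working snippet above already queries (a sketch assuming the same test.db):

import sqlite3
import pandas as pd

# TABLE is a reserved word; query the actual table name instead.
con = sqlite3.connect('test.db')
myFrames = pd.read_sql_query("SELECT * FROM test_rank", con)
con.close()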
