I am having a difficult time trying to connect to a SQL Server database on Linux using pyodbc. I have an odbc.ini file entry created. I started with this:
import pyodbc
conn = pyodbc.connect('DSN=DSN;Database=DB;UID=UID;PWD=PWD')
cursor = conn.cursor()
cursor.execute('SELECT count(*) FROM dbo.tableA')
for row in cursor.fetchall():
    print(row)
which throws this error:
RuntimeError: Unable to set SQL_ATTR_CONNECTION_POOLING attribute.
I googled that error and added this line after reading some recommendations:
pyodbc.pooling = False
So script changed to this:
import pyodbc
pyodbc.pooling = False
conn = pyodbc.connect('DSN=DSN;Database=DB;UID=UID;PWD=PWD')
cursor = conn.cursor()
cursor.execute('SELECT count(*) FROM dbo.tableA')
for row in cursor.fetchall():
    print(row)
Which resulted in this:
pyodbc.InterfaceError: ('IM003', '[IM003] 䑛瑡䑡物捥嵴佛䉄⁃楬嵢匠数楣楦摥搠楲敶\u2072潣汵\u2064潮⁴敢氠慯敤d\uffff\uffff㢸ꔻ罱\x00\ue5b8鮫罱\x00㳰ꔻ罱\x00\uffff\uffff罱\x00\x00\x00\x00\x00鳭ꕞ罱\x00塰ꕉ罱 (0) (SQLDriverConnect)')
At the suggestion of a coworker I added these 2 lines AFTER the pyodbc.connect line:
conn.setdecoding(pyodbc.SQL_CHAR, encoding='latin1', to=str)
conn.setencoding(str, encoding='latin1')
I tried that with both latin1 and utf-8. Neither works; the script still throws the same interface error with the garbled Chinese characters.
Any ideas?
I had a similar issue with the same description, RuntimeError: Unable to set SQL_ATTR_CONNECTION_POOLING attribute. I had no clue what was happening or why. After a lot of debugging I was able to figure out the cause.
The simple answer:
Reinstall the unixODBC drivers and/or the SQL drivers.
The reason why:
When you install the ODBC drivers first and then the SQL-related drivers, the installation can sometimes override the symlinks on a Unix system. You can find more information in pyodbc's official GitHub issue #847.
You can simply uninstall and then run:
conda install unixodbc
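To verify the reinstall actually fixed things, a quick sanity check (a sketch, not part of the original fix) is to ask Python which unixODBC shared library it resolves:
import ctypes.util
# Sketch: prints the resolved unixODBC library name, e.g. 'libodbc.so.2'.
lib = ctypes.util.find_library('odbc')
print(lib or 'libodbc not found - the unixODBC install is still broken')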
Related
I am connecting a Python program in Visual Studio Code to a SQL database stored in MySQL Workbench 8.0. I am using the PyMySQL connector to do this. However, I am running into an error with code that I took from another question that I posted. Here is the link and the code:
How do I connect a Python program in Visual Studio Code to MySQL Workbench 8.0?
pip install PyMySQL
import pymysql
con = pymysql.Connect(
    host='localhost',
    port=3306,
    user='root',
    password='123456',
    db='test',
    charset='utf8'
)
cur = con.cursor()
sql1 = 'select * from student'
cur.execute(sql1)
data = cur.fetchall()
cur.close()
con.close()
for i in data:
    print(str(i))
Here is a screenshot with my code and the error that I received.
I tried the code that I received from my previous question, but it resulted in another error. I am pretty sure I have copied the code and the database details correctly. I have researched the error but have been unable to find its relevance to connecting Python programs to MySQL Workbench 8.0 with PyMySQL.
First of all, obviously you didn't copy the code in the answer correctly.
Your code has the following errors (judging only from the picture; I don't know what your complete code looks like):
The port number is 3306, NOT 33060. Of course, if you changed it when you installed the database, you need to use the port number you chose.
The fetchall method in data = cur.fetchall is missing its parentheses. It should be data = cur.fetchall().
At the moment it seems that the error in the picture is due to the port number.
Changing it to the correct port number will remove this error, as in the corrected sketch below.
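Putting both fixes together, a corrected sketch of the connection code (keeping the placeholder credentials from the question):
import pymysql
con = pymysql.connect(
    host='localhost',
    port=3306,            # 3306, not 33060 (33060 is MySQL's X Protocol port)
    user='root',
    password='123456',
    db='test',
    charset='utf8'
)
cur = con.cursor()
cur.execute('select * from student')
data = cur.fetchall()     # note the parentheses
cur.close()
con.close()
for i in data:
    print(i)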
I am using Impyla and Python in the CDSW to query data in HDFS and use it. The problem is that sometimes, to get all of the data, I have to go in and manually click on the "Invalidate all metadata and rebuild index" button in HUE.
Is there a way to do this in the workbench with a library or Python code?
I assume you are using something like this to connect to Impala via impyla ... try executing the INVALIDATE METADATA <table_name> command:
from impala.dbapi import connect
conn = connect(host='my.host.com', port=21050)
cursor = conn.cursor()
cursor.execute('INVALIDATE METADATA mytable') # run this
cursor.execute('SELECT * FROM mytable LIMIT 100')
print(cursor.description)  # prints the result set's schema
results = cursor.fetchall()
I have written the following short and simple script in order to connect to a MySQL database called mybase from Python. I have already populated the users table of my database with data in MySQL Workbench. The problem is that when I run the script, I see no results printed in the console; my cmd window just opens for one second and closes automatically. Could someone help me find out what I am doing wrong? This is my script:
import mysql.connector as mysql
db = mysql.connect(host='localhost',
                   database='mybase',
                   user='root',
                   password='xxx',
                   port=3306)
cursor = db.cursor()
q= "SELECT*FROM users"
cursor.execute(q)
for row in cursor.fetchall():
    print(row[0])
I appreciate any help you can provide!
Maybe it's a syntax error with your query. It should look like this to work:
q= "SELECT * FROM users"
If that does not work, something I have found helpful is to test the queries first in a client against a local database and then copy them into my code.
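For completeness, a corrected sketch of the full script (keeping the placeholder credentials from the question):
import mysql.connector as mysql
db = mysql.connect(host='localhost',
                   database='mybase',
                   user='root',       # placeholder credentials from the question
                   password='xxx',
                   port=3306)
cursor = db.cursor()
cursor.execute("SELECT * FROM users")  # note the spaces around * and FROM
for row in cursor.fetchall():
    print(row[0])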
c.execute('select sum(unused), sum(pgsize), sum(payload), count(*) from dbstat')
or
c.execute('select sum(unused), sum(pgsize), sum(payload), count(*) from main.dbstat')
I'm using a sqlite3 database, and I'm trying to get the statistics of the database from the dbstat table. This line works fine on Linux but not on Windows. In both cases I made sure that I'm using the same sqlite3 version and the same Python 3 version. I would love to know why this doesn't work on Windows.
Error:
c.execute('select sum(unused), sum(pgsize), sum(payload), count(*) from dbstat')
sqlite3.OperationalError: no such table: dbstat
@Shawn ... OK, I figured out what was going on. Python on Windows ships a different sqlite3.dll than Python on Linux. The one on Windows didn't have SQLITE_ENABLE_DBSTAT_VTAB enabled. To make it work, you can compile the sqlite3.dll yourself from the source code like @Shawn said, or you can download the compiled DLL from the SQLite website, where it has the option enabled, and add it to the DLLs folder in the Python directory.
You can check the sqlite3 compile options by calling
PRAGMA compile_options;
If you want to check the Python sqlite3.dll, run this Python script:
import sqlite3
conn = sqlite3.connect('test.db')
c = conn.cursor()
c.execute('PRAGMA compile_options;')
available_pragmas = c.fetchall()
print(available_pragmas)
conn.close()
The Python 3.7 output will look like this:
[('COMPILER=msvc-1916',), ('ENABLE_FTS4',), ('ENABLE_FTS5',), ('THREADSAFE=1',)]
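As a follow-up (a small sketch building on the script above), you can test for the dbstat flag directly; note that the compile_options output drops the SQLITE_ prefix:
# Sketch: check the options fetched above for the dbstat virtual table flag.
if any('ENABLE_DBSTAT_VTAB' in opt for (opt,) in available_pragmas):
    print('dbstat virtual table is available')
else:
    print('this sqlite3 build lacks SQLITE_ENABLE_DBSTAT_VTAB')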
I know this kind of question has been asked before, but I still couldn't find the answer I'm looking for. I'm doing a bulk insert of a CSV file into a SQL Server table, but I am getting the error shown below:
My Code:
import pyodbc

df_output.to_csv('new_file_name.csv', sep=',', encoding='utf-8')
conn = pyodbc.connect(r'DRIVER={SQL Server}; PORT=1433; SERVER=Dev02; DATABASE=db;UID='';PWD='';')
curr = conn.cursor()
print("Inserting!")
curr.execute("""BULK INSERT STG_CONTACTABILITY_SCORE
FROM 'C:\\Users\\kdalal\\callerx_project\\caller_x\\new_file_name.csv'
WITH
(
CODEPAGE = 'ACP',
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
);""")
conn.commit()
The Error:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL
Server Driver][SQL Server]Cannot bulk load because the file
"C:\Users\kdalal\callerx_project\caller_x\new_file_name.csv"
could not be opened. Operating system error code 3(The system cannot
find the path specified.). (4861) (SQLExecDirectW)')
'new_file_name.csv' is in the specified path. I tried changing the path to just 'new_file_name.csv', since it is in the folder from which I am running the script, but it still throws a
file does not exist
Can you please tell me what I am doing wrong here? Thanks a lot in advance.
The BULK INSERT statement is executed on the SQL Server machine, so the file path must be accessible from that machine. You are getting "The system cannot find the path specified" because the path
C:\\Users\\kdalal\\callerx_project\\caller_x\\new_file_name.csv
is a path on your machine, not the SQL Server machine.
Since you are dumping the contents of a dataframe to the CSV file, you could simply use df.to_sql to push the contents directly to SQL Server without an intermediate CSV file. To improve performance you can tell SQLAlchemy to use pyodbc's fast_executemany option, as described in the related question:
Speeding up pandas.DataFrame.to_sql with fast_executemany of pyODBC
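A minimal sketch of that approach (the driver name and credentials are assumptions; adjust them to whatever ODBC driver and login your machine actually uses):
import sqlalchemy as sa

# Sketch: push the dataframe straight to SQL Server, skipping the CSV step.
# 'user:password' and the driver name are placeholders, not values from the question.
engine = sa.create_engine(
    'mssql+pyodbc://user:password@Dev02/db?driver=ODBC+Driver+17+for+SQL+Server',
    fast_executemany=True,  # pyodbc bulk-parameter speed-up (SQLAlchemy 1.3+)
)
df_output.to_sql('STG_CONTACTABILITY_SCORE', engine, if_exists='append', index=False)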