Connect to SQL Server using Python from a Raspberry Pi

I am trying to connect to a SQL Server database using Python. I have followed
http://blog.tryolabs.com/2012/06/25/connecting-sql-server-database-python-under-ubuntu/
I used the following Python code to connect to Microsoft SQL Server 2014 (managed through SQL Server Management Studio) with the settings above.
import pyodbc
user='sa'
password='PC#1234'
database='climate'
port='1433'
TDS_Version='8.0'
server='192.168.1.146'
con_string = 'UID=%s;PWD=%s;DATABASE=%s;PORT=%s;TDS=%s;SERVER=%s;' % (
    user, password, database, port, TDS_Version, server)
cnxn=pyodbc.connect(con_string)
cursor=cnxn.cursor()
cursor.execute("select * from mytable")
row=cursor.fetchone()
print row
I got the following error:
Traceback (most recent call last):
File "sql.py", line 15, in <module>
cnxn=pyodbc.connect(con_string)
pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source
name not found, and no default driver specified (0) (SQLDriverConnect)')
I have also installed pymssql and tried to connect to SQL Server with it, using the following Python code:
import pymssql
connection=pymssql.connect(user='sa',password='PC#1234',
host='192.168.1.146',database='climate',as_dict=True)
cursor=connection.cursor()
cursor.execute('select * from mytable;')
rows=cursor.fetchall()
I got the following error:
connection=pymssql.connect(user='sa',password='PC#1234',
host='192.168.1.146',database='climate',as_dict=True)
File "/usr/lib/pymodules/python2.7/pymssql.py", line 607, in connect
raise OperationalError, e[0]
pymssql.OperationalError: DB-Lib error message 20009, severity 9:
Unable to connect: Adaptive Server is unavailable or does not exist
Net-Lib error during Operation now in progress Error 115
- Operation now in progress
What is the reason for the "data source name not found" and "Adaptive Server is unavailable" errors?
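For what it's worth, the IM002 error from unixODBC usually means the connection string names no installed driver (and no DSN). A minimal sketch of a DSN-less string for FreeTDS; the driver name and TDS version here are assumptions and must match what is registered in your odbcinst.ini:

```python
# Sketch only: 'FreeTDS' and the TDS version must match your odbcinst.ini.
def make_conn_str(server, database, user, password,
                  port='1433', tds_version='7.2'):
    # A DSN-less pyodbc connection string. The DRIVER keyword is what the
    # string in the question is missing; without it (or a DSN entry in
    # odbc.ini), unixODBC raises IM002 "Data source name not found".
    return ('DRIVER={FreeTDS};SERVER=%s;PORT=%s;DATABASE=%s;'
            'UID=%s;PWD=%s;TDS_Version=%s;'
            % (server, port, database, user, password, tds_version))

con_string = make_conn_str('192.168.1.146', 'climate', 'sa', 'PC#1234')
# import pyodbc
# cnxn = pyodbc.connect(con_string)  # once the driver is registered
```

The pymssql "Adaptive Server is unavailable" error, by contrast, usually points at the network side (wrong host/port or TCP/IP not enabled on the server) rather than at the driver setup.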


pyodbc connection within SQL Server 2017

This is my first post on Stack Overflow, so please bear with me if I'm doing something wrong.
I'm currently trying to achieve a Python script which reads data from a CSV file, transforms it into a JSON Object and stores it in an SQL Server table. Everything is working fine if I do this directly in Python, I have a fully working Python script which reads the CSV and stores the data via pyodbc on SQL Server.
Unfortunately, when I try to use a similar script in sp_execute_external_script I get an error that the connection could not get established.
My T-SQL code:
DECLARE @Python as nvarchar(max)
SET @Python = N'
import pyodbc
import datetime as datetime
conn_str = (
r''DRIVER={ODBC Driver 17 for SQL Server};''
r''SERVER=xxx.xxx.xxx.xxx;''
r''DATABASE=xxxx;''
r''UID=xxxxxx;''
r''PWD=xxxx;''
)
cnxn = pyodbc.connect(conn_str)
'
EXEC sp_execute_external_script
@language = N'Python',
@script = @Python,
@input_data_1 = N'',
@input_data_1_name = N''
Error message
Msg 39004, Level 16, State 20, Line 2: An unexpected
'Python' script error occurred during execution of 'sp_execute_external_script'
with HRESULT 0x80004004. Msg 39019, Level 16, State 2, Line 2:
External script error:
Error in execution. Check the output for more information. Traceback
(most recent call last): File "<string>", line 5, in <module> File
"E:\Program Files\Microsoft SQL
Server\MSSQL14.CWDEV\MSSQL\ExtensibilityData\CWDEV01\6F73A5E0-4F82-4FEA-A5DA-7A8E7D8778D2\sqlindb.py",
line 53, in transform
cnxn = pyodbc.connect(conn_str) pyodbc.Error: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Named Pipes Provider: Could not open a connection to SQL Server [1326].
(1326) (SQLDriverConnect)')
SqlSatelliteCall error: Error in execution. Check the output for more
information. STDOUT message(s) from the external script:
SqlSatelliteCall function failed. Please see the console output for
more information. Traceback (most recent call last): File
"E:\Program Files\Microsoft SQL
Server\MSSQL14.CWDEV\PYTHON_SERVICES\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py",
line 406, in rx_sql_satellite_call
rx_native_call("SqlSatelliteCall", params) File "E:\Program Files\Microsoft SQL
Server\MSSQL14.CWDEV\PYTHON_SERVICES\lib\site-packages\revoscalepy\RxSerializable.py",
line 291, in rx_native_call
ret = px_call(functionname, params) RuntimeError: revoscalepy function failed.
At the moment I'm just trying to make a connection to the destination server. By the way, the code is not running on the destination server; it will be executed on a different server. My idea is to use sp_execute_external_script with Python on a particular SQL Server to migrate data out of flat files and store it on different destination SQL Servers.
Any advice will be highly appreciated.
Many thanks
I figured it out.
There was an outgoing rule in the Windows firewall that blocked network access for the pyodbc connection.
firewall outgoing rules
After disabling it, everything worked like a charm.
Firewall rules for Machine Learning Services are described here:
https://learn.microsoft.com/de-de/sql/machine-learning/security/firewall-configuration?view=sql-server-2016
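For anyone hitting the same wall: before touching firewall rules, it can help to verify from inside the external-script runtime whether the destination is reachable at all. A minimal TCP probe (host and port below are placeholders):

```python
import socket

def can_connect(host, port, timeout=3.0):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. inside sp_execute_external_script:
# print(can_connect('xxx.xxx.xxx.xxx', 1433))
```

If this returns False from the script but True from a normal Python session on the same machine, a firewall rule scoped to the SQL Server launchpad accounts is a likely suspect.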
Regards,

Using Python to connect to Impala database (thriftpy error)

What I'm trying to do is very basic: connect to an Impala db using Python:
from impala.dbapi import connect
conn = connect(host='impala', port=21050, auth_mechanism='PLAIN')
I'm using the Impyla package to do so. I got this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/thriftpy/transport/socket.py", line 96, in open
self.sock.connect(addr)
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alaaeddine/PycharmProjects/test/data_test.py", line 3, in <module>
conn = connect(host='impala', port=21050, auth_mechanism='PLAIN')
File "/usr/local/lib/python3.6/dist-packages/impala/dbapi.py", line 147, in connect
auth_mechanism=auth_mechanism)
File "/usr/local/lib/python3.6/dist-packages/impala/hiveserver2.py", line 758, in connect
transport.open()
File "/usr/local/lib/python3.6/dist-packages/thrift_sasl/__init__.py", line 61, in open
self._trans.open()
File "/usr/local/lib/python3.6/dist-packages/thriftpy/transport/socket.py", line 104, in open
message="Could not connect to %s" % str(addr))
thriftpy.transport.TTransportException: TTransportException(type=1, message="Could not connect to ('impala', 21050)")
I also tried the Ibis package, but it failed with the same thriftpy-related error.
On Windows, using DBeaver, I could connect to the database using the official Cloudera JDBC connector. My questions are:
Should I pass my JDBC connector as a parameter in my connect code? I have searched but could not find anything pointing in this direction.
Should I try something other than the Ibis and Impyla packages? I have run into a lot of version and dependency issues when using them. If yes, what would you recommend as alternatives?
Thanks!
Solved:
I used pyhive package instead of Ibis/Impyla. Here's an example:
#import hive from pyhive
from pyhive import hive
#establish the connection to the db
conn = hive.Connection(host='host_IP_addr', port='conn_port', auth='auth_type', database='my_db')
#prepare the cursor for the queries
cursor = conn.cursor()
#execute a query
cursor.execute("SHOW TABLES")
#navigate and display the results
for table in cursor.fetchall():
    print(table)
Your impala domain name must not be resolving. Are you able to do nslookup impala in a command prompt? If you're using Docker, the service name in docker-compose needs to be "impala", or you can use the "extra_hosts" option. Or you can always add it to /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts) as 127.0.0.1 impala.
Also, try 'NOSASL' instead of 'PLAIN'; sometimes that works better with security turned off.
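The socket.gaierror in the traceback points at name resolution rather than at Impala itself, so a quick check from Python (a sketch; 'impala' is the hostname from the question):

```python
import socket

def resolves(hostname):
    # True if the hostname resolves to an IP address, False on gaierror.
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# resolves('impala') must be True before connect(host='impala', ...) can work
```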
This is a simple method: connecting to Impala through the impala shell from Python.
import commands  # Python 2 only; removed in Python 3

query1 = "select * from table_name limit 10"
impalad = 'hostname'
port = '21000'
result_string = 'impala-shell -i "' + impalad + ':' + port + '" -k -B --delimited -q "' + query1 + '"'
status, output = commands.getstatusoutput(result_string)
if status == 0:
    print output
else:
    print "Error encountered while executing HiveQL queries."
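Note that the commands module above is Python 2 only and no longer exists in Python 3. A rough Python 3 equivalent of the same approach, using subprocess (hostname and port are placeholders, as in the original):

```python
import subprocess

def build_impala_cmd(query, impalad='hostname', port='21000'):
    # Assemble the same impala-shell command line as the snippet above.
    return 'impala-shell -i "%s:%s" -k -B --delimited -q "%s"' % (
        impalad, port, query)

def run_impala_query(query, impalad='hostname', port='21000'):
    # Runs the query through impala-shell; returns (exit_status, output).
    return subprocess.getstatusoutput(build_impala_cmd(query, impalad, port))

# status, output = run_impala_query("select * from table_name limit 10")
```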

How do I access AS/400 using SQLAlchemy?

Short version: Please tell me how to connect to AS/400s via SQLAlchemy.
Long version
My ultimate goal is to join data from SQL Server and AS/400 to be displayed in a Flask Python application. My approach has been to get the data from each database into Pandas dataframes, which can then be joined and output as JSON. If anyone has a better approach, feel free to leave a comment. The problem with the way I'm trying to do this is that Pandas.read_sql_query() relies on SQLAlchemy, and getting SQLAlchemy to work with AS/400 is proving quite difficult.
The AS/400 is version 7.2, though another I will likely try to connect to is version 5.1.
I'm trying to access it from my computer, which is running Windows 7 and has i Access 7.1, Python 2.7, and Python modules including pyodbc and ibm_db_sa.
Without sqlalchemy, pyodbc works just fine:
CONNECTION_STRING = (
"driver={iSeries Access ODBC Driver};"
"system=ip_address;"
"database=database_name;"
"uid=username;"
"pwd=password;"
)
pyodbc.connect(CONNECTION_STRING)
# Queries work fine after this.
I've read these resources, among others, and tried to apply their techniques:
https://pypi.org/project/ibm_db_sa/
Connecting to IBM AS400 server for database operations hangs
SqlAlchemy equivalent of pyodbc connect string using FreeTDS
Below are some of the failed attempts and corresponding error messages that I've collected. I don't know what to put for the first part ("something+something//..."), which port to specify (446? 8471? something else? nothing?), whether to use the server's name or IP address, or whether to use the connection-string style argument for create_engine(), so I've just been trying every combination I can think of. I tried modifying the AS400Dialect_pyodbc class as suggested in the second link above, then reran some of the failed attempts. I may keep trying things, but I'm just spinning my wheels at this point.
from sqlalchemy import create_engine
CONNECTION_STRING = (
"driver={iSeries Access ODBC Driver};"
"system=ip_address;"
"database=database_name;"
"uid=username;"
"pwd=password;"
)
create_engine('ibm_db_sa+pyodbc://username:password@ip_address:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.InterfaceError
(pyodbc.InterfaceError) ('IM002', u'[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') (Background on this error at: http://sqlalche.me/e/rvf5)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 43, in
create_engine('ibm_db_sa://username:password@ip_address:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30061N The database alias or database name "database_name " was not found at the remote node. SQLSTATE=08004\r SQLCODE=-30061 (Background on this error at: http://sqlalche.me/e/e3q8)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 43, in
create_engine('ibm_db_sa://username:password@server_name:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL1336N The remote host "server_name" was not found. SQLSTATE=08001\r SQLCODE=-1336 (Background on this error at: http://sqlalche.me/e/e3q8)
create_engine('ibm_db_sa://username:password@ip_address:446/server_name.database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30061N The database alias or database name "server_name.database_name " was not found at the remote node. SQLSTATE=08004\r SQLCODE=-30061 (Background on this error at: http://sqlalche.me/e/e3q8)
create_engine('db2+ibm_db://username:password@ip_address:446/server_name.database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30061N The database alias or database name "server_name.database_name " was not found at the remote node. SQLSTATE=08004\r SQLCODE=-30061 (Background on this error at: http://sqlalche.me/e/e3q8)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 45, in
create_engine('db2+ibm_db://username:password@ip_address:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30061N The database alias or database name "database_name " was not found at the remote node. SQLSTATE=08004\r SQLCODE=-30061 (Background on this error at: http://sqlalche.me/e/e3q8)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 45, in
create_engine('db2+ibm_db://username:password@ip_address/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "ip_address". Communication function detecting the error: "connect". Protocol specific error code(s): "10061", "", "". SQLSTATE=08001\r SQLCODE=-30081 (Background on this error at: http://sqlalche.me/e/e3q8)
create_engine('db2+ibm_db://username:password@server_name:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL1336N The remote host "server_name" was not found. SQLSTATE=08001\r SQLCODE=-1336 (Background on this error at: http://sqlalche.me/e/e3q8)
create_engine('db2+ibm_db://username:password@server_name/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL1336N The remote host "server_name" was not found. SQLSTATE=08001\r SQLCODE=-1336 (Background on this error at: http://sqlalche.me/e/e3q8)
create_engine('db2+pyodbc://username:password@ip_address:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.InterfaceError
(pyodbc.InterfaceError) ('IM002', u'[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') (Background on this error at: http://sqlalche.me/e/rvf5)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 45, in
create_engine('db2://username:password@ip_address:446/database_name').connect()
Exception has occurred: sqlalchemy.exc.OperationalError
(ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30061N The database alias or database name "database_name " was not found at the remote node. SQLSTATE=08004\r SQLCODE=-30061 (Background on this error at: http://sqlalche.me/e/e3q8)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 45, in
quoted = urllib.quote_plus(CONNECTION_STRING)
create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(quoted)).connect()
Unable to open 'hashtable_class_helper.pxi': File not found
(file:///c:/git/dashboards/pandas/_libs/hashtable_class_helper.pxi).
quoted = urllib.quote_plus(CONNECTION_STRING)
create_engine('ibm_db_sa:///?odbc_connect={}'.format(quoted)).connect()
Exception has occurred: sqlalchemy.exc.InterfaceError
(ibm_db_dbi.InterfaceError) ibm_db_dbi::InterfaceError: connect expects the first five arguments to be of type string or unicode (Background on this error at: http://sqlalche.me/e/rvf5)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 43, in
quoted = urllib.quote_plus(CONNECTION_STRING)
create_engine('ibm_db:///?odbc_connect={}'.format(quoted)).connect()
Exception has occurred: sqlalchemy.exc.NoSuchModuleError
Can't load plugin: sqlalchemy.dialects:ibm_db
quoted = urllib.quote_plus(CONNECTION_STRING)
create_engine('db2:///?odbc_connect={}'.format(quoted)).connect()
Exception has occurred: sqlalchemy.exc.InterfaceError
(ibm_db_dbi.InterfaceError) ibm_db_dbi::InterfaceError: connect expects the first five arguments to be of type string or unicode (Background on this error at: http://sqlalche.me/e/rvf5)
File "C:\Git\dashboards\web_app\pandas db2 test.py", line 45, in
quoted = urllib.quote_plus(CONNECTION_STRING)
create_engine('db2+ibm_db:///?odbc_connect={}'.format(quoted)).connect()
Exception has occurred: sqlalchemy.exc.InterfaceError
(ibm_db_dbi.InterfaceError) ibm_db_dbi::InterfaceError: connect expects the first five arguments to be of type string or unicode (Background on this error at: http://sqlalche.me/e/rvf5)
quoted = urllib.quote_plus(CONNECTION_STRING)
create_engine('db2+ibm_db_sa:///?odbc_connect={}'.format(quoted)).connect()
Exception has occurred: sqlalchemy.exc.NoSuchModuleError
Can't load plugin: sqlalchemy.dialects:db2.ibm_db_sa
I finally got it working, though it's a bit awkward. I created a blank file in my project to appease this message that I was receiving in response to one of the attempts shown in my question:
Unable to open 'hashtable_class_helper.pxi': File not found (file:///c:/git/dashboards/pandas/_libs/hashtable_class_helper.pxi).
(My project folder is C:/Git/dashboards, so I created the rest of the path.)
With that file present, the code below now works for me. engine.connect() works, but I ran an actual query for further verification. For the record, it seems to work regardless of whether the ibm_db_sa module is modified as suggested in one of the links in my question, so I would recommend leaving that module alone. Note that although they aren't imported directly, you need these modules installed: pyodbc, ibm_db_sa, and possibly future (I forget).
import urllib
import pandas as pd
from sqlalchemy import create_engine
CONNECTION_STRING = (
"driver={iSeries Access ODBC Driver};"
"system=ip_address;"
"database=database_name;"
"uid=username;"
"pwd=password;"
)
SQL= """\
SELECT
MPBASE AS BASEPA,
COALESCE(SUM(MPQTY), 0) AS PWIP
FROM FUTMODS.MPPROD
WHERE MPOPT <> '*'
GROUP BY MPBASE
"""
quoted = urllib.quote_plus(CONNECTION_STRING)
engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(quoted))
df = pd.read_sql_query(
    SQL,
    engine,
    index_col='basepa'
)
print df
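A side note for anyone on Python 3: urllib.quote_plus moved to urllib.parse.quote_plus; the quoting step itself is otherwise unchanged. A sketch of just that part:

```python
import urllib.parse

CONNECTION_STRING = (
    "driver={iSeries Access ODBC Driver};"
    "system=ip_address;"
    "database=database_name;"
    "uid=username;"
    "pwd=password;"
)

# On Python 3, quote_plus lives in urllib.parse instead of urllib
quoted = urllib.parse.quote_plus(CONNECTION_STRING)
# engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect=%s' % quoted)
```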

Python pyodbc cursor execution fails on Teradata

I have a Python script which runs successfully from my Windows workstation, and I am trying to migrate it to a Unix server. The script connects to a Teradata database using the pyodbc package and executes a bunch of queries. When it is executed from the server, it triggers the following error message:
Error: ('HY000', 'The driver did not supply an error!')
I am able to consistently reproduce the error with the following code snippet executed on the server:
import pyodbc
oConnexion = pyodbc.connect("Driver={Teradata};DBCNAME=myserver;UID=myuser;PWD=mypassword", autocommit=True)
print("Connected")
oCursor = oConnexion.cursor()
oCursor.execute("select 1")
print("Success")
Configuration:
Python 3.5.2
Pyodbc 3.1.2b2
UnixODBC Driver Manager
Teradata 15.10
After enabling ODBC logging and running a simple SELECT query, I noticed the following SQLGetTypeInfo errors (SQLSTATE 24000, invalid cursor state):
Data Type = SQL_VARCHAR
[ODBC][57920][1481847636.278776][SQLGetTypeInfo.c][190]Error: 24000
[ODBC][57920][1481847636.278815][SQLGetTypeInfo.c][168]
Entry:
Statement = 0x1bc69e0
Data Type = Unknown(-9)
[ODBC][57920][1481847636.278839][SQLGetTypeInfo.c][190]Error: 24000
[ODBC][57920][1481847636.278873][SQLGetTypeInfo.c][168]
Entry:
Statement = 0x1bc69e0
Data Type = SQL_BINARY
[ODBC][57920][1481847636.278896][SQLGetTypeInfo.c][190]Error: 24000
Also, trying to list the connection attributes using the following code:
for attr in vars(pyodbc):
    print(attr)
    value = oConnexion.getinfo(getattr(pyodbc, attr))
    print('{:<40s} | {}'.format(attr, value))
Fails with:
SQL_DESCRIBE_PARAMETER
Traceback (most recent call last):
File "test.py", line 28, in <module>
value = oConnexion.getinfo(getattr(pyodbc, attr))
pyodbc.Error: ('IM001', '[IM001] [unixODBC][Driver Manager]Driver does not support this function (0) (SQLGetInfo)')
Upgrading to the latest (unreleased) version of pyodbc (v4) solved the issue.
https://github.com/mkleehammer/pyodbc/tree/v4

How do I display the hosts inside the Google Chrome sqlite3 "cookie" database using Python

I'm using Python to access the Chrome "Cookies" sqlite3 database to retrieve the host keys, but I am getting the error below.
import sqlite3
conn = sqlite3.connect(r"C:\Users\tikka\AppData\Local\Google\Chrome\User Data\Default\Cookies")
cursor = conn.cursor()
cursor.execute("select host_key from cookies")
results = cursor.fetchall()
print results
conn.close()
Error
Traceback (most recent call last):
File "C:\Python27\cookies.py", line 4, in <module>
cursor.execute("select host_key from cookies")
DatabaseError: malformed database schema (is_transient) - near "where": syntax error
Thanks to the link provided by alecxe, I was able to fix it by upgrading the sqlite3 version from 3.6.21 to 3.9.2. I upgraded by downloading the new version from this site and placing the DLL in C:\Python27\DLLs.
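One related pitfall worth mentioning: Chrome keeps the Cookies database locked while the browser is running, so it is common to query a copy of the file instead. A sketch (the path shown is the one from the question):

```python
import os
import shutil
import sqlite3
import tempfile

def query_cookie_hosts(cookie_db_path):
    # Copy the database first: querying a copy avoids "database is locked"
    # errors while Chrome itself still has the original file open.
    tmp_copy = os.path.join(tempfile.mkdtemp(), 'Cookies')
    shutil.copy2(cookie_db_path, tmp_copy)
    conn = sqlite3.connect(tmp_copy)
    try:
        return [row[0] for row in conn.execute("select host_key from cookies")]
    finally:
        conn.close()

# hosts = query_cookie_hosts(
#     r"C:\Users\tikka\AppData\Local\Google\Chrome\User Data\Default\Cookies")
```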
