Connect to a Sybase database in Python without a DSN? - python

I'm trying to connect to a Sybase database in Python (using the python-sybase DBAPI and sqlalchemy module), and I'm currently receiving the following error:
ct_connect(): directory service layer: internal directory control layer error: There was an error encountered while binding to the directory service
Here's the code:
import sqlalchemy
connect_url = sqlalchemy.engine.url.URL(
    drivername='pysybase',
    username='read_only',
    password='*****',
    host='hostname',
    port=9000,
    database='tablename',
    query=None)
db = sqlalchemy.create_engine(connect_url)
connection = db.connect()
I've also tried to connect without sqlalchemy - i.e., just importing the Python Sybase module directly and attempting to connect - but I still get the same error.
I've done quite a bit of googling and searching here on SO and at the doc sites for each of the packages I'm using. One common suggestion was to verify the DSN settings, as that's what's causing ct_connect() to trip up, but I am able to connect to and view the database in my locally-installed copy of DBArtisan just fine, and I believe that uses the same DSN.
Perhaps I should attempt to connect in a way without a DSN? Or is there something else I'm missing here?
Any ideas or feedback are much appreciated, thank you!

I figured out the issue for anyone else who might be having a similar problem.
Apparently, even though I had valid entries for the hostname in my sql.ini file and DSN table, Sybase was not reading it correctly - I had to open DSEdit (one of the tools that comes with Sybase) and re-enter the server/hostname info.

Related

pyodbc + PostgreSQL ODBC: connection string works, DSN doesn't

I am trying to connect to a Postgres DB built for a Django app. I can connect fine on Windows, but when we moved it over to a Linux server for production it stopped working. I tracked it down to pyodbc not working, so in a separate script I have been trying to get a connection working, with no luck. I'm pretty sure the Linux server is running Red Hat (yum is the package manager, but I can double-check if it matters).
Here are some of the things I have tried:
installed unixODBC-devel
added a DSN to the per-user file /home/localUsername/.odbc.ini as follows:
[DSNName]
Description=Postgres Connection to Database
Driver=/usr/pgsql-10/lib/psqlodbc.so
Server=servername
Database=dbname
PWD=pass
UID=username
Running odbcinst -q -d returns:
[PostgreSQL]
python script I have tried (although using interpreter for now)
con = odbc.connect("DSN=DSNName")
con = odbc.connect("Driver={PostgreSQL};Uid=username;Pwd=pass;Server=servername;Port=5432")
con = odbc.connect("Driver={PostgreSQL Unicode(x64)};Uid=username;Pwd=pass;Server=servername;Port=5432")
I get one of three errors depending on which driver I try:
For the Driver using Unicode(x32) I get:
pyodbc.Error ('01000', "[01000] [unixODBC][Driver Manager]can't open lib 'PostgreSQL Unicode(x32)' : file not found ...
I figure that means this driver is not installed, which is fine.
For the DSN approach I get:
pyodbc.OperationalError: ('08001', '[08001] FATAL: role:"localUsername" does not exists\n (101) (SQLDriverConnect)')
This second error makes me think (maybe incorrectly) that it is trying to use my localUsername to authenticate to Postgres, when I want to use a special admin username that was set up on the host for now.
For the third option (PostgreSQL):
pyodbc.OperationalError 08001 FATAL: database "dbname" does not exist
I don't understand why that would be. My first thought is that Linux wants to use a different port for the connection. Locally on Windows I can use port 5432 and it worked fine. So I'm at a loss as to how to get it to find the DB, assuming the rest is working okay.
If you need additional details let me know and I'll try to add them.
Edit:
Have python (and Django) on one server. DB is on another.
Tried running psql -h OSServername -U 'username' and got the same role/DB-not-found errors. I feel like I need something after OSServername, like 'OSServername/pgAdminServer', but that didn't work.
Here, the DB 'username' is found by right-clicking one of the DB server names inside pgAdmin and selecting Properties. Are the server names inside pgAdmin different, and do I need to somehow use the pgAdmin server name as part of the connection string?
As the comments suggest, starting with the psql -h command seems like a good place to start, as it removes the Python complexity. Once I can get that command working, I can probably fix the rest. What do I type when my Linux server name (host name) is 'LinuxName', the pgAdmin server is 'pgAdminServer', the actual DB is named 'dbName', and the pgAdmin username is 'username'? 'dbName' has an owner 'owner', which is different from both the pgServer username and the Linux username I am signed in as. I also validated that 'pgAdminServer' shows port 5432, so that shouldn't be the issue.
Edit 2:
I got the pyodbc.connect('Driver={PostgreSQL};Server=servNm;Uid=uid;pwd=pwd;Database=db') to work.
Now just need the last step for the DSN approach. Your dump_dsn worked to find a typo in my dsn file (.odbc.ini in my local home directory). So that helped. Still not finding the DB.
The file /etc/odbcinst.ini lists the following drivers; I have tried all three in my DSN file:
/usr/pgsql-10/lib/psqlodbc.so
/usr/pgsql-10/lib/psqlodbca.so
/usr/pgsql-10/lib/psqlodbcw.so
Here is the info again from my .odbc.ini file in /home/user/.odbc.ini:
The variables servNm, uid, db, and pwd match exactly those in my now-working pyodbc.connect() string.
[DSNName]
Description=Postgres Connection to Database
Driver=/usr/pgsql-10/lib/psqlodbc.so
Server=servNm
CommLog=0
Debug=0
Fetch=100
UniqueIndex=1
UseDeclareFetch=0
Database=db
UID=uid
Username=uid
PWD=pwd
ReadOnly=0
Deleting and re-creating the ~/.odbc.ini file appears to have resolved the issue. This makes us suspect that there were one or more unusual characters in the previous version of that file that were causing strange behaviour.
One possible source of such troublesome (and sometimes even invisible!) characters is when copying text from the web. If we copy the following from our web browser …
DRIVER=PostgreSQL Unicode
… and paste it into a blank document in our text editor with the default encoding UTF-8 then everything looks normal. However, if we save that file (as UTF-8) and open it in a hex editor we can see that the space is not a normal space (U+0020) …
… it is a NO-BREAK SPACE (a.k.a. "NBSP", U+00A0, \xc2\xa0 in UTF-8 encoding) so the chances are very good that we would get an error when trying to use that DSN because b'PostgreSQL\xc2\xa0Unicode' is not the same as b'PostgreSQL Unicode'.
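A quick way to catch such hidden characters is to scan the DSN file for bytes outside printable ASCII. Here is a minimal sketch (the file name and demo content are just examples):

```python
# Report any bytes outside printable ASCII (plus tab/LF/CR) in a file,
# e.g. the UTF-8 NO-BREAK SPACE, which appears as the byte pair 0xc2 0xa0.
def find_suspicious_bytes(path):
    hits = []
    with open(path, 'rb') as f:
        for lineno, line in enumerate(f, start=1):
            for col, byte in enumerate(line):
                if byte >= 0x80 or (byte < 0x20 and byte not in (0x09, 0x0a, 0x0d)):
                    hits.append((lineno, col, byte))
    return hits

# Demo: a line "copied from the web" with an NBSP instead of a space.
with open('dsn_check_demo.ini', 'wb') as f:
    f.write(u'DRIVER=PostgreSQL\u00a0Unicode\n'.encode('utf-8'))
for lineno, col, byte in find_suspicious_bytes('dsn_check_demo.ini'):
    print('line %d, column %d: byte 0x%02x' % (lineno, col, byte))
```

Running this over ~/.odbc.ini would have flagged the NBSP immediately, without a hex editor.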
I had the exact same problem. I looked for several solutions, but none worked.
The problem was solved more easily than I thought:
1 - Remove all packages related to the Postgres ODBC driver:
$ sudo apt-get remove odbc-postgresql
2 - Install the two libs, in the order below:
$ sudo apt-get install libcppdb-postgresql0 odbc-postgresql
Enjoy!
Doing so worked perfectly here.

Oracle incorrectly looking in TNS Names: TNS:could not resolve the connect identifier specified

I'm trying to make a connection to an oracle database with cx_Oracle but am getting the following error message:
ORA-12154: TNS:could not resolve the connect identifier specified
I'm using a connection string such as this one:
'xxxx/pw@lonod-com:1221/LNOUND_USER.uk.something.com'
The connection string is definitely correct as it is working from a different computer on the same network. I can also connect to the database when using Oracle SQL Developer, it's simply not working from Python.
I suspect that for some reason it keeps looking for a TNS Name entry, which I am not using. Is there a flag somewhere that could cause cx_Oracle to keep looking for a TNS name entry or what else could be causing this problem?
I have seen this occur if you have a sqlnet.ora configuration file that does not include the EZCONNECT option in the names.directory_path configuration variable. Below are a few ways to check what you are using. You can also test this connection string with SQL*Plus -- if it works with SQL*Plus it will work with cx_Oracle as well.
1) If you have the environment variable TNS_ADMIN set, its value indicates where Oracle searches for configuration files. If not and you have a full Oracle client installed it will look inside $ORACLE_HOME/network/admin
2) If you have a full Oracle client installed you can also use the tnsping utility to determine what Oracle is using and from what configuration files it is reading.
3) If you have a sqlnet.ora file in the location Oracle is searching for configuration files, then look for the names.directory_path= line in the file. If it is found, it needs to look something like this:
names.directory_path = (TNSNAMES, EZCONNECT)
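For reference, the EZCONNECT syntax this enables is just user/password@host:port/service_name, so a DSN-less connect string can be assembled without any tnsnames.ora entry. A sketch, using the placeholder values from the question:

```python
# Build an EZCONNECT-style connect string (no tnsnames.ora lookup needed).
# All credential and host values below are placeholders from the question.
def ezconnect(user, password, host, port, service_name):
    return '{}/{}@{}:{}/{}'.format(user, password, host, port, service_name)

conn_str = ezconnect('xxxx', 'pw', 'lonod-com', 1221, 'LNOUND_USER.uk.something.com')
print(conn_str)
```

The resulting string is exactly the form the question uses, which is why it only resolves when EZCONNECT is listed in names.directory_path.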
Hope that helps!

Connecting to IBM AS400 server for database operations hangs

I'm trying to talk to an AS400 in Python. The goal is to use SQLAlchemy, but when I couldn't get that to work I stepped back to a more basic script using just ibm_db instead of ibm_db_sa.
import ibm_db
dbConnection = ibm_db.pconnect("DATABASE=myLibrary;HOSTNAME=1.2.3.4;PORT=8471;PROTOCOL=TCPIP;UID=username;PWD=password", "", "") #this line is where it hangs
print ibm_db.conn_errormsg()
The problem seems to be the port. If I use the 50000 I see in all the examples, I get an error. If I use 446, I get an error. The baffling part is this: if I use 8471, which IBM says to do, I get no error, no timeout, no response whatsoever. I've left the script running for over twenty minutes, and it just sits there, doing nothing. It's active, because I can't use the command prompt at all, but it never gives me any feedback of any kind.
This same 400 is used by the company I work for every day, for logging, emailing, and (a great deal of) database usage, so I know it works. The software we use, which talks to the database behind the scenes, runs just fine on my machine. That tells me my driver is good, the network settings are right, and so on. I can even telnet into the 400 from here.
I'm on the SQLAlchemy and ibm_db email lists, and have been communicating with them for days about this problem. I've also googled it so much I'm starting to run out of un-visited links in my search results. No one seems to have the problem of the connection hanging indefinitely. If there's anything I can try in Python, I'll try it. I don't deal with the 400 directly, but I can ask the guy who does to check/configure whatever I need to. As I said though, several workstations can talk to the 400's database with no problems, and queries run against the library I want to access work fine, if run from the 400 itself. If anyone has any suggestions, I'd greatly appreciate hearing them. Thanks!
The README for ibm_db_sa only lists DB2 for Linux/Unix/Windows in the "Supported Database" section. So it most likely doesn't work for DB2 for i, at least not right out of the box.
Since you've stated you have IBM System i Access for Windows, I strongly recommend just using one of the drivers that comes with it (ODBC, OLE DB, or ADO.NET, as @Charles mentioned).
Personally, I always use ODBC, with either pyodbc or pypyodbc. Either one works fine. A simple example:
import pyodbc
connection = pyodbc.connect(
    driver='{iSeries Access ODBC Driver}',
    system='11.22.33.44',
    uid='username',
    pwd='password')
c1 = connection.cursor()
c1.execute('select * from qsys2.sysschemas')
for row in c1:
    print row
Now, one of SQLAlchemy's connection methods is pyodbc, so I would think that if you can establish a connection using pyodbc directly, you can somehow configure SQLAlchemy to do the same. But I'm not an SQLAlchemy user myself, so I don't have example code for that.
UPDATE
I managed to get SQLAlchemy to connect to our IBM i and execute straight SQL queries. In other words, to get it to about the same functionality as simply using PyODBC directly. I haven't tested any other SQLAlchemy features. What I did to set up the connection on my Windows 7 machine:
Install ibm_db_sa as an SQLAlchemy dialect
You may be able to use pip for this, but I did it the low-tech way:
Download ibm_db_sa from PyPI.
As of this writing, the latest version is 0.3.2, uploaded on 2014-10-20. It's conceivable that later versions will either be fixed or broken in different ways (so in the future, the modifications I'm about to describe might be unnecessary, or they might not work).
Unpack the archive (ibm_db_sa-0.3.2.tar.gz) and copy the enclosed ibm_db_sa directory into the sqlalchemy\dialects directory.
Modify sqlalchemy\dialects\ibm_db_sa\pyodbc.py
Add the initialize() method to the AS400Dialect_pyodbc class
The point of this is to override the method of the same name in DB2Dialect, which AS400Dialect_pyodbc inherits from. The problem is that DB2Dialect.initialize() tries to set attributes dbms_ver and dbms_name, neither of which is available or relevant when connecting to IBM i using PyODBC (as far as I can tell).
Add the module-level name dialect and set it to the AS400Dialect_pyodbc class
Code for the above modifications should go at the end of the file, and look like this:
    def initialize(self, connection):
        super(DB2Dialect, self).initialize(connection)

dialect = AS400Dialect_pyodbc
Note the indentation! Remember, the initialize() method needs to belong to the AS400Dialect_pyodbc class, and dialect needs to be global to the module.
Finally, you need to give the engine creator the right URL:
'ibm_db_sa+pyodbc://username:password@host/*local'
(Obviously, substitute valid values for username, password, and host.)
That's it. At this point, you should be able to create the engine, connect to the i, and execute plain SQL through SQLAlchemy. I would think a lot of the ORM stuff should also work at this point, but I have not verified this.
The way to find out what port is needed is to look at the service table entries on the IBM i.
Your IBM i guy can use the iNav GUI or the green screen Work with Service Table Entry (WRKSRVTBLE) command
Should get a screen like so:
Service Port Protocol
as-admin-http 2001 tcp
as-admin-http 2001 udp
as-admin-https 2010 tcp
as-admin-https 2010 udp
as-central 8470 tcp
as-central-s 9470 tcp
as-database 8471 tcp
as-database-s 9471 tcp
drda 446 tcp
drda 446 udp
The default port for the DB is indeed 8471. Though drda is used for "distributed db" operations.
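If you want to rule out firewall or routing problems from the Python side before blaming the driver, a plain TCP check against the as-database port can be done with the stdlib (the IP below is a placeholder):

```python
import socket

def port_open(host, port, timeout=5):
    # Return True if a TCP connection to (host, port) succeeds within timeout.
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False
    sock.close()
    return True

# Example (placeholder IP): check the as-database port from the client.
# print(port_open('1.2.3.4', 8471))
```

If this returns False for 8471 but the service table says as-database is on 8471, the problem is network-level, not in ibm_db or SQLAlchemy.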
Based upon this thread, to use ibm_db to connect to DB2 on an IBM i, you need the IBM Connect product, which is a commercial package that has to be paid for.
This thread suggests using ODBC via the pyodbc module. It also suggests that JDBC via the JT400 toolkit may also work.
Here is an example working with AS400, SQLAlchemy, and pandas.
This example takes a bunch of CSV files and inserts them with pandas/SQLAlchemy.
It only works on Windows; on Linux the i Series ODBC driver segfaults (CentOS 7 and Debian 9, x86_64).
Client is Windows 10.
My AS400 version is 7.3.
Python is 2.7.14.
Installed with pip: pandas, pyodbc, ibm_db_sa, sqlalchemy.
You need to install i access for windows from ftp://public.dhe.ibm.com/as400/products/clientaccess/win32/v7r1m0/servicepack/si66062/
Additionally, apply the modifications by @JohnY to pyodbc.py:
C:\Python27\Lib\site-packages\sqlalchemy\dialects\ibm_db_sa\pyodbc.py
Change line 99 to
pyodbc_driver_name = "IBM i Access ODBC Driver"
The ODBC driver changed its name.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import glob
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('ibm_db_sa+pyodbc://DB2_USER:PASSWORD@IP_SERVER/*local')
csvfiles = glob.glob("c:/Users/nahum/Documents/OUT/*.csv")
for csvfile in csvfiles:
    datastore2 = pd.read_csv(csvfile, delimiter=',', header=[0], skipfooter=3)
    datastore2.to_sql('table', engine, schema='SCHEMA', chunksize=1000,
                      if_exists='append', index=False)
Hope it helps.
If you don't need Pandas/SQLAlchemy, just use pyodbc as suggested in John Y's answer. Otherwise, you can try doing what worked for me, below. It's taken from my answer to my own, similar question, which you can check out for more detail on what doesn't work (I tried and failed in so many ways before getting it working).
I created a blank file in my project to appease this message that I was receiving:
Unable to open 'hashtable_class_helper.pxi': File not found
(file:///c:/git/dashboards/pandas/_libs/hashtable_class_helper.pxi).
(My project folder is C:/Git/dashboards, so I created the rest of the path.)
With that file present, the code below now works for me. For the record, it seems to work regardless of whether the ibm_db_sa module is modified as suggested in John Y's answer, so I would recommend leaving that module alone. Note that although they aren't imported directly, you need these modules installed: pyodbc, ibm_db_sa, and possibly future (if using Python 2... I forget if it's necessary). If you are using Python 3, you'll need urllib.parse instead of urllib. I also have i Access 7.1 drivers installed on my computer, which probably came into play.
import urllib
import pandas as pd
from sqlalchemy import create_engine

CONNECTION_STRING = (
    "driver={iSeries Access ODBC Driver};"
    "system=ip_address;"
    "database=database_name;"
    "uid=username;"
    "pwd=password;"
)
SQL = "SELECT..."
quoted = urllib.quote_plus(CONNECTION_STRING)
engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(quoted))
df = pd.read_sql_query(
    SQL,
    engine,
    index_col='some column'
)
print df

Client-side pyodbc error: "Server does not exist or access denied."

I have a python application designed to pull data from a remote database server using pyodbc, then organize and display the data in a spreadsheet. I've had it working fine for several months now, with multiple coworkers in my department using it through a shared network folder.
My connection:
pyodbc.connect('DRIVER={SQL Server};'
               'SERVER=<myServer_name>;'
               'DATABASE=<myDB_name>;'
               'UID=personsUser;'
               'PWD=personsPassword')
A different employee within our same network recently tried to use the program and got this error:
pyodbc.Error: ('08001','[08001][Microsoft][ODBC SQL Server Driver]
[DBNETLIB]SQL Server does not exist or access denied. (17) (SQLDriverConnect)')
It looked like a simple permissions issue to me, so to confirm, I replaced the user ID and password with my own, hardcoded in, but it gave the same error. Furthermore, the same employee can log in and execute queries through SQL Server Management Studio without issue.
Since everyone else in our department can still use the application fine, I know it must be a client-side issue, but I just can't pinpoint the problem. Any input would be greatly appreciated, Thanks!
Updates:
Per flipperPA's answer below, I updated my connection string to include the port:
con = pyodbc.connect('''DRIVER={SQL Server};
SERVER=<myServer_name>;
PORT=1433;
DATABASE=<myDB_name>;
UID=personsUser;
PWD=personsPassword;''')
Unfortunately we still got the same error.
He is running 32-bit Windows 7 on an HP machine, the same setup as the rest of the group, so it shouldn't be an OS-level issue.
He does operate SSMS on the same machine, but I ran through the telnet check just to be sure - no issue there.
I've taught myself the pyodbc API and basic SQL, but I'm still relatively new to the underlying concepts of databases and remote connections. Could you explain the TDS driver a little more?
When including SERVER, I've found you often need to include the PORT as well; this is the most likely problem:
pyodbc.connect('DRIVER={SQL Server};'
               'SERVER=<myServer_name>;'
               'PORT=1433;'
               'DATABASE=<myDB_name>;'
               'UID=personsUser;'
               'PWD=personsPassword')
I connect mostly from Linux, however. Could it be that the other person is connecting from Mac OS X or Linux? If so, they'll need to use the FreeTDS driver (MS provides one as well, but it is flaky at best). If you continue to have problems, make sure you can connect from the machine you're having issues with (unless it's the same machine they can connect from with SSMS):
telnet <myServer_name> 1433
If it connects, you're good, if it hangs on connecting, you're most likely looking at a firewall issue. Good luck!
After talking with a knowledgeable friend I was finally able to figure out my issue!
For some reason, the user's system was configured to connect using named pipes, but the server I was connecting to only had TCP/IP protocol enabled. The solution was to force the application to use TCP/IP by adding "tcp:" to the front of the server name.
The fixed connection string:
pyodbc.connect('''DRIVER={SQL Server};
SERVER=tcp:<myServer_name>;
PORT=1433;
DATABASE=<myDB_name>;
UID=personsUser;
PWD=personsPassword
''')
If it still doesn't work for any of you, you can try referring to the LocalDB (if that's the case) by its pipe address.
If the LocalDB instance name is LocalDBA, in cmd type:
SqlLocalDB info LocalDBA
Get the instance pipe name and then put it in the server name:
conn_str = (
r'DRIVER={SQL Server};'
r'SERVER=np:\\.\pipe\LOCALDB#ECE5B7EE\tsql\query;'
r'PORT=1433;'
r'DATABASE=VT-DE;'
r'trusted_connection=yes;'
)

Unknown database error after creating it (sqlalchemy)

I'm getting the next error while trying to run my Flask app:
sqlalchemy.exc.OperationalError
OperationalError: (_mysql_exceptions.OperationalError) (1049, "Unknown database '/home/gerardo/Documentos/python_web_dev/flask-intro/app2.db'")
so it seems like there is no database.. but I ran the next scripts using sqlalchemy_utils and everything was ok:
engine = create_engine("mysql://root:@localhost/home/gerardo/Documentos/python_web_dev/flask-intro/app2.db")
create_database(engine.url)
but still I get the error..
You have confused MySQL, a database server, with SQLite, a database in a file. You created a SQLite file, but are trying to tell MySQL to connect to it, which makes no sense.
Use the sqlite dialect in the connection string.
You can avoid typing the whole path (and tying the app to that path) by pointing to the file relative to the app's location.
import os
db_path = os.path.realpath(os.path.join(os.path.dirname(__file__), 'app2.db'))
engine = create_engine('sqlite:///{}'.format(db_path))
Consider using Flask-SQLAlchemy rather than trying to manage the database yourself.
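If you want to sanity-check the path logic without SQLAlchemy at all, the stdlib sqlite3 module accepts the same file path directly. A sketch (app2.db is created if it doesn't exist; the table name is just an example):

```python
import os
import sqlite3

# Resolve app2.db relative to this script, falling back to the cwd in a REPL
# where __file__ is not defined.
base = os.path.dirname(os.path.abspath(__file__)) if '__file__' in globals() else os.getcwd()
db_path = os.path.realpath(os.path.join(base, 'app2.db'))

conn = sqlite3.connect(db_path)  # creates the file if it doesn't exist
conn.execute('CREATE TABLE IF NOT EXISTS sanity (id INTEGER PRIMARY KEY)')
conn.commit()
conn.close()
print(os.path.exists(db_path))
```

If this works but the SQLAlchemy engine still fails, the problem is in the connection URL, not the file path.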
