I'm trying to talk to an AS400 in Python. The goal is to use SQLAlchemy, but when I couldn't get that to work I stepped back to a more basic script using just ibm_db instead of ibm_db_sa.
import ibm_db
dbConnection = ibm_db.pconnect("DATABASE=myLibrary;HOSTNAME=1.2.3.4;PORT=8471;PROTOCOL=TCPIP;UID=username;PWD=password", "", "") #this line is where it hangs
print ibm_db.conn_errormsg()
The problem seems to be the port. If I use the 50000 I see in all the examples, I get an error. If I use 446, I get an error. The baffling part is this: if I use 8471, which IBM says to do, I get no error, no timeout, no response whatsoever. I've left the script running for over twenty minutes, and it just sits there, doing nothing. It's active, because I can't use the command prompt at all, but it never gives me any feedback of any kind.
This same 400 is used by the company I work for every day, for logging, emailing, and (a great deal of) database usage, so I know it works. The software we use, which talks to the database behind the scenes, runs just fine on my machine. That tells me my driver is good, the network settings are right, and so on. I can even telnet into the 400 from here.
I'm on the SQLAlchemy and ibm_db email lists, and have been communicating with them for days about this problem. I've also googled it so much I'm starting to run out of un-visited links in my search results. No one seems to have the problem of the connection hanging indefinitely. If there's anything I can try in Python, I'll try it. I don't deal with the 400 directly, but I can ask the guy who does to check/configure whatever I need to. As I said though, several workstations can talk to the 400's database with no problems, and queries run against the library I want to access work fine, if run from the 400 itself. If anyone has any suggestions, I'd greatly appreciate hearing them. Thanks!
The README for ibm_db_sa only lists DB2 for Linux/Unix/Windows in the "Supported Database" section. So it most likely doesn't work for DB2 for i, at least not right out of the box.
Since you've stated you have IBM System i Access for Windows, I strongly recommend just using one of the drivers that comes with it (ODBC, OLEDB, or ADO.NET, as @Charles mentioned).
Personally, I always use ODBC, with either pyodbc or pypyodbc. Either one works fine. A simple example:
import pyodbc

connection = pyodbc.connect(
    driver='{iSeries Access ODBC Driver}',
    system='11.22.33.44',
    uid='username',
    pwd='password')
c1 = connection.cursor()
c1.execute('select * from qsys2.sysschemas')
for row in c1:
    print row
Now, one of SQLAlchemy's connection methods is pyodbc, so I would think that if you can establish a connection using pyodbc directly, you can somehow configure SQLAlchemy to do the same. But I'm not an SQLAlchemy user myself, so I don't have example code for that.
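For what it's worth, here is an untested sketch of how that hookup might look, assuming the ibm_db_sa dialect is installed: percent-encode a raw ODBC connection string and hand it to SQLAlchemy via pyodbc's odbc_connect parameter. The driver name and credentials are placeholders from the examples above.

```python
import urllib.parse  # on Python 2, use urllib.quote_plus instead


def make_ibm_i_url(system, uid, pwd):
    """Percent-encode a raw ODBC string and wrap it in an SQLAlchemy URL."""
    odbc_str = (
        "DRIVER={iSeries Access ODBC Driver};"
        "SYSTEM=%s;UID=%s;PWD=%s" % (system, uid, pwd)
    )
    return "ibm_db_sa+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(odbc_str)


url = make_ibm_i_url("11.22.33.44", "username", "password")
# from sqlalchemy import create_engine
# engine = create_engine(url)  # needs ibm_db_sa and the ODBC driver installed
```

The engine lines are commented out because they require the driver stack; the URL construction itself is the part SQLAlchemy cares about.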
UPDATE
I managed to get SQLAlchemy to connect to our IBM i and execute straight SQL queries. In other words, to get it to about the same functionality as simply using PyODBC directly. I haven't tested any other SQLAlchemy features. What I did to set up the connection on my Windows 7 machine:
Install ibm_db_sa as an SQLAlchemy dialect
You may be able to use pip for this, but I did it the low-tech way:
Download ibm_db_sa from PyPI.
As of this writing, the latest version is 0.3.2, uploaded on 2014-10-20. It's conceivable that later versions will either be fixed or broken in different ways (so in the future, the modifications I'm about to describe might be unnecessary, or they might not work).
Unpack the archive (ibm_db_sa-0.3.2.tar.gz) and copy the enclosed ibm_db_sa directory into the sqlalchemy\dialects directory.
Modify sqlalchemy\dialects\ibm_db_sa\pyodbc.py
Add the initialize() method to the AS400Dialect_pyodbc class
The point of this is to override the method of the same name in DB2Dialect, which AS400Dialect_pyodbc inherits from. The problem is that DB2Dialect.initialize() tries to set attributes dbms_ver and dbms_name, neither of which is available or relevant when connecting to IBM i using PyODBC (as far as I can tell).
Add the module-level name dialect and set it to the AS400Dialect_pyodbc class
Code for the above modifications should go at the end of the file, and look like this:
    def initialize(self, connection):
        super(DB2Dialect, self).initialize(connection)

dialect = AS400Dialect_pyodbc
Note the indentation! Remember, the initialize() method needs to belong to the AS400Dialect_pyodbc class, and dialect needs to be global to the module.
Finally, you need to give the engine creator the right URL:
'ibm_db_sa+pyodbc://username:password@host/*local'
(Obviously, substitute valid values for username, password, and host.)
That's it. At this point, you should be able to create the engine, connect to the i, and execute plain SQL through SQLAlchemy. I would think a lot of the ORM stuff should also work at this point, but I have not verified this.
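To make that last step concrete, here's a minimal sketch with placeholder credentials; the engine lines are commented out since they need the modified dialect installed, and the query mirrors the pyodbc example earlier in this answer.

```python
def make_dialect_url(user, pwd, host):
    """Build the URL the modified ibm_db_sa+pyodbc dialect expects."""
    # '*local' refers to the local relational database directory entry on the i
    return "ibm_db_sa+pyodbc://%s:%s@%s/*local" % (user, pwd, host)


url = make_dialect_url("username", "password", "1.2.3.4")

# from sqlalchemy import create_engine
# engine = create_engine(url)
# for row in engine.execute("select schema_name from qsys2.sysschemas"):
#     print(row)
```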
The way to find out what port is needed is to look at the service table entries on the IBM i.
Your IBM i guy can use the iNav GUI or the green screen Work with Service Table Entries (WRKSRVTBLE) command.
You should get a screen like so:
Service Port Protocol
as-admin-http 2001 tcp
as-admin-http 2001 udp
as-admin-https 2010 tcp
as-admin-https 2010 udp
as-central 8470 tcp
as-central-s 9470 tcp
as-database 8471 tcp
as-database-s 9471 tcp
drda 446 tcp
drda 446 udp
The default port for the DB is indeed 8471. Though drda is used for "distributed db" operations.
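If you want to check from the client side which of these ports are actually reachable, a plain TCP probe with a timeout avoids the indefinite hang described in the question. This is a hypothetical helper; the host below is the question's placeholder address, so substitute your own.

```python
import socket


def port_open(host, port, timeout=1.0):
    """Try a TCP connect with a timeout; True if the port accepts connections."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        return s.connect_ex((host, port)) == 0
    except OSError:  # covers timeouts and unreachable-network errors
        return False
    finally:
        s.close()


for service, port in [("as-database", 8471), ("as-database-s", 9471), ("drda", 446)]:
    print(service, port, "reachable" if port_open("1.2.3.4", port) else "not reachable")
```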
Based upon this thread, to use ibm_db to connect to DB2 on an IBM i, you need the IBM Connect product, which is a commercial package that has to be paid for.
This thread suggests using ODBC via the pyodbc module. It also suggests that JDBC via the JT400 toolkit may also work.
Here is an example of working with the AS400, SQLAlchemy, and pandas.
This example takes a bunch of CSV files and inserts them with pandas/SQLAlchemy.
It only works on Windows; on Linux, the i Series ODBC driver segfaults (tested on CentOS 7 and Debian 9 x86_64).
Client is Windows 10.
My as400 version is 7.3
Python is 2.7.14
Installed with pip: pandas, pyodbc, ibm_db_sa, sqlalchemy
You need to install IBM i Access for Windows from ftp://public.dhe.ibm.com/as400/products/clientaccess/win32/v7r1m0/servicepack/si66062/
Additionally, apply the modifications by @John Y to pyodbc.py:
C:\Python27\Lib\site-packages\sqlalchemy\dialects\ibm_db_sa\pyodbc.py
Change line 99 to
pyodbc_driver_name = "IBM i Access ODBC Driver"
The ODBC driver changed its name.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import glob

import pandas as pd
from sqlalchemy import create_engine

csvfiles = glob.glob("c:/Users/nahum/Documents/OUT/*.csv")
df_csvfiles = pd.DataFrame(csvfiles)
engine = create_engine('ibm_db_sa+pyodbc://DB2_USER:PASSWORD@IP_SERVER/*local')
for index, row in df_csvfiles.iterrows():
    datastore2 = pd.read_csv(str(row[0]), delimiter=',', header=[0], skipfooter=3)
    datastore2.to_sql('table', engine, schema='SCHEMA', chunksize=1000,
                      if_exists='append', index=False)
Hope it helps.
If you don't need Pandas/SQLAlchemy, just use pyodbc as suggested in John Y's answer. Otherwise, you can try doing what worked for me, below. It's taken from my answer to my own, similar question, which you can check out for more detail on what doesn't work (I tried and failed in so many ways before getting it working).
I created a blank file in my project to appease this message that I was receiving:
Unable to open 'hashtable_class_helper.pxi': File not found
(file:///c:/git/dashboards/pandas/_libs/hashtable_class_helper.pxi).
(My project folder is C:/Git/dashboards, so I created the rest of the path.)
With that file present, the code below now works for me. For the record, it seems to work regardless of whether the ibm_db_sa module is modified as suggested in John Y's answer, so I would recommend leaving that module alone. Note that although they aren't imported directly, you need these modules installed: pyodbc, ibm_db_sa, and possibly future (if using Python 2... I forget if it's necessary). If you are using Python 3, you'll need urllib.parse instead of urllib. I also have i Access 7.1 drivers installed on my computer, which probably came into play.
import urllib

import pandas as pd
from sqlalchemy import create_engine

CONNECTION_STRING = (
    "driver={iSeries Access ODBC Driver};"
    "system=ip_address;"
    "database=database_name;"
    "uid=username;"
    "pwd=password;"
)

SQL = "SELECT..."

quoted = urllib.quote_plus(CONNECTION_STRING)
engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(quoted))
df = pd.read_sql_query(
    SQL,
    engine,
    index_col='some column',
)
print df
Related
Since the announcement about XMLA endpoints, I've been trying to figure out how to connect to a URL of the form powerbi://api.powerbi.com/v1.0/myorg/[workspace name] as an SSAS OLAP cube via Python, but I haven't gotten anything to work.
I have a workspace in a premium capacity and I am able to connect to it using DAX Studio as well as SSMS as explained here, but I haven't figured out how to do it with Python. I've tried installing olap.xmla, but I get the following error when I try to use the Power BI URL as the location using either the powerbi or https as the prefix.
import olap.xmla.xmla as xmla
p = xmla.XMLAProvider()
c = p.connect(location="powerbi://api.powerbi.com/v1.0/myorg/[My Workspace]")
[...]
TransportError: Server returned HTTP status 404 (no content available)
I'm sure there are authentication issues involved, but I'm a bit out of my depth here. Do I need to set up an "app" in ActiveDirectory and use the API somehow? How is authentication handled for this kind of connection?
If anyone knows of any blog posts or other resources that demonstrate how to connect to a Power BI XMLA endpoint specifically using Python, that would be amazing. My searching has failed me, but surely I can't be the only one who is trying to do this.
After @Gigga pointed out the connector issue, I went looking for other Python modules that worked with MSOLAP to connect, and found one that I got working!
The module is adodbapi (note the pywin32 prerequisite).
Connecting is as simple as this:
import adodbapi
# Connection string
conn = adodbapi.connect("Provider=MSOLAP.8; \
Data Source='powerbi://api.powerbi.com/v1.0/myorg/My Workspace Name'; \
Initial Catalog='My Data Model'")
# Example query
print('The tables in your database are:')
for name in conn.get_table_names():
print(name)
It authenticated using my Windows credentials by popping up a window like this:
I'm not familiar with olap.xmla or with using Python to connect to OLAP cubes, but I think the problem is with the driver (or connector?) provided in olap.xmla.
The announcement about XMLA endpoints page says that the connection only works with SSMS 18.0 RC1 or later, which is quite new. Same thing with DAX Studio: the version where XMLA connection is supported (2.8.2, Feb 3 2019) is quite fresh.
The latest version of olap.xmla seems to be from August 2013, so it's possible that there's some Microsoft magic behind the Power BI XMLA connection, and that's why it doesn't work with older connectors.
They now have a REST endpoint via which you can execute DAX queries. This could be easier than trying to invoke the XMLA endpoint directly.
https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/execute-queries
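As a hedged sketch of what calling that endpoint might look like: the dataset ID and DAX query below are placeholders, and you'd still need to obtain an Azure AD access token (e.g. via MSAL) with the right Power BI scope before making the POST.

```python
import json


def build_execute_queries_request(dataset_id, dax_query):
    """Assemble the URL and JSON body for the Power BI executeQueries call."""
    url = ("https://api.powerbi.com/v1.0/myorg/datasets/%s/executeQueries"
           % dataset_id)
    body = {"queries": [{"query": dax_query}],
            "serializerSettings": {"includeNulls": True}}
    return url, json.dumps(body)


url, body = build_execute_queries_request(
    "your-dataset-id", "EVALUATE TOPN(10, MyTable)")

# import requests  # then POST with a bearer token obtained via Azure AD:
# requests.post(url, data=body, headers={
#     "Authorization": "Bearer <access-token>",
#     "Content-Type": "application/json"})
```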
I'm struggling to establish a connection to a Paradox database with SQLAlchemy, since the dialect doesn't seem to be featured... Yeah, I know Paradox is outdated, but I need to get it working since my boss runs his own petrol station, which is Paradox-backed. I got it to work with pypyodbc, which wasn't much of a struggle, since I was into VBA for a couple of years and things weren't that strange to start with. Switching to Python made my life much easier with ETL pipelines... At this point I'm trying to source data from multiple sources for business reporting, where I can apply one module only for ETL purposes. Hopefully some of you can reach out with some useful info concerning this matter.
You might be able to do it. One method of accessing SQL Server is through PyODBC, since you have an ODBC driver for Paradox the same process might work. It's been over a decade since I've used Paradox, but IIRC the ODBC drivers work the same. It might be more work than you want to do though.
You would need to extend SQLAlchemy with a new engine. This is the details of the PyODBC driver for SQL Server; you'd need to change the appropriate details, so instead of "mssql+pyodbc" you'd have "paradox+pyodbc".
This is a similar StackOverflow question which is a good starting point.
If you're passing through ODBC strings, this is an example of an ODBC string for SQL Server.
params = urllib.quote_plus("DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password")
and an example of a Paradox connection string (apparently not entirely correct but it explains how to adapt it):
cnxn = pyodbc.connect(r"Driver={{Microsoft Paradox Driver (*.db )}};Fil=Paradox 5.X;DefaultDir={0};Dbq={0}; CollatingSequence=ASCII;")
I no longer have a Paradox database to hand to test this on, so I can't say for sure how much work it would take to get going. If you already have a working ODBC connection then you have half of what you need.
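As a starting point, here is a sketch of the direct pyodbc route. The driver name and Paradox version are the usual Windows defaults from the connection string above, but treat them, along with the data directory and table name, as assumptions to verify against your ODBC administrator.

```python
def make_paradox_conn_str(data_dir):
    """Build a Paradox ODBC connection string pointing at a data directory."""
    return ("Driver={{Microsoft Paradox Driver (*.db )}};"
            "Fil=Paradox 5.X;"
            "DefaultDir={0};Dbq={0};"
            "CollatingSequence=ASCII;").format(data_dir)


conn_str = make_paradox_conn_str(r"C:\petrol\data")

# import pyodbc
# conn = pyodbc.connect(conn_str, autocommit=True)
# for row in conn.cursor().execute("SELECT * FROM sales"):  # 'sales' is hypothetical
#     print(row)
```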
I develop on a Windows machine for a Linux server. I'm using pyodbc to connect to MySQL on Windows and was hoping to use MySQLdb to connect to it on my Linux box. I had thought these both implemented the same API and therefore would be compatible. I was very wrong, and now realize that I'll have to re-write all my code to work on Linux, which will subsequently make it not work on Windows.
Is there another thin abstraction layer that would allow me to write more portable code? I was considering SQLAlchemy, but I'm really just trying to execute SQL statements, so learning an entirely new domain specific language seems cumbersome.
Appreciate any recommendations!
SQLAlchemy allows you to issue statements directly
example from the linked page
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print "username:", row['username']
connection.close()
I've looked over Google Cloud SQL's documentation and various searches, but I can't find out whether it is possible to use SQLAlchemy with Google Cloud SQL, and if so, what the connection URI should be.
I'm looking to use the Flask-SQLAlchemy extension and need the connection string like so:
mysql://username:password@server/db
I saw the Django example, but it appears the configuration uses a different style than the connection string. https://developers.google.com/cloud-sql/docs/django
Google Cloud SQL documentation:
https://developers.google.com/cloud-sql/docs/developers_guide_python
Update
Google Cloud SQL now supports direct access, so the MySQLdb dialect can now be used. The recommended connection via the mysql dialect is using the URL format:
mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>
mysql+gaerdbms has been deprecated in SQLAlchemy since version 1.0
I'm leaving the original answer below in case others still find it helpful.
For those who visit this question later (and don't want to read through all the comments), SQLAlchemy now supports Google Cloud SQL as of version 0.7.8 using the connection string / dialect (see: docs):
mysql+gaerdbms:///<dbname>
E.g.:
create_engine('mysql+gaerdbms:///mydb', connect_args={"instance":"myinstance"})
I have proposed an update to the mysql+gaerdbms:// dialect to support both Google Cloud SQL APIs (rdbms_apiproxy and rdbms_googleapi) for connecting to Cloud SQL from a non-Google App Engine production instance (e.g. your development workstation). The change will also modify the connection string slightly by including the project and instance as part of the string, rather than requiring them to be passed separately via connect_args.
E.g.
mysql+gaerdbms:///<dbname>?instance=<project:instance>
This will also make it easier to use Cloud SQL with Flask-SQLAlchemy or other extension where you don't explicitly make the create_engine() call.
If you are having trouble connecting to Google Cloud SQL from your development workstation, you might want to take a look at my answer here - https://stackoverflow.com/a/14287158/191902.
Yes,
If you find any bugs in SA+Cloud SQL, please let me know. I wrote the dialect code that was integrated into SQLAlchemy. There's a bit of silly business about how Cloud SQL bubbles up exceptions, so there might be some loose ends there.
For those who prefer PyMySQL over MySQLdb (which is suggested in the accepted answer), the SQLAlchemy connection strings are:
For Production
mysql+pymysql://<USER>:<PASSWORD>@/<DATABASE_NAME>?unix_socket=/cloudsql/<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
Please make sure to
Add the SQL instance to your app.yaml:
beta_settings:
cloud_sql_instances: <PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
Enable the SQL Admin API as it seems to be necessary:
https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview
For Local Development
mysql+pymysql://<USER>:<PASSWORD>@localhost:3306/<DATABASE_NAME>
given that you started the Cloud SQL Proxy with:
cloud_sql_proxy -instances=<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>=tcp:3306
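Putting both variants side by side, here is a small illustrative helper (placeholders as above; nothing here is specific to Flask beyond the commented config line):

```python
def cloud_sql_uri(user, pwd, db, instance_connection_name, local=False):
    """Build the SQLAlchemy URI for Cloud SQL via PyMySQL.

    Production connects through the unix socket App Engine exposes;
    local development goes through the Cloud SQL Proxy on TCP 3306.
    """
    if local:
        return "mysql+pymysql://%s:%s@localhost:3306/%s" % (user, pwd, db)
    return ("mysql+pymysql://%s:%s@/%s?unix_socket=/cloudsql/%s"
            % (user, pwd, db, instance_connection_name))


# e.g. app.config['SQLALCHEMY_DATABASE_URI'] = cloud_sql_uri(
#          "user", "pass", "mydb", "project:region:instance")
```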
It is doable, though I haven't used Flask at all, so I'm not sure about establishing the connection through that. I got it working through Pyramid and submitted a patch to SQLAlchemy (possibly to the wrong repo) here:
https://bitbucket.org/sqlalchemy/sqlalchemy/pull-request/2/added-a-dialect-for-google-app-engines
That has since been replaced and accepted into SQLAlchemy as
http://www.sqlalchemy.org/trac/ticket/2484
I don't think it's made its way into a release yet, though.
There are some issues with Google SQL throwing different exceptions so we had issues with things like deploying a database automatically. You also need to disable connection pooling using NullPool as mentioned in the second patch.
We've since moved to using the datastore through NDB, so I haven't followed the progress of these fixes for a while.
PostgreSQL, pg8000 and flask_sqlalchemy
Adding information in case someone is on the lookout how to use flask_sqlalchemy with PostgreSQL: Using pg8000 as driver, the working connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>
I'm trying to connect to a Sybase database in Python (using the python-sybase DBAPI and sqlalchemy module), and I'm currently receiving the following error:
ct_connect(): directory service layer: internal directory control layer error: There was an error encountered while binding to the directory service
Here's the code:
import sqlalchemy
connect_url = sqlalchemy.engine.url.URL(drivername='pysybase', username='read_only', password='*****', host='hostname', port=9000, database='tablename', query=None)
db = sqlalchemy.create_engine(connect_url)
connection = db.connect()
I've also tried to connect without sqlalchemy - ie, just importing the Python Sybase module directly and attempting to connect, but I still get the same error.
I've done quite a bit of googling and searching here on SO and at the doc sites for each of the packages I'm using. One common suggestion was to verify the DSN settings, as that's what's causing ct_connect() to trip up, but I am able to connect to and view the database in my locally-installed copy of DBArtisan just fine, and I believe that uses the same DSN.
Perhaps I should attempt to connect in a way without a DSN? Or is there something else I'm missing here?
Any ideas or feedback are much appreciated, thank you!
I figured out the issue for anyone else who might be having a similar problem.
Apparently, even though I had valid entries for the hostname in my sql.ini file and DSN table, Sybase was not reading it correctly - I had to open DSEdit (one of the tools that comes with Sybase) and re-enter the server/hostname info.