So I'm trying to do a simple connection from Python to SAP HANA with SQLAlchemy, but I'm getting this error:
While my code looks like this:
from sqlalchemy import create_engine, select, Column, Integer, String, Float, Sequence, text
from sqlalchemy.orm import declarative_base, Session, sessionmaker
engine = create_engine('hana+hdbcli://username:password@host/tenant_db_name', echo=True, future=True)
print("connected")
with engine.connect() as conn:
    result = conn.execute(text("select 'hello world'"))
    print(result.all())
The error it is giving me is correct: I do not have my tenant database on port 30013, I have it on 32015.
How do I fix this?
You can give the port directly in the connection string. The connection string that I am using typically looks something like this:
connection_string = 'hana://%s:%s@%s:%s' % (hdb_user, hdb_password, hdb_host, hdb_port)
You can find usage examples in this and this Jupyter Notebook. Further information can be found in the documentation of the SQLAlchemy HANA dialect.
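For a tenant database on a non-default port, a minimal sketch might look like this (assuming the sqlalchemy-hana dialect and the hdbcli driver are installed; the credentials are placeholders, and 32015 is the port mentioned in the question):
from sqlalchemy import create_engine, text

engine = create_engine('hana+hdbcli://username:password@host:32015/tenant_db_name')
with engine.connect() as conn:
    # HANA needs a FROM clause even for literal selects, hence DUMMY
    print(conn.execute(text("SELECT 'hello world' FROM DUMMY")).all())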
I actually managed to connect successfully later that day, and this is my structure (I'm using a tenant db):
db_connection = "hana+hdbcli://Username:Password@Host:port/tenant_db_name"
Thanks in advance!
Related
I'm trying to connect to a Postgres db using SQLAlchemy and the pg8000 driver. I'd like to specify a search path for this connection. With the psycopg2 driver, I could do this with something like
engine = create_engine(
    'postgresql+psycopg2://dbuser@dbhost:5432/dbname',
    connect_args={'options': '-csearch_path={}'.format(dbschema)})
However, this does not work for the pg8000 driver. Is there a good way to do this?
You can use pg8000 in pretty much the same way as psycopg2; you just need to swap the scheme from postgresql+psycopg2 to postgresql+pg8000.
The full connection string definition is in the SQLAlchemy pg8000 docs:
postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]
But while psycopg2.connect will pass kwargs such as options (and their content) on to the server, pg8000.connect will not, so there is no setting search_path through connect_args with pg8000.
The SQLAlchemy docs describe how to do this. For example:
from sqlalchemy import create_engine, event, text

engine = create_engine("postgresql+pg8000://postgres:postgres@localhost/postgres")

@event.listens_for(engine, "connect", insert=True)
def set_search_path(dbapi_connection, connection_record):
    existing_autocommit = dbapi_connection.autocommit
    dbapi_connection.autocommit = True
    cursor = dbapi_connection.cursor()
    cursor.execute("SET SESSION search_path='myschema'")
    cursor.close()
    dbapi_connection.autocommit = existing_autocommit

with engine.connect() as connection:
    result = connection.execute(text("SHOW search_path"))
    for row in result:
        print(row)
However, as it says in the docs:
SQLAlchemy is generally organized around the concept of keeping this
variable at its default value of public
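If you'd rather follow that advice and keep search_path at its default, a hedged alternative is to qualify the schema on the table itself, so no session variable needs to change (a minimal sketch; the table and schema names are made up):
from sqlalchemy import Column, Integer, MetaData, Table, create_engine

engine = create_engine("postgresql+pg8000://postgres:postgres@localhost/postgres")
metadata = MetaData()
# schema= makes SQLAlchemy emit "myschema.things" in the generated SQL
things = Table("things", metadata, Column("id", Integer), schema="myschema")

with engine.connect() as connection:
    for row in connection.execute(things.select()):
        print(row)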
I'm trying to connect to a remote Informix DB as follows, using Python 3 and SQLAlchemy, but it fails to connect:
sqlalchemy.create_engine("informix://usr1:pwd1@XXX:23300/DB_NAME;SERVER=dsinfmx").connect()
I get the below ERROR while connecting.
sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:informix
Can someone please provide some help on this? From DBeaver, the DB server is accessible.
I assume you are using the Informix Python driver, IfxPy. If not, please install it; details on installing the Informix Python driver are at https://github.com/OpenInformix/IfxPy/blob/master/README.md
Try out the code below.
from sqlalchemy import create_engine
from sqlalchemy.dialects import registry
from sqlalchemy.orm import sessionmaker
registry.register("informix", "IfxAlchemy.IfxPy", "IfxDialect_IfxPy")
registry.register("informix.IfxPy", "IfxAlchemy.IfxPy", "IfxDialect_IfxPy")
registry.register("informix.pyodbc", "IfxAlchemy.pyodbc", "IfxDialect_pyodbc")
from sqlalchemy import Table, Column, Integer
ConStr = 'informix://<username>:<password>@<machine name>:<port number>/<database name>;SERVER=<server name>'
engine = create_engine(ConStr)
connection = engine.connect()
connection.close()
print( "Done2" )
Let's say I have the following connection information for an MSSQL server:
'Driver={SQL Server};'
'Server=VCAB18RPACRGZ12\GNRSRZ11,1414;'
'Database=sampleDB;'
'uid=sampleID;'
'pwd=samplePW'
I want to write a python dataframe to the MSSQL server as a table. I have the following code:
from sqlalchemy import create_engine
connection = create_engine('mssql+pyodbc://sampleID:samplePW@myhost:VCAB18RPACRGZ12\GNRSRZ11,1414/sampleDB?driver=SQL+Server+Native+Client+10.0')
My above connection code is erroring out. I'm not sure exactly where my connection information is supposed to go in the create_engine statement.
This is my error ...
ValueError: invalid literal for int() with base 10:
'VCAB18RPACRGZ12\GNRSRZ11,1414'
Your Server Address is not correct.
If 1414 is the port number, you should use ":" instead of ",".
SQLAlchemy uses pyodbc as the default DBAPI for MSSQL; pymssql is also available.
Below is the connection string sample:
# pyodbc - DSN
engine = create_engine('mssql+pyodbc://scott:tiger@mydsn')
# pymssql
engine = create_engine('mssql+pymssql://scott:tiger@hostname:port/dbname')
# pyodbc - DSN-less connection
from sqlalchemy import create_engine
# assumes driver name=[SQL+Server+Native+Client+10.0]
# engine = create_engine('mssql+pyodbc://username:password@hostname:port/databasename?driver=SQL+Server+Native+Client+10.0')
engine = create_engine(r'mssql+pyodbc://sampleID:samplePW@VCAB18RPACRGZ12\GNRSRZ11:1414/sampleDB?driver=SQL+Server+Native+Client+10.0')
print(engine)
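Once the engine connects, writing the dataframe from the question is just pandas' to_sql (a minimal sketch; the dataframe contents and table name are made up):
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
# if_exists='replace' drops and recreates the table on each run
df.to_sql('sample_table', engine, if_exists='replace', index=False)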
In Amazon Redshift's Getting Started Guide, it's mentioned that you can utilize SQL client tools that are compatible with PostgreSQL to connect to your Amazon Redshift Cluster.
In the tutorial, they use the SQL Workbench/J client, but I'd like to use Python (in particular SQLAlchemy). I've found a related question, but the issue is that it does not go into detail about the Python script that connects to the Redshift cluster.
I've been able to connect to the cluster via SQL Workbench/J, since I have the JDBC URL, as well as my username and password, but I'm not sure how to connect with SQLAlchemy.
Based on this documentation, I've tried the following:
from sqlalchemy import create_engine
engine = create_engine('jdbc:redshift://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')
ERROR:
Could not parse rfc1738 URL from string 'jdbc:redshift://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy'
I don't think SQLAlchemy "natively" knows about Redshift. You need to change the JDBC "URL" string to use postgres:
postgres://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy
Alternatively, you may want to try sqlalchemy-redshift, using the instructions they provide.
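A minimal sketch with that dialect (assuming sqlalchemy-redshift and psycopg2 are installed; the login placeholders are made up, the host is from the question):
from sqlalchemy import create_engine

engine = create_engine(
    'redshift+psycopg2://user:password@shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')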
I was running into the exact same issue, and then I remembered to include my Redshift credentials:
eng = create_engine('postgresql://[LOGIN]:[PASSWORD]@shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')
sqlalchemy-redshift works for me, but only after a few days of research.
packages (python3.4):
SQLAlchemy==1.0.14 sqlalchemy-redshift==0.5.0 psycopg2==2.6.2
First of all, I checked that my query was working in Workbench (http://www.sql-workbench.net); then I got it working in SQLAlchemy (this answer, https://stackoverflow.com/a/33438115/2837890, helped me realize that autocommit, or session.commit(), is required):
from sqlalchemy import create_engine
from sqlalchemy.sql import text

db_credentials = (
    'redshift+psycopg2://{p[redshift_user]}:{p[redshift_password]}@{p[redshift_host]}:{p[redshift_port]}/{p[redshift_database]}'
    .format(p=config['Amazon_Redshift_parameters']))
engine = create_engine(db_credentials, connect_args={'sslmode': 'prefer'})
connection = engine.connect()
result = connection.execute(text(
    "COPY assets FROM 's3://xx/xx/hello.csv' WITH CREDENTIALS "
    "'aws_access_key_id=xxx_id;aws_secret_access_key=xxx'"
    " FORMAT csv DELIMITER ',' IGNOREHEADER 1 ENCODING UTF8;").execution_options(autocommit=True))
result = connection.execute("select * from assets;")
print(result, type(result))
print(result.rowcount)
connection.close()
And after that, I got sqlalchemy_redshift's CopyCommand to work, though perhaps in a bad way; it looks a little tricky:
import sqlalchemy as sa
# imports presumed for the names used below
from sqlalchemy_redshift import dialect as dialect_rs
from sqlalchemy_redshift.dialect import RedshiftDialect

tbl2 = sa.Table(TableAssets, sa.MetaData())
copy = dialect_rs.CopyCommand(
    tbl2,  # the Table object defined above
    data_location='s3://xx/xx/hello.csv',
    access_key_id=access_key_id,
    secret_access_key=secret_access_key,
    truncate_columns=True,
    delimiter=',',
    format='CSV',
    ignore_header=1,
    # empty_as_null=True,
    # blanks_as_null=True,
)
print(str(copy.compile(dialect=RedshiftDialect(), compile_kwargs={'literal_binds': True})))
print(dir(copy))
connection = engine.connect()
connection.execute(copy.execution_options(autocommit=True))
connection.close()
It does just what I did with plain SQLAlchemy, executing the query, except that the query is composed by CopyCommand. I have not seen any benefit from it :(.
The following works for me with Databricks on all kinds of SQL:
import sqlalchemy as SA
import psycopg2

host = 'your_host_url'
username = 'your_user'
password = 'your_passw'
db = 'your_db_name'  # the original snippet used db without defining it
port = 5439
url = "{d}+{driver}://{u}:{p}@{h}:{port}/{db}".\
    format(d="redshift",
           driver='psycopg2',
           u=username,
           p=password,
           h=host,
           port=port,
           db=db)
engine = SA.create_engine(url)
cnn = engine.connect()
strSQL = "your_SQL ..."
try:
    cnn.execute(strSQL)
except:
    raise
import sqlalchemy as db
engine = db.create_engine('postgres://username:password@url:5439/db_name')
This worked for me
I am connecting to a Sybase ASE 15 database from Python 3.4 using pyodbc, and executing a stored procedure.
All works as expected if I use native pyodbc:
import pandas as pd
import pyodbc

con = pyodbc.connect('DSN=dsn_name;UID=username;PWD=password', autocommit=True)
df = pd.read_sql("exec p_procedure @GroupName='GROUP'", con)
[Driver is Adaptive Server Enterprise].
I have to have autocommit=True; if I do not, I get the following error:
DatabaseError: Execution failed on sql 'exec ....': ('ZZZZZ', "[ZZZZZ]
[SAP][ASE ODBC Driver][Adaptive Server Enterprise]Stored procedure
'p_procedure' may be run only in unchained transaction mode. The 'SET
CHAINED OFF' command will cause the current session to use unchained
transaction mode.\n (7713) (SQLExecDirectW)")
I attempt to achieve the same using SQLAlchemy (1.0.9):
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql import text

url = r'sybase+pyodbc://username:password@dsn'
engine = create_engine(url, echo=True)
sess = sessionmaker(bind=engine)()
conn = engine.connect()
df = pd.read_sql(text("exec p_procedure @GroupName='GROUP'"),
                 conn.execution_options(autocommit=True))
The error message is the same, despite the fact that I have specified autocommit=True on the connection. (I have also tested this at the session level, but that should not be necessary and it made no difference.)
DBAPIError: (pyodbc.Error) ('ZZZZZ', "[ZZZZZ] [SAP][ASE ODBC
Driver][Adaptive Server Enterprise]....
Can you see anything wrong here?
As always, any help would be much appreciated.
Passing the autocommit=True argument as an item in the connect_args argument dictionary does work:
connect_args = {'autocommit': True}
engine = create_engine(url, connect_args=connect_args)
connect_args – a dictionary of options which will be passed directly
to the DBAPI’s connect() method as additional keyword arguments.
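Putting it together for the DSN placeholder from the question, this is the sketch that avoids the chained-transaction error:
from sqlalchemy import create_engine

# autocommit is handed to pyodbc.connect(), not interpreted by SQLAlchemy
engine = create_engine(r'sybase+pyodbc://username:password@dsn',
                       connect_args={'autocommit': True})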
I had some problems with the autocommit option. The only thing that worked for me was to change this option to True after establishing the connection:
ConnString = 'Driver=%SQL_DRIVER%;Server=%SQL_SERVER%;Uid=%SQL_LOGIN%;Pwd=%SQL_PASSWORD%;'
SQL_CONNECTION = pyodbc.connect(ConnString)
SQL_CONNECTION.autocommit = True