Accessing SQL data using Google Cloud Postgres for App Engine (Python)

I have a Google Cloud Postgres database and I can query it with no problem in a Jupyter notebook, using a connection created with psycopg2. However, I am building a web app with Flask, and it seems that I cannot use psycopg2 when deploying with gcloud run deploy. I've read the documentation on pg8000, but I am very confused. I am very new to this.
What is the simplest function to create a connection to the database so I can query the data?
This is what I was using with psycopg2:
import psycopg2

# establishing the connection
conn = psycopg2.connect(
    database='postgres', user='postgres', password='password',
    host='Public IP Address', port='5432')
Then I could fetch data using:
cursor = conn.cursor()
cursor.execute('SELECT STMT')
result = cursor.fetchall()
Any help or direction is much appreciated.
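Not a definitive answer, but a minimal sketch of the same connection written with pg8000, which is pure Python and so deploys without the native build step psycopg2 needs; the placeholders ('password', 'Public IP Address', the SELECT statement) are the same ones used above:
import pg8000

# pg8000 is a pure-Python driver, so no C extension has to build on deploy
conn = pg8000.connect(
    database='postgres', user='postgres', password='password',
    host='Public IP Address', port=5432)

cursor = conn.cursor()
cursor.execute('SELECT STMT')
result = cursor.fetchall()
conn.close()
For Cloud SQL specifically, Google's Cloud SQL Python Connector can hand pg8000 a connection by instance name instead of a public IP, which avoids exposing the database publicly.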

Related

How to connect to sqlite3 db in python with user, password and host name

I am beginning with sqlite3 in Python.
I know how to connect to local database. See simple example below:
import sqlite3
connection = sqlite3.connect('mydb.db')
cursor = connection.cursor()
But now I would like to send data to a database that is running on a server. I have the following config information:
[db-my_db]
user=my_user
password=my_pass
host=db.my_db.com
database=my_db.com
How can I connect to this database? Thanks
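For what it's worth, sqlite3 is a file-based, serverless engine: it has no user, password, or host at all, so a config like this points at a server database that needs that server's own driver. A sketch assuming the remote server is MySQL and the config lives in an INI-style file (the file name my_db.cfg is hypothetical; the section and field names come from the question):
import configparser

import mysql.connector  # assumption: the remote server is MySQL

config = configparser.ConfigParser()
config.read('my_db.cfg')  # hypothetical file holding the [db-my_db] section
section = config['db-my_db']

connection = mysql.connector.connect(
    host=section['host'],
    user=section['user'],
    password=section['password'],
    database=section['database'],
)
cursor = connection.cursor()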

Cannot connect to GearHost database

I am trying to connect to my GearHost Database in python, I followed the instructions here on GearHost. My python code looks like:
from mysql.connector import connection
server = 'den1.mssql6.gear.host'
db = 'testmenow'
user = 'testmenow'
psword = 'TEST!!'
cnxn = connection.MySQLConnection(host=server, user=user, password=psword, database=db)
cursor = cnxn.cursor()
cnxn.close()
I get the following error:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'den1.mssql6.gear.host:3306' (10061 No connection could be made because the target machine actively refused it)
I have also tried to connect to GearHost through mysql workbench, as instructed in GearHost's instruction page, which also cannot connect to the database:
Connecting to MySQL server ...
Can't connect to MySQL server on 'den1.mssql6.gear.host' (10060)
This is what my GearHost shows (screenshot omitted).
Tried (unsuccessfully)
Messing with and without a port number, as mentioned here: Issue connecting to gearhost database
Trying to connect through MySQL Workbench
Considering the conversation here: Cannot Connect to Database Server mysql workbench, but it does not seem applicable to my circumstances.
Different SQL packages in Python, including mysql-python and mysql-connector-python
Question
Am I missing something obvious that is preventing me from connecting to GearHost, either with python or mysql workbench?
UPDATE
As @SMor pointed out, I had mistaken MSSQL for MySQL - I was able to successfully connect to my database using:
import pyodbc
server = 'den1.mssql6.gear.host'
db = 'testmenow'
usr = 'testmenow'
psword = 'TEST!!'
cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER=' + server + ';'
                      'DATABASE=' + db + ';UID=' + usr + ';PWD=' + psword)
cursor = cnxn.cursor()
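For completeness, a quick usage sketch once that pyodbc connection is open; the table name here is a placeholder, not something from GearHost:
# hypothetical table; substitute one that exists in the testmenow database
cursor.execute('SELECT * FROM some_table')
for row in cursor.fetchall():
    print(row)
cnxn.close()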

SQL Server connection - Works in pyodbc, but not SQLAlchemy

This is a fairly common question, but even using the answers on SO like here, I still can't connect.
When I setup my connection to pyodbc I can connect with the following:
cnxn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=ip,port;DATABASE=db;UID=user;PWD=pass')
cursor = cnxn.cursor()
cursor.execute("some select query")
for row in cursor.fetchall():
    print(row)
and it works.
However, to use .read_sql() in pandas I need to connect with SQLAlchemy.
I have tried with both hosted connections and pass-through pyodbc connections like the below:
quoted = urllib.parse.quote_plus('DRIVER={SQL Server Native Client 11.0};Server=ip;Database=db;UID=user;PWD=pass;Port=port;')
engine = sqlalchemy.create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))
engine.connect()
I have tried with both SERVER=ip,port format and the separate Port=port parameter like above but still no luck.
The error I'm getting is Login failed for user 'user'. (18456)
Any help is much appreciated.
I assume that you want to create a DataFrame, so once you have a cnxn you can pass it to the pandas read_sql_query function.
Example:
import pandas
import pyodbc

cnxn = pyodbc.connect('your connection string')
query = 'some query'
df = pandas.read_sql_query(query, cnxn)
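Since the pyodbc string already authenticates, here is a sketch of wrapping that exact string for SQLAlchemy 1.4+ with sqlalchemy.engine.URL, which handles the escaping that urllib.parse.quote_plus was doing by hand; all credentials below are the question's placeholders:
import sqlalchemy
from sqlalchemy.engine import URL

# the exact pyodbc string that is known to work
conn_str = ('DRIVER={SQL Server Native Client 11.0};'
            'SERVER=ip,port;DATABASE=db;UID=user;PWD=pass')
url = URL.create('mssql+pyodbc', query={'odbc_connect': conn_str})

engine = sqlalchemy.create_engine(url)
with engine.connect() as conn:
    print(conn.exec_driver_sql('SELECT 1').scalar())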

Specify Failover Partner with SQL Alchemy and pymssql

Does anyone know how to connect to a SQL Server from SQLAlchemy and pymssql specifying a Failover Partner? Trying to follow the SQLAlchemy documentation, but I cannot find a failover option.
How can I add Failover Partner=myfailover;Initial Catalog=mydb to this code?
from sqlalchemy import create_engine
conn_str = "mssql+pymssql://{user}:{pwd}#{host}/{db}".format(host=SERVER, db=DATABASE, user=USERNAME, pwd=PASSWORD)
engine = create_engine(conn_str)
From the documentation, it doesn't look like the API supports failover.
Instead you could try adding a try/except block:
try:
    # connect to first partner
    ...
except Exception:
    # connect to failover
    ...
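A minimal runnable sketch of that idea, reusing the question's SERVER/DATABASE/USERNAME/PASSWORD variables and its myfailover host name, and assuming a dead primary surfaces as sqlalchemy.exc.OperationalError:
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError

primary = "mssql+pymssql://{user}:{pwd}@{host}/{db}".format(
    host=SERVER, db=DATABASE, user=USERNAME, pwd=PASSWORD)
failover = "mssql+pymssql://{user}:{pwd}@myfailover/{db}".format(
    db=DATABASE, user=USERNAME, pwd=PASSWORD)

try:
    # try the primary first
    conn = create_engine(primary).connect()
except OperationalError:
    # primary unreachable: fall back to the failover partner
    conn = create_engine(failover).connect()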

How to connect to a cluster in Amazon Redshift using SQLAlchemy?

In Amazon Redshift's Getting Started Guide, it's mentioned that you can utilize SQL client tools that are compatible with PostgreSQL to connect to your Amazon Redshift Cluster.
In the tutorial, they utilize SQL Workbench/J client, but I'd like to utilize python (in particular SQLAlchemy). I've found a related question, but the issue is that it does not go into the detail or the python script that connects to the Redshift Cluster.
I've been able to connect to the cluster via SQL Workbench/J, since I have the JDBC URL, as well as my username and password, but I'm not sure how to connect with SQLAlchemy.
Based on this documentation, I've tried the following:
from sqlalchemy import create_engine
engine = create_engine('jdbc:redshift://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')
ERROR:
Could not parse rfc1738 URL from string 'jdbc:redshift://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy'
I don't think SQL Alchemy "natively" knows about Redshift. You need to change the JDBC "URL" string to use postgres.
jdbc:postgres://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy
Alternatively, you may want to try using sqlalchemy-redshift using the instructions they provide.
I was running into the exact same issue, and then I remembered to include my Redshift credentials:
eng = create_engine('postgresql://[LOGIN]:[PASSWORD]@shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')
sqlalchemy-redshift works for me, but only after a few days of research.
Packages (python3.4):
SQLAlchemy==1.0.14 sqlalchemy-redshift==0.5.0 psycopg2==2.6.2
First of all, I checked that my query works in Workbench (http://www.sql-workbench.net), then I forced it to work in sqlalchemy (this answer https://stackoverflow.com/a/33438115/2837890 helped me learn that autocommit or session.commit() must be used):
from sqlalchemy import create_engine, text

db_credentials = (
    'redshift+psycopg2://{p[redshift_user]}:{p[redshift_password]}@{p[redshift_host]}:{p[redshift_port]}/{p[redshift_database]}'
    .format(p=config['Amazon_Redshift_parameters']))
engine = create_engine(db_credentials, connect_args={'sslmode': 'prefer'})
connection = engine.connect()
result = connection.execute(text(
    "COPY assets FROM 's3://xx/xx/hello.csv' WITH CREDENTIALS "
    "'aws_access_key_id=xxx_id;aws_secret_access_key=xxx'"
    " FORMAT csv DELIMITER ',' IGNOREHEADER 1 ENCODING UTF8;").execution_options(autocommit=True))
result = connection.execute("select * from assets;")
print(result, type(result))
print(result.rowcount)
connection.close()
And after that, I forced sqlalchemy_redshift's CopyCommand to work, perhaps in a bad way; it looks a little tricky:
import sqlalchemy as sa
from sqlalchemy_redshift.commands import CopyCommand
from sqlalchemy_redshift.dialect import RedshiftDialect

tbl2 = sa.Table('assets', sa.MetaData())
copy = CopyCommand(
    tbl2,
    data_location='s3://xx/xx/hello.csv',
    access_key_id=access_key_id,
    secret_access_key=secret_access_key,
    truncate_columns=True,
    delimiter=',',
    format='CSV',
    ignore_header=1,
    # empty_as_null=True,
    # blanks_as_null=True,
)
print(str(copy.compile(dialect=RedshiftDialect(), compile_kwargs={'literal_binds': True})))
print(dir(copy))
connection = engine.connect()
connection.execute(copy.execution_options(autocommit=True))
connection.close()
This does just what I did with sqlalchemy above, executing the query, except that the query is composed by CopyCommand. I have not seen much benefit from it :(.
The following works for me with Databricks on all kinds of SQL:
import sqlalchemy as SA
import psycopg2

host = 'your_host_url'
username = 'your_user'
password = 'your_passw'
db = 'your_db'
port = 5439

url = "{d}+{driver}://{u}:{p}@{h}:{port}/{db}".\
    format(d="redshift",
           driver='psycopg2',
           u=username,
           p=password,
           h=host,
           port=port,
           db=db)

engine = SA.create_engine(url)
cnn = engine.connect()

strSQL = "your_SQL ..."
try:
    cnn.execute(strSQL)
except:
    raise
import sqlalchemy as db
engine = db.create_engine('postgresql://username:password@url:5439/db_name')
This worked for me.
