AWS recently launched the Data API for Aurora Serverless. It simplifies writing Lambda functions by letting them make HTTP API calls instead of managing direct database connections.
I'm trying to use SQLAlchemy in an AWS Lambda Function, and I'd really like to take advantage of this new API.
Does anyone know if there is any support for this, or if support for this is coming?
Alternatively, how difficult would it be to create a new Engine to support this?
SQLAlchemy calls its database/driver support packages "dialects". So if you're using SQLAlchemy with PostgreSQL through the psycopg2 driver, you're using the psycopg2 dialect of PostgreSQL.
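For example, both names show up in the URL you pass to create_engine (the credentials and host here are placeholders):

from sqlalchemy import create_engine

# "postgresql" is the database dialect, "psycopg2" is the DBAPI driver
engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')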
I was looking for the same thing as you, and found no existing solution, so I wrote my own and published it. To use the AWS Aurora RDS Data API, I created a SQL dialect package for it, sqlalchemy-aurora-data-api. This in turn required me to write a DB-API compatible Python DB driver for Aurora Data API, aurora-data-api. After installing with pip install sqlalchemy-aurora-data-api, you can use it like this:
from sqlalchemy import create_engine

cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-serverless-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:MY_DB_CREDENTIALS"

engine = create_engine('postgresql+auroradataapi://:@/my_db_name',
                       echo=True,
                       connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))

with engine.connect() as conn:
    for result in conn.execute("select * from pg_catalog.pg_tables"):
        print(result)
As an alternative, if you want something more like Records, you can try Camus https://github.com/rizidoro/camus.
I am trying to connect to my free IBM Cloudant instance using SQLAlchemy but I'm unsure of the notation and I keep getting errors.
I've tried the following but it returns the error sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:cloudant.
engine = create_engine(f"cloudant:///?User={USER}&Password={PASSWORD}", echo=True)
Any advice will be appreciated.
SQLAlchemy does not have built-in support either for IBM Cloudant itself, or for the CouchDB platform on which it is based. The list of external dialects for SQLAlchemy also makes no mention of either database. So it appears that there is no "officially recognized" dialect for IBM Cloudant at the moment.
That said, a web search for "sqlalchemy cloudant" did find this:
https://www.cdata.com/kb/tech/cloudant-python-sqlalchemy.rst
(The above link is for information purposes only. It is not an endorsement.)
The docs say:
Hooks are interfaces to external platforms and databases like Hive, S3, MySQL, Postgres, HDFS, and Pig. Hooks implement a common interface when possible, and act as a building block for operators.
But why do we need them?
I want to select data from one Postgres DB and store it in another one. Can I just use the psycopg2 driver inside a Python script run by a PythonOperator, or does Airflow need to know what I'm doing inside the script, meaning I have to use PostgresHook instead of plain psycopg2?
You should just use PostgresHook. Instead of using psycopg2 directly, like so:
import psycopg2

# Connection details are hardcoded or loaded by the script itself
conn = psycopg2.connect(host=host, user=user, password=password, dbname=dbname)
cur = conn.cursor()
cur.execute(query)
data = cur.fetchall()
You can just type:
postgres = PostgresHook('connection_id')
data = postgres.get_pandas_df(query)
The hook can also make use of Airflow's encrypted storage of connection credentials.
So using hooks is cleaner, safer and easier.
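For the original use case of copying rows from one Postgres DB to another, a minimal sketch with two hooks might look like this (the connection IDs src_postgres and dst_postgres and the users table are assumptions; the import path is the one used by Airflow 2.x providers):

from airflow.providers.postgres.hooks.postgres import PostgresHook

def copy_users():
    # Both connection IDs are assumed to be configured in the Airflow UI
    src = PostgresHook('src_postgres')
    dst = PostgresHook('dst_postgres')
    rows = src.get_records('SELECT id, name FROM users')
    # insert_rows writes the fetched rows using the destination connection
    dst.insert_rows(table='users', rows=rows, target_fields=['id', 'name'])

You would then call copy_users from a PythonOperator task.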
While it is possible to just hardcode the connection details in your script and run it, the power of hooks is that connections can be edited from within the Airflow UI instead.
Have a look at "Automate AWS Tasks Thanks to Airflow Hooks" to learn a bit more about how to use hooks.
Our infrastructure group has asked us to "add MultiSubnetFailover=True to all application connection strings" so that we can take advantage of a new SQL Server HA setup involving Availability Groups.
I am stuck, though, since we have some Python programs that connect (read+write) to the database via SQLAlchemy. I have been searching and I don't see anything about this MultiSubnetFailover feature being available as an option in SQLAlchemy or any other Python driver. Is it possible to connect to an HA setup utilizing SQLAlchemy, or even Python, and if so, how?
FYI - The link that my infrastructure guy pointed me to is here (http://msdn.microsoft.com/en-us/library/hh205662%28v=vs.110%29.aspx), and as you can see it is specifically about how .NET applications can utilize the "MultiSubnetFailover=True" setting in the connection string among other things.
http://docs.sqlalchemy.org/en/latest/dialects/mssql.html#dialect-mssql-pyodbc-connect
You could use the example towards the end of that section of the documentation, like this:
import urllib.parse

from sqlalchemy import create_engine

# Driver and Server keywords are needed in a DSN-less ODBC string;
# the driver name here is an assumption about what you have installed
connection_string = 'Driver={ODBC Driver 17 for SQL Server};Server=127.0.0.1;Database=MyDatabase;MultiSubnetFailover=True'
engine_string = 'mssql+pyodbc:///?odbc_connect={}'.format(urllib.parse.quote_plus(connection_string))
engine = create_engine(engine_string)
Update from comments
For newer versions of the Microsoft ODBC Driver for SQL Server, you may need to use MultiSubnetFailover=Yes instead of True.
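On SQLAlchemy 1.4+ you can also build the same engine with URL.create, which handles the quoting for you (a sketch; the driver name is again an assumption about what is installed):

from sqlalchemy import create_engine
from sqlalchemy.engine import URL

odbc_str = 'Driver={ODBC Driver 17 for SQL Server};Server=127.0.0.1;Database=MyDatabase;MultiSubnetFailover=Yes'
# URL.create percent-encodes the odbc_connect value itself
engine = create_engine(URL.create('mssql+pyodbc', query={'odbc_connect': odbc_str}))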
I've looked over Google Cloud SQL's documentation and various searches, but I can't find out whether it is possible to use SQLAlchemy with Google Cloud SQL, and if so, what the connection URI should be.
I'm looking to use the Flask-SQLAlchemy extension and need the connection string like so:
mysql://username:password@server/db
I saw the Django example, but it appears the configuration uses a different style than the connection string. https://developers.google.com/cloud-sql/docs/django
Google Cloud SQL documentation:
https://developers.google.com/cloud-sql/docs/developers_guide_python
Update
Google Cloud SQL now supports direct access, so the MySQLdb dialect can be used. The recommended connection via the mysql dialect uses the URL format:
mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>
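Filled in with placeholder values (my-project, my-instance and mydb are made up for illustration), that looks like:

from sqlalchemy import create_engine

engine = create_engine('mysql+mysqldb://root@/mydb?unix_socket=/cloudsql/my-project:my-instance')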
The mysql+gaerdbms dialect has been deprecated in SQLAlchemy since version 1.0.
I'm leaving the original answer below in case others still find it helpful.
For those who visit this question later (and don't want to read through all the comments), SQLAlchemy now supports Google Cloud SQL as of version 0.7.8 using the connection string / dialect (see: docs):
mysql+gaerdbms:///<dbname>
E.g.:
create_engine('mysql+gaerdbms:///mydb', connect_args={"instance":"myinstance"})
I have proposed an update to the mysql+gaerdbms:// dialect to support both of Google Cloud SQL's APIs (rdbms_apiproxy and rdbms_googleapi) for connecting to Cloud SQL from a non-Google App Engine production instance (e.g. your development workstation). The change will also modify the connection string slightly by including the project and instance as part of the string, rather than requiring them to be passed separately via connect_args.
E.g.
mysql+gaerdbms:///<dbname>?instance=<project:instance>
This will also make it easier to use Cloud SQL with Flask-SQLAlchemy or other extensions where you don't explicitly make the create_engine() call.
If you are having trouble connecting to Google Cloud SQL from your development workstation, you might want to take a look at my answer here - https://stackoverflow.com/a/14287158/191902.
Yes, it can be done. If you find any bugs in SA+Cloud SQL, please let me know. I wrote the dialect code that was integrated into SQLAlchemy. There's a bit of silly business about how Cloud SQL bubbles up exceptions, so there might be some loose ends there.
For those who prefer PyMySQL over MySQLdb (which is suggested in the accepted answer), the SQLAlchemy connection strings are:
For Production
mysql+pymysql://<USER>:<PASSWORD>@/<DATABASE_NAME>?unix_socket=/cloudsql/<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
Please make sure to:
1. Add the SQL instance to your app.yaml:
beta_settings:
  cloud_sql_instances: <PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
2. Enable the SQL Admin API, as it seems to be necessary:
https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview
For Local Development
mysql+pymysql://<USER>:<PASSWORD>@localhost:3306/<DATABASE_NAME>
given that you started the Cloud SQL Proxy with:
cloud_sql_proxy -instances=<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>=tcp:3306
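If you want one code path that works in both environments, a common sketch is to switch on the GAE_ENV environment variable, which App Engine standard (second generation) sets to "standard"; the placeholders are the same as above:

import os

if os.environ.get('GAE_ENV') == 'standard':
    # Running on App Engine: connect through the Cloud SQL unix socket
    uri = ('mysql+pymysql://<USER>:<PASSWORD>@/<DATABASE_NAME>'
           '?unix_socket=/cloudsql/<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>')
else:
    # Local development: connect through the Cloud SQL Proxy on TCP 3306
    uri = 'mysql+pymysql://<USER>:<PASSWORD>@localhost:3306/<DATABASE_NAME>'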
It is doable, though I haven't used Flask at all, so I'm not sure about establishing the connection through that. I got it working through Pyramid and submitted a patch to SQLAlchemy (possibly to the wrong repo) here:
https://bitbucket.org/sqlalchemy/sqlalchemy/pull-request/2/added-a-dialect-for-google-app-engines
That has since been replaced and accepted into SQLAlchemy as
http://www.sqlalchemy.org/trac/ticket/2484
I don't think it's made its way into a release yet, though.
Google SQL bubbles up different exceptions than stock MySQL, so we had problems with things like deploying a database automatically. You also need to disable connection pooling using NullPool, as mentioned in the second patch.
We've since moved to using the datastore through NDB, so I haven't followed the progress of these fixes for a while.
PostgreSQL, pg8000 and flask_sqlalchemy
Adding information in case someone is on the lookout how to use flask_sqlalchemy with PostgreSQL: Using pg8000 as driver, the working connection string is
postgresql+pg8000://<db_user>:<db_pass>@/<db_name>
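Wired into Flask-SQLAlchemy, that would be something like this sketch (same placeholders as above):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+pg8000://<db_user>:<db_pass>@/<db_name>'
db = SQLAlchemy(app)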
How do you unit test your Python DAL that is using PostgreSQL?
With SQLite you can create an in-memory database for every test, but this cannot be done for PostgreSQL.
I want a library that could be used to setup a database and clean it once the test is done.
I am using Sqlalchemy as my ORM.
pg_tmp(1) is a utility intended to make this task easy. Here is how you might start up a new connection with SQLAlchemy:
from subprocess import check_output
from sqlalchemy import create_engine

# pg_tmp prints a connection URL on stdout; decode the bytes for create_engine
url = check_output(['pg_tmp', '-t']).decode().strip()
engine = create_engine(url)
This will spin up a new database that is automatically destroyed in 60 seconds. If a connection is open, pg_tmp will wait until all active connections are closed.
Have you tried testing.postgresql?
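A minimal sketch of how it is typically used with SQLAlchemy:

import testing.postgresql
from sqlalchemy import create_engine

# Each instance spins up a throwaway PostgreSQL server;
# exiting the context manager destroys it, data and all
with testing.postgresql.Postgresql() as postgresql:
    engine = create_engine(postgresql.url())
    # ... create your schema and run assertions against engine here ...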
You can use nose to write your tests, then just use SQLAlchemy to create and clean the test database in your setup/teardown methods.
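A sketch of that setup/teardown pattern (the DSN and the myapp.models.Base import are assumptions about your project):

from sqlalchemy import create_engine
from myapp.models import Base  # assumption: your project's declarative Base

class TestDAL(object):
    def setup(self):
        # Assumed DSN pointing at a dedicated, disposable test database
        self.engine = create_engine('postgresql://test:test@localhost/test_db')
        Base.metadata.create_all(self.engine)

    def teardown(self):
        Base.metadata.drop_all(self.engine)
        self.engine.dispose()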
There's QuickPiggy too, which is capable of cleaning up after itself.
From the docs:
A makeshift PostgreSQL instance can be obtained quite easily:
import psycopg2
import quickpiggy

pig = quickpiggy.Piggy(volatile=True, create_db='somedb')
conn = psycopg2.connect(pig.dsnstring())