I am trying to connect to my free IBM Cloudant instance using SQLAlchemy but I'm unsure of the notation and I keep getting errors.
I've tried the following but it returns the error sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:cloudant.
engine = create_engine(f"cloudant:///?User={USER}&Password={PASSWORD}", echo=True)
Any advice will be appreciated.
SQLAlchemy does not have built-in support either for IBM Cloudant itself, or for the CouchDB platform on which it is based. The list of external dialects for SQLAlchemy also makes no mention of either database. So it appears that there is no "officially recognized" dialect for IBM Cloudant at the moment.
That said, a web search for "sqlalchemy cloudant" did find this:
https://www.cdata.com/kb/tech/cloudant-python-sqlalchemy.rst
(The above link is for information purposes only. It is not an endorsement.)
Briefly, I want to know what this "DB-API" mechanism is.
Are there multiple DB-APIs?
Is it just a 'rules' document?
Does it have source code?
What is it for?
Is psycopg2 itself a DB-API, or is it a library that follows the DB-API standard?
Is the DB-API specified in SQLAlchemy a SQLAlchemy-specific DB-API (if that is possible)?
I think that's it!
I'll ask another question about dialects later.
The Python DB-API is defined in PEP 249 (https://www.python.org/dev/peps/pep-0249/), and as you say, it is just a specification, a "rules" document.
Modules like psycopg2 fulfill those requirements, so they are implementations of that API. SQLAlchemy lets you swap out which DB-API implementation you use, so you can change your underlying database server, or use features offered by another driver/DB-API implementation, while still writing the same SQLAlchemy code.
As I understand it, SQLAlchemy supports multiple DB-API implementations, which you specify using a connection URL, as explained here: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls.
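For example, here is a rough sketch (the user, password, host, and database name are placeholders) of reaching the same PostgreSQL server through two different DB-API implementations just by changing the driver part of the URL:
from sqlalchemy import create_engine

# Same database, two different DB-API implementations ("drivers"):
# psycopg2 and pg8000 both implement PEP 249, so SQLAlchemy can use either.
engine_psycopg2 = create_engine("postgresql+psycopg2://user:password@localhost/mydb")
engine_pg8000 = create_engine("postgresql+pg8000://user:password@localhost/mydb")

# The rest of your SQLAlchemy code stays the same regardless of which driver you pick.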
AWS recently launched the Data API. This simplifies creating Lambda functions by allowing API calls instead of direct database connections, eliminating the extra complexity they require.
I'm trying to use SQLAlchemy in an AWS Lambda Function, and I'd really like to take advantage of this new API.
Does anyone know if there is any support for this, or if support for this is coming?
Alternatively, how difficult would it be to create a new Engine to support this?
SQLAlchemy calls database drivers "dialects". So if you're using SQLAlchemy with PostgreSQL and using psycopg2 as the driver, then you're using the psycopg2 dialect of PostgreSQL.
I was looking for the same thing as you, and found no existing solution, so I wrote my own and published it. To use the AWS Aurora RDS Data API, I created a SQL dialect package for it, sqlalchemy-aurora-data-api. This in turn required me to write a DB-API compatible Python DB driver for Aurora Data API, aurora-data-api. After installing with pip install sqlalchemy-aurora-data-api, you can use it like this:
from sqlalchemy import create_engine

cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-serverless-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:MY_DB_CREDENTIALS"

engine = create_engine('postgresql+auroradataapi://:@/my_db_name',
                       echo=True,
                       connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))

with engine.connect() as conn:
    for result in conn.execute("select * from pg_catalog.pg_tables"):
        print(result)
As an alternative, if you want something more like Records, you can try Camus: https://github.com/rizidoro/camus
Our infrastructure group has asked us to "add MultiSubnetFailover=True to all application connection strings" so that we can take advantage of a new SQL Server HA setup involving Availability Groups.
I am stuck, though, since we have some Python programs that connect (read and write) to the database via SQLAlchemy. I have been searching and I don't see anything about this MultiSubnetFailover feature being available as an option in SQLAlchemy or any other Python driver. Is it possible to connect to an HA setup utilizing SQLAlchemy, or even Python at all, and if so, how?
FYI - The link that my infrastructure guy pointed me to is here (http://msdn.microsoft.com/en-us/library/hh205662%28v=vs.110%29.aspx), and as you can see it is specifically about how .NET applications can utilize the "MultiSubnetFailover=True" setting in the connection string among other things.
http://docs.sqlalchemy.org/en/latest/dialects/mssql.html#dialect-mssql-pyodbc-connect
You could use the example towards the end of that section of the documentation, like this:
import urllib.parse
from sqlalchemy import create_engine

# Adjust DRIVER to whatever SQL Server ODBC driver is installed on your machine
connection_string = ('DRIVER={SQL Server Native Client 11.0};'
                     'SERVER=127.0.0.1;DATABASE=MyDatabase;'
                     'MultiSubnetFailover=True')
engine_string = 'mssql+pyodbc:///?odbc_connect={}'.format(urllib.parse.quote_plus(connection_string))
engine = create_engine(engine_string)
Update from comments
For newer versions of Microsoft ODBC Driver for SQL Server, you may need to use MultiSubnetFailover=Yes instead of True
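For instance, a sketch of the connection string above adjusted for a newer driver (the driver name here is an assumption; use whichever Microsoft ODBC driver you actually have installed):
# Same pattern as above, but with a newer Microsoft ODBC driver and Yes instead of True
connection_string = ('DRIVER={ODBC Driver 17 for SQL Server};'
                     'SERVER=127.0.0.1;DATABASE=MyDatabase;'
                     'MultiSubnetFailover=Yes')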
I've looked over Google Cloud SQL's documentation and various searches, but I can't find out whether it is possible to use SQLAlchemy with Google Cloud SQL, and if so, what the connection URI should be.
I'm looking to use the Flask-SQLAlchemy extension and need the connection string like so:
mysql://username:password@server/db
I saw the Django example, but it appears the configuration uses a different style than the connection string. https://developers.google.com/cloud-sql/docs/django
Google Cloud SQL documentation:
https://developers.google.com/cloud-sql/docs/developers_guide_python
Update
Google Cloud SQL now supports direct access, so the MySQLdb dialect can now be used. The recommended way to connect via the mysql dialect is with a URL of this format:
mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>
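For example, a minimal sketch of plugging that URL into create_engine (the project ID, instance name, and database name are placeholders):
from sqlalchemy import create_engine

# Replace <projectid>, <instancename>, and <dbname> with your own values
engine = create_engine(
    "mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>"
)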
mysql+gaerdbms has been deprecated in SQLAlchemy since version 1.0
I'm leaving the original answer below in case others still find it helpful.
For those who visit this question later (and don't want to read through all the comments), SQLAlchemy now supports Google Cloud SQL as of version 0.7.8 using the connection string / dialect (see: docs):
mysql+gaerdbms:///<dbname>
E.g.:
create_engine('mysql+gaerdbms:///mydb', connect_args={"instance":"myinstance"})
I have proposed an update to the mysql+gaerdbms:// dialect to support both of Google Cloud SQL's APIs (rdbms_apiproxy and rdbms_googleapi) for connecting to Cloud SQL from a non-Google App Engine instance (e.g. your development workstation). The change also modifies the connection string slightly by including the project and instance as part of the string, rather than requiring them to be passed separately via connect_args.
E.g.
mysql+gaerdbms:///<dbname>?instance=<project:instance>
This will also make it easier to use Cloud SQL with Flask-SQLAlchemy or other extensions where you don't explicitly make the create_engine() call.
If you are having trouble connecting to Google Cloud SQL from your development workstation, you might want to take a look at my answer here - https://stackoverflow.com/a/14287158/191902.
Yes, it is possible. If you find any bugs in SA+Cloud SQL, please let me know. I wrote the dialect code that was integrated into SQLAlchemy. There's a bit of silly business about how Cloud SQL bubbles up exceptions, so there might be some loose ends there.
For those who prefer PyMySQL over MySQLdb (which is suggested in the accepted answer), the SQLAlchemy connection strings are:
For Production
mysql+pymysql://<USER>:<PASSWORD>@/<DATABASE_NAME>?unix_socket=/cloudsql/<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
Please make sure to:
1. Add the SQL instance to your app.yaml:
beta_settings:
  cloud_sql_instances: <PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
2. Enable the SQL Admin API, as it seems to be necessary:
https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview
For Local Development
mysql+pymysql://<USER>:<PASSWORD>@localhost:3306/<DATABASE_NAME>
given that you started the Cloud SQL Proxy with:
cloud_sql_proxy -instances=<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>=tcp:3306
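For example, a rough sketch of wiring the production URI into Flask-SQLAlchemy (the user, password, database name, and instance connection name are placeholders, as above):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# In production (App Engine), use the unix-socket URI; for local development
# against the Cloud SQL Proxy, swap in the localhost:3306 URI shown above.
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'mysql+pymysql://<USER>:<PASSWORD>@/<DATABASE_NAME>'
    '?unix_socket=/cloudsql/<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>'
)
db = SQLAlchemy(app)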
It is doable, though I haven't used Flask at all, so I'm not sure about establishing the connection through that. I got it working through Pyramid and submitted a patch to SQLAlchemy (possibly to the wrong repo) here:
https://bitbucket.org/sqlalchemy/sqlalchemy/pull-request/2/added-a-dialect-for-google-app-engines
That has since been replaced and accepted into SQLAlchemy as
http://www.sqlalchemy.org/trac/ticket/2484
I don't think it has made its way into a release yet, though.
There are some issues with Google SQL throwing different exceptions, so we had problems with things like deploying a database automatically. You also need to disable connection pooling using NullPool, as mentioned in the second patch.
We've since moved to using the datastore through NDB, so I haven't followed the progress of these fixes for a while.
PostgreSQL, pg8000 and flask_sqlalchemy
Adding this in case someone is looking for how to use flask_sqlalchemy with PostgreSQL: using pg8000 as the driver, the working connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>
To my surprise, I haven't found this question asked elsewhere. Short version: I'm writing an app that I plan to deploy to the cloud (probably using Heroku), which will do various web scraping and data collection. The reason it'll be in the cloud is so that it can run on its own every day and pull the data into its database without my computer being on, and so the rest of the team can access the data.
I used to use AWS's SimpleDB and DynamoDB, but I found SDB's storage limits to be too small and DDB's poor querying ability to be a problem, so I'm looking for a database system (SQL or NoSQL) that can store arbitrary-length values (and ideally arbitrary data structures) and that can be queried on any field.
I've found many database solutions for Heroku, such as ClearDB, but all of the information I've seen has shown how to set up Django to access the database. Since this is intended to be script and not a site, I'd really prefer not to dive into Django if I don't have to.
Is there any kind of database that I can hook up to in Heroku with Python without using Django?
You can get a database provided by Heroku without requiring your app to use Django. To do so:
heroku addons:add heroku-postgresql:dev
If you need a larger, more dedicated database, you can examine the plans at Heroku Postgres.
Within your requirements.txt you'll want to add:
psycopg2
Then you can connect/interact with it similar to the following:
import os
import urllib.parse as urlparse  # this was the standalone urlparse module on Python 2

import psycopg2

# Needed on Python 2 so urlparse recognizes the postgres:// scheme; harmless on Python 3
urlparse.uses_netloc.append('postgres')
url = urlparse.urlparse(os.environ['DATABASE_URL'])

conn = psycopg2.connect("dbname=%s user=%s password=%s host=%s" %
                        (url.path[1:], url.username, url.password, url.hostname))
cur = conn.cursor()

query = "SELECT ...."
cur.execute(query)
I'd use MongoDB. Heroku has support for it, so I think it will be really easy to start and scale out: https://addons.heroku.com/mongohq
About Python: MongoDB is a really easy database to work with. Its flexible schema maps nicely onto Python dictionaries, which is a big plus.
You can use PyMongo
import datetime

from pymongo import MongoClient  # Connection was removed in PyMongo 3; MongoClient replaces it

connection = MongoClient()

# Get your DB
db = connection.my_database

# Get your collection
cars = db.cars

# Create a document
car = {"brand": "Ford",
       "model": "Mustang",
       "date": datetime.datetime.utcnow()}

# Insert it (insert() is gone in PyMongo 3+; insert_one() is the current call)
cars.insert_one(car)
Pretty simple, huh?
Hope it helps.
EDIT:
As Endophage mentioned, another good option for interfacing with Mongo is mongoengine. If you have lots of data to store, you should take a look at that.
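For example, a minimal sketch of MongoEngine usage (the Car document class and its fields are made up for illustration; the database name is a placeholder):
import datetime

from mongoengine import Document, StringField, DateTimeField, connect

# Connect to a local MongoDB instance (adjust host/credentials for a hosted service)
connect('my_database')

# A hypothetical document class; define fields to match your own data
class Car(Document):
    brand = StringField(required=True)
    model = StringField()
    date = DateTimeField(default=datetime.datetime.utcnow)

Car(brand="Ford", model="Mustang").save()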
I did this recently with Flask. (https://github.com/HexIce/flask-heroku-sqlalchemy).
There are a couple of gotchas:
1. If you don't use Django you may have to set up your database yourself by doing:
heroku addons:add shared-database
(Or whichever database add-on you want to use; the others cost money.)
2. The database URL is stored in Heroku in the "DATABASE_URL" environment variable.
In Python you can get it by doing:
dburl = os.environ['DATABASE_URL']
What you do to connect to the database from there is up to you; one option is SQLAlchemy.
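For example, a minimal SQLAlchemy sketch using the DATABASE_URL set by the Heroku Postgres add-on (newer SQLAlchemy releases expect a postgresql:// scheme rather than Heroku's historical postgres://, hence the rewrite below):
import os

from sqlalchemy import create_engine, text

# Heroku historically sets DATABASE_URL with a postgres:// scheme;
# SQLAlchemy 1.4+ only accepts postgresql://, so rewrite it defensively.
db_url = os.environ['DATABASE_URL'].replace('postgres://', 'postgresql://', 1)

engine = create_engine(db_url)
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())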
Create a standalone Heroku Postgres database. http://postgres.heroku.com