I read all the documentation related to connecting to MySQL hosted in Cloud SQL from Google Cloud Functions (GCF) and still can't connect. I also tried all the hints in the SQLAlchemy documentation related to this.
I am using the following connection string:
con = 'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/Proj_ID:Zone:MySQL_Instance_ID'
mysqlEngine = sqlalchemy.create_engine(con)
The error I got was:
(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)") (Background on this error at: http://sqlalche.me/e/e3q8)
You need to make sure you are using the correct /cloudsql/<INSTANCE_CONNECTION_NAME> (This is in the format <PROJECT_ID>:<REGION>:<INSTANCE_ID>). This should be all that's needed if your Cloud SQL instance is in the same project and region as your Function.
The GCF docs also strongly recommend limiting your pool to a single connection. This means you should set both pool_size=1 and max_overflow=0 in your engine settings.
If you would like to see an example of how to set these settings, check out this sample application on Github.
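For reference, here is a minimal sketch of an engine configured this way; the credentials and instance connection name below are placeholders, not values taken from the question:

import sqlalchemy

# Placeholders -- substitute your own credentials and instance connection name.
con = (
    "mysql+pymysql://USER:PASSWORD@/MY_DB"
    "?unix_socket=/cloudsql/PROJECT_ID:REGION:INSTANCE_ID"
)

# The GCF docs recommend a single connection per function instance,
# hence pool_size=1 and max_overflow=0.
mysqlEngine = sqlalchemy.create_engine(con, pool_size=1, max_overflow=0)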
I believe that your problem is with the connection name, represented by <PROJECT_ID>:<REGION>:<INSTANCE_ID> at the end of the con string variable, which, by the way, should be quoted:
con = 'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE_ID>'
Check that you are writing it correctly with this command:
gcloud sql instances describe <INSTANCE_ID> | grep connectionName
If this is not the case, keep in mind these considerations present in the Cloud Functions official documentation:
First Generation MySQL instances must be in the same region as your Cloud Function. Second Generation MySQL instances as well as PostgreSQL instances work with Cloud Functions in any region.
Your Cloud Function has access to all Cloud SQL instances in your project. You can access Second Generation MySQL instances as well as PostgreSQL instances in other projects if your Cloud Function's service account (listed on the Cloud Function's General tab in the GCP Console) is added as a member in IAM on the project with the Cloud SQL instance(s) with the Cloud SQL Client role.
After a long thread with Google Support, we found the cause: we simply had to enable public access to the Cloud SQL instance, without any firewall rule. It is undocumented and can drive you crazy, but the support team's silver bullet is to say: it is in beta!
I was having this issue. Service account was correct, had the correct permissions, same exact connection string as in my App Engine application. Still got this in the logs.
dial unix /cloudsql/project:region:instance connect: no such file or directory
Switching from a 2nd generation Cloud Function to a 1st generation one solved it. I didn't see it documented anywhere that 2nd generation functions couldn't connect to Cloud SQL instances.
Moving this question from DevOps Stack Exchange where it got only 5 views in 2 days:
I would like to query an Azure Database for MySQL Single Server.
I normally interact with this database using a universal database tool (dBeaver) installed onto an Azure VM. Now I would like to interact with this database using Python from outside Azure. Ultimately I would like to write an API (FastAPI) allowing multiple users to connect to the database.
I ran a simple test from a Jupyter notebook, using SQLAlchemy as my ORM and specifying the pem certificate as a connection argument:
import pandas as pd
from sqlalchemy import create_engine
cnx = create_engine('mysql://XXX', connect_args={"ssl": {"ssl_ca": "mycertificate.pem"}})
I then tried reading data from a specific table (e.g. mytable):
df = pd.read_sql('SELECT * FROM mytable', cnx)
Alas I ran into the following error:
'Client with IP address 'XX.XX.XXX.XXX' is not allowed to connect to this MySQL server'.
According to my colleagues, a way to fix this issue would be to whitelist my IP address.
While this may be an option for a couple of users with static IP addresses I am not sure whether it is a valid solution in the long run.
Is there a better way to access an Azure Database for MySQL Single Server from outside Azure?
As mentioned in the comments, you need to whitelist the IP address range(s) in the Azure portal for your MySQL database resource. This is a well-accepted and secure approach.
Okay, so I have an Azure Cosmos DB subscription where I have created a MongoDB resource. Now, when I use the Python SDK to connect to it, I get error 104, connection reset by peer.
I am not sure what the issue is.
I am using the endpoint with SSL set to true and the primary key.
Code:
endpoint = "http://XXX.mongo.cosmos.azure.com:10255/?ssl=true"
key = 'xxxxxxxxxxxxxxxx'
# <create_cosmos_client>
client = CosmosClient(endpoint, key)
When choosing the MongoDB API, you must use a native MongoDB SDK (in your case, pymongo); the wire protocol is MongoDB's, so operations are performed exactly as they would be against any MongoDB server.
Your code is attempting to use the Cosmos DB SDK, which is specific to, and will only work with, the Core (SQL) API.
If you look in the portal blade for your MongoDB-API instance, you'll see examples under the Quick Start tab, each of which uses a MongoDB SDK (or the mongo shell). The same goes for the Connection Strings tab, which shows native MongoDB connection strings (as well as the separate parts of the connection string).
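As an illustration only (not copied from the portal), a minimal pymongo sketch might look like the one below; the account name, key, database, and collection names are placeholders, and the real connection string should be taken from the Connection Strings tab:

from pymongo import MongoClient

# Placeholder connection string -- copy the real one from the portal's
# Connection Strings tab for your Cosmos DB (MongoDB API) account.
conn_str = ("mongodb://ACCOUNT_NAME:PRIMARY_KEY@ACCOUNT_NAME.mongo.cosmos.azure.com:10255/"
            "?ssl=true&retrywrites=false")

client = MongoClient(conn_str)
db = client["mydatabase"]          # hypothetical database name
collection = db["mycollection"]    # hypothetical collection name
print(collection.find_one())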
I have a working Python 3 app on Google App Engine (flexible) connecting to Postgres on Google Cloud SQL. I got it working by following the docs, which at some point have you connecting via psycopg2 to a database specifier like this
postgresql://postgres:password@/dbname?host=/cloudsql/project:us-central1:dbhost
I'm trying to understand how the hostname /cloudsql/project:us-central1:dbhost works. Google calls this an "instance connection string" and it sure looks like it's playing the role of a regular hostname. Only with the / and : it's not a valid name for a DNS resolver.
Is Google's flexible Python environment modified somehow to support special hostnames like this? It looks like stock Python 3 and psycopg2, but maybe it's modified somehow? If so, are those modifications documented anywhere? The docs for the Python runtime don't have anything about this.
It turns out that host=/cloudsql/project:us-central1:dbhost specifies the name of a directory in the filesystem. Inside that directory is a file named .s.PGSQL.5432 which is a Unix domain socket. An instance of Cloud SQL Proxy is listening on that Unix domain socket and forwarding database requests via TCP to the actual database server. Despite the DB connection string being host= it actually names a directory with a Unix socket in it; that's a feature of libpq, the Postgres connection library.
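To make that concrete, here is a minimal sketch (credentials and instance connection name are placeholders) showing that the same host value can be used both through SQLAlchemy and through psycopg2 directly, because libpq treats a host value starting with / as a Unix socket directory:

import sqlalchemy
import psycopg2

# Placeholder values -- substitute your own.
socket_dir = "/cloudsql/project:us-central1:dbhost"

# Via SQLAlchemy: the host query parameter is passed through to libpq.
engine = sqlalchemy.create_engine(
    "postgresql://postgres:password@/dbname?host=" + socket_dir)

# Via psycopg2 directly: libpq looks for <socket_dir>/.s.PGSQL.5432.
conn = psycopg2.connect(host=socket_dir, dbname="dbname",
                        user="postgres", password="password")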
Many thanks to Vadim for answering my question quickly in a comment, just writing up a more complete answer here for future readers.
I am attempting to connect to a Google Cloud SQL instance in Python and have gone through Google's tutorial: https://cloud.google.com/appengine/docs/python/cloud-sql/
I am essentially cloning Google's tutorial code, and for some reason this line isn't working right for me:
if (os.getenv('SERVER_SOFTWARE') and
os.getenv('SERVER_SOFTWARE').startswith('Google App Engine/')):
This if statement is not being entered and I'm not sure why - it then defaults to accessing a local database based on the else statement. How is the SERVER_SOFTWARE environment variable set? I'm new to all of this, but basically, because that is not getting set, I am not able to access my Google Cloud SQL instance. How do I make sure this if statement is entered?
SERVER_SOFTWARE is an environment variable that is automatically set by GAE. It could either be something like Google App Engine/x.x.xx when deployed or Development/x.x when running locally.
Basically the section of the code you're referring to checks whether your app is deployed and is running on GAE servers and if so - it will connect to a Google Cloud SQL instance, otherwise, if your app is running locally, it will attempt to connect to a local mysql instance.
It's done that way because you wouldn't normally want to mess with your production (deployed) data while developing & testing locally, as many things could go wrong.
Since you're stating that the if statement is not being entered, it's safe to assume that you are trying to run the program locally but are expecting it to connect to a Google Cloud SQL instance. For that, the next few lines in the example you provided explain how to do it:
db = MySQLdb.connect(host='127.0.0.1', port=3306, db='guestbook', user='root', charset='utf8')
# Alternatively, connect to a Google Cloud SQL instance using:
# db = MySQLdb.connect(host='ip-address-of-google-cloud-sql-instance', port=3306, user='root', charset='utf8')
So what you need to do is comment out the first line (the one where it attempts to connect to a localhost MySQL server) and uncomment the one where it connects to the Google Cloud SQL instance (note that you will have to update several parameters to reflect the configuration that you have, i.e. the host parameter and possibly others).
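Putting the two branches together, a sketch of the pattern the tutorial is built around might look like this; the instance connection name and database settings are placeholders:

import os
import MySQLdb

if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
    # Deployed on App Engine: connect through the Cloud SQL unix socket.
    db = MySQLdb.connect(unix_socket='/cloudsql/your-project-id:your-instance-name',
                         db='guestbook', user='root', charset='utf8')
else:
    # Running locally: connect to a local MySQL server (or directly to the
    # instance's IP address, as in the commented-out line above).
    db = MySQLdb.connect(host='127.0.0.1', port=3306, db='guestbook',
                         user='root', charset='utf8')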
I've looked over Google Cloud SQL's documentation and various searches, but I can't find out whether it is possible to use SQLAlchemy with Google Cloud SQL, and if so, what the connection URI should be.
I'm looking to use the Flask-SQLAlchemy extension and need the connection string like so:
mysql://username:password@server/db
I saw the Django example, but it appears the configuration uses a different style than the connection string. https://developers.google.com/cloud-sql/docs/django
Google Cloud SQL documentation:
https://developers.google.com/cloud-sql/docs/developers_guide_python
Update
Google Cloud SQL now supports direct access, so the MySQLdb dialect can now be used. The recommended connection via the mysql dialect is using the URL format:
mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>
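For example, a create_engine() call using this URL format might look like the following (the database, project, and instance names are placeholders):

from sqlalchemy import create_engine

# Placeholder database, project, and instance names.
engine = create_engine(
    'mysql+mysqldb://root@/mydb?unix_socket=/cloudsql/myproject:myinstance')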
mysql+gaerdbms has been deprecated in SQLAlchemy since version 1.0
I'm leaving the original answer below in case others still find it helpful.
For those who visit this question later (and don't want to read through all the comments), SQLAlchemy now supports Google Cloud SQL as of version 0.7.8 using the connection string / dialect (see: docs):
mysql+gaerdbms:///<dbname>
E.g.:
create_engine('mysql+gaerdbms:///mydb', connect_args={"instance":"myinstance"})
I have proposed an update to the mysql+gaerdbms:// dialect to support both of Google Cloud SQL's APIs (rdbms_apiproxy and rdbms_googleapi) for connecting to Cloud SQL from a non-Google App Engine production instance (e.g. your development workstation). The change will also modify the connection string slightly by including the project and instance as part of the string, and not require them to be passed separately via connect_args.
E.g.
mysql+gaerdbms:///<dbname>?instance=<project:instance>
This will also make it easier to use Cloud SQL with Flask-SQLAlchemy or other extension where you don't explicitly make the create_engine() call.
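For instance, with Flask-SQLAlchemy the proposed URL format could be set directly in the app config, with no connect_args needed; the database, project, and instance names below are placeholders:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder database, project, and instance names.
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+gaerdbms:///mydb?instance=myproject:myinstance'
db = SQLAlchemy(app)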
If you are having trouble connecting to Google Cloud SQL from your development workstation, you might want to take a look at my answer here - https://stackoverflow.com/a/14287158/191902.
Yes. If you find any bugs in SA+Cloud SQL, please let me know. I wrote the dialect code that was integrated into SQLAlchemy. There's a bit of silly business about how Cloud SQL bubbles up exceptions, so there might be some loose ends there.
For those who prefer PyMySQL over MySQLdb (which is suggested in the accepted answer), the SQLAlchemy connection strings are:
For Production
mysql+pymysql://<USER>:<PASSWORD>@/<DATABASE_NAME>?unix_socket=/cloudsql/<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
Please make sure to:
Add the SQL instance to your app.yaml:
beta_settings:
  cloud_sql_instances: <PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>
Enable the SQL Admin API as it seems to be necessary:
https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview
For Local Development
mysql+pymysql://<USER>:<PASSWORD>@localhost:3306/<DATABASE_NAME>
given that you started the Cloud SQL Proxy with:
cloud_sql_proxy -instances=<PUT-SQL-INSTANCE-CONNECTION-NAME-HERE>=tcp:3306
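Either way, the engine creation itself looks the same. Here is a minimal sketch; credentials and the instance connection name are placeholders, and checking GAE_ENV is used only as one way to detect whether the app is running deployed on App Engine:

import os
from sqlalchemy import create_engine

if os.getenv('GAE_ENV', '').startswith('standard'):
    # Deployed: connect through the unix socket provided by App Engine.
    uri = ('mysql+pymysql://USER:PASSWORD@/DATABASE_NAME'
           '?unix_socket=/cloudsql/PUT-SQL-INSTANCE-CONNECTION-NAME-HERE')
else:
    # Local development: connect through the Cloud SQL Proxy on localhost:3306.
    uri = 'mysql+pymysql://USER:PASSWORD@localhost:3306/DATABASE_NAME'

engine = create_engine(uri)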
It is doable, though I haven't used Flask at all, so I'm not sure about establishing the connection through that. I got it working through Pyramid and submitted a patch to SQLAlchemy (possibly to the wrong repo) here:
https://bitbucket.org/sqlalchemy/sqlalchemy/pull-request/2/added-a-dialect-for-google-app-engines
That has since been replaced and accepted into SQLAlchemy as
http://www.sqlalchemy.org/trac/ticket/2484
I don't think it's made its way to a release yet, though.
There are some issues with Google SQL throwing different exceptions so we had issues with things like deploying a database automatically. You also need to disable connection pooling using NullPool as mentioned in the second patch.
We've since moved to using the datastore through NDB, so I haven't followed the progress of these fixes for a while.
PostgreSQL, pg8000 and flask_sqlalchemy
Adding information in case someone is looking for how to use flask_sqlalchemy with PostgreSQL: using pg8000 as the driver, the working connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>
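A minimal Flask-SQLAlchemy sketch using that string might look like this; the credentials, database name, and socket path are placeholders, and passing unix_sock via connect_args is an assumption about how pg8000 is pointed at the Cloud SQL socket when running on App Engine:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder credentials and database name.
# (On newer SQLAlchemy versions the dialect name is 'postgresql+pg8000'.)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres+pg8000://db_user:db_pass@/db_name'
# Assumption: on App Engine, pg8000 reaches the Cloud SQL unix socket via connect_args.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'connect_args': {'unix_sock': '/cloudsql/PROJECT:REGION:INSTANCE/.s.PGSQL.5432'},
}
db = SQLAlchemy(app)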