Moving this question from DevOps Stack Exchange where it got only 5 views in 2 days:
I would like to query an Azure Database for MySQL Single Server.
I normally interact with this database using a universal database tool (DBeaver) installed on an Azure VM. Now I would like to interact with this database using Python from outside Azure. Ultimately I would like to write an API (FastAPI) allowing multiple users to connect to the database.
I ran a simple test from a Jupyter notebook, using SQLAlchemy as my ORM and specifying the pem certificate as a connection argument:
import pandas as pd
from sqlalchemy import create_engine
cnx = create_engine('mysql://XXX', connect_args={"ssl": {"ssl_ca": "mycertificate.pem"}})
I then tried reading data from a specific table (e.g. mytable):
df = pd.read_sql('SELECT * FROM mytable', cnx)
Alas I ran into the following error:
'Client with IP address 'XX.XX.XXX.XXX' is not allowed to connect to this MySQL server'.
According to my colleagues, a way to fix this issue would be to whitelist my IP address.
While this may be an option for a couple of users with static IP addresses, I am not sure whether it is a valid solution in the long run.
Is there a better way to access an Azure Database for MySQL Single Server from outside Azure?
As mentioned in the comments, you need to whitelist the IP address range(s) in the Azure portal for your MySQL database resource. This is a well-accepted and secure approach.
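Once your client IP range is allowed, the original SQLAlchemy snippet should work essentially unchanged. As a rough sketch (the host, credentials and database below are placeholders, and it assumes the PyMySQL driver; note that Single Server logins take the user@servername form):
import pandas as pd
from sqlalchemy import create_engine

# Placeholder host/credentials; the username is URL-encoded because it contains '@'
engine = create_engine(
    "mysql+pymysql://myuser%40myserver:mypassword@myserver.mysql.database.azure.com/mydb",
    connect_args={"ssl": {"ca": "mycertificate.pem"}},  # PyMySQL expects "ca" as the CA-file key
)
df = pd.read_sql("SELECT * FROM mytable", engine)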
Related
I am trying to connect to a SQL Server instance using SQLAlchemy through Python, however I require the SQL connection to come from a specific Windows AD account that isn't the one I run VSCode with. Is there any way to modify the connection string below to explicitly feed SQL Server a Windows login that isn't the same login I am using to run VSCode?
(I am able to connect with this connection string if I "Run as a different User" in VSCode, however the AD accounts with SQL access do not have shared drive access and therefore cannot access shared files, so this won't scale long-term.)
import urllib
from sqlalchemy import create_engine
# URL-encode the raw ODBC connection string so it can be passed through as a single parameter
params = urllib.parse.quote_plus('DRIVER={SQL Server};SERVER={server};DATABASE={database};Trusted_Connection=Yes')
# Hand the encoded string to the SQLAlchemy pyodbc dialect via odbc_connect
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={params}')
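For reference, one way to confirm which Windows login SQL Server actually sees on a connection is to ask it directly. A minimal sketch, reusing the string above with placeholder server and database names:
import urllib
from sqlalchemy import create_engine, text

params = urllib.parse.quote_plus('DRIVER={SQL Server};SERVER=myserver;DATABASE=mydatabase;Trusted_Connection=Yes')
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={params}')

with engine.connect() as conn:
    # SUSER_SNAME() returns the DOMAIN\user the session is authenticated as
    print(conn.execute(text('SELECT SUSER_SNAME()')).scalar())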
I am trying to connect Snowflake to Python using my Snowflake user credentials, but I get an error while executing (I have cross-checked my credentials and everything is correct). I later tried my colleague's user credentials with the same code, changing only the credentials, and it connected to his account with no error. Can anyone help me figure out where the problem might be?
There are a couple of URLs that can be used to access your Snowflake account. I recommend SNOWFLAKE_DEPLOYMENT_REGIONLESS.
You can run this query on your account to find them:
-- account URLs
select t.value:type::varchar as type,
t.value:host::varchar as host,
t.value:port as port
from table(flatten(input => parse_json(system$whitelist()))) as t
where t.value:type::varchar like 'SNOWFLAKE%';
There are several factors that could be impacting whether or not you can connect, including network policies or firewalls.
You can use SnowCD (the Snowflake Connectivity Diagnostic Tool) to rule out any issues connecting to Snowflake from your machine.
If you can connect from your local machine but are attempting to connect via Python from a remote machine, the issue is very likely that a network policy (a Snowflake-defined firewall) has been set by your Snowflake admin to restrict the IP addresses that can connect to Snowflake.
If SnowCD reports no errors and network policies are ruled out, reach out to Snowflake support for further investigation.
The account identifier can be written in a few forms:
organization_name-account_name (for most URLs and other general purpose usage)
organization_name_account_name (for scenarios/features where hyphens are not supported in URLs)
organization_name.account_name (for SQL commands and operations)
Where:
organization_name is the name of your Snowflake organization.
account_name is the unique name of your account within your organization.
If for some reason hyphens are not supported in the URL, you can replace them with underscores.
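As a rough illustration (assuming the snowflake-connector-python package; the names below are placeholders), the organization_name-account_name identifier is what goes into the connector's account parameter:
import snowflake.connector

# Placeholder credentials; account takes the organization_name-account_name form
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="MY_USER",
    password="MY_PASSWORD",
)
print(conn.cursor().execute("select current_account()").fetchone())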
I am connecting to a database on Azure using ActiveDirectoryPassword authentication.
cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+host+';UID='+user+';PWD='+password+';Authentication=ActiveDirectoryPassword')
It is working. The issue is that using this connection string I do not specify the DB; it just connects me to master. How can I switch to the DB I need? I have tried different connection strings (with the database specified) but only this one works with ActiveDirectoryPassword.
You could try the below:
pyodbc.connect('DRIVER='+driver+';SERVER='+host+';DATABASE='+database+';UID='+user+';PWD='+password+';Authentication=ActiveDirectoryPassword')
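A fuller sketch of the same idea (driver, server, credentials and database below are placeholders), with a quick check that the session really landed in the requested database rather than master:
import pyodbc

driver = '{ODBC Driver 17 for SQL Server}'
host = 'myserver.database.windows.net'
database = 'mydb'
user = 'user@mydomain.com'
password = 'mypassword'

cnxn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + host + ';DATABASE=' + database + ';UID=' + user + ';PWD=' + password + ';Authentication=ActiveDirectoryPassword')
# DB_NAME() reports the database the session is using (should print mydb, not master)
print(cnxn.cursor().execute('SELECT DB_NAME()').fetchone()[0])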
I read all the documentation related to connecting to MySQL hosted in Cloud SQL from GCF and still can't connect. I also tried all the hints in the SQLAlchemy documentation related to this.
I am using the following connection string:
import sqlalchemy
con = 'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/Proj_ID:Zone:MySQL_Instance_ID'
mysqlEngine = sqlalchemy.create_engine(con)
The error I got was:
(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)") (Background on this error at: http://sqlalche.me/e/e3q8)
You need to make sure you are using the correct /cloudsql/<INSTANCE_CONNECTION_NAME> (This is in the format <PROJECT_ID>:<REGION>:<INSTANCE_ID>). This should be all that's needed if your Cloud SQL instance is in the same project and region as your Function.
The GCF docs also strongly recommend limiting your pool to a single connection. This means you should set both pool_size=1 and max_overflow=0 in your engine settings.
If you would like to see an example of how to set these settings, check out this sample application on Github.
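Putting those two points together, a minimal sketch (with a placeholder user, password, database and instance connection name) might look like this:
import sqlalchemy

# unix_socket points at /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE_ID>;
# pool_size=1 and max_overflow=0 keep the function to a single connection
engine = sqlalchemy.create_engine(
    'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/PROJECT_ID:REGION:INSTANCE_ID',
    pool_size=1,
    max_overflow=0,
)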
I believe that your problem is with the Connection_name represented by <PROJECT_ID>:<REGION>:<INSTANCE_ID> at the end of the con string variable.
Which by the way should be quoted:
con = 'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE_ID>'
Check if you are writing it right with this command:
gcloud sql instances describe <INSTANCE_ID> | grep connectionName
If this is not the case, keep in mind these considerations present in the Cloud Functions official documentation:
First Generation MySQL instances must be in the same region as your Cloud Function. Second Generation MySQL instances as well as PostgreSQL instances work with Cloud Functions in any region.
Your Cloud Function has access to all Cloud SQL instances in your project. You can access Second Generation MySQL instances as well as PostgreSQL instances in other projects if your Cloud Function's service account (listed on the Cloud Function's General tab in the GCP Console) is added as a member in IAM on the project with the Cloud SQL instance(s) with the Cloud SQL Client role.
After a long thread with Google Support, we found the reason: we simply had to enable public access to Cloud SQL, without any firewall rule. It is undocumented and can drive you crazy, but the support team's silver bullet is to say: it is in beta!
I was having this issue. The service account was correct and had the correct permissions, and the connection string was exactly the same as in my App Engine application. I still got this in the logs:
dial unix /cloudsql/project:region:instance connect: no such file or directory
Switching from a 2nd generation Cloud Function to a 1st generation one solved it. I didn't see it documented anywhere that 2nd generation functions couldn't connect to Cloud SQL instances.
I have a working Python 3 app on Google App Engine (flexible) connecting to Postgres on Google Cloud SQL. I got it working by following the docs, which at some point have you connecting via psycopg2 to a database specifier like this:
postgresql://postgres:password@/dbname?host=/cloudsql/project:us-central1:dbhost
I'm trying to understand how the hostname /cloudsql/project:us-central1:dbhost works. Google calls this an "instance connection string" and it sure looks like it's playing the role of a regular hostname. Except that, with the / and : characters, it's not a valid name for a DNS resolver.
Is Google's flexible Python environment modified somehow to support special hostnames like this? It looks like stock Python 3 and psycopg2, but maybe it's modified somehow? If so, are those modifications documented anywhere? The docs for the Python runtime don't have anything about this.
It turns out that host=/cloudsql/project:us-central1:dbhost specifies the name of a directory in the filesystem. Inside that directory is a file named .s.PGSQL.5432 which is a Unix domain socket. An instance of Cloud SQL Proxy is listening on that Unix domain socket and forwarding database requests via TCP to the actual database server. Despite the DB connection string being host= it actually names a directory with a Unix socket in it; that's a feature of libpq, the Postgres connection library.
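As a sketch (placeholder names throughout), a plain psycopg2 connection over that socket is just an ordinary keyword connection with host pointing at the directory:
import psycopg2

conn = psycopg2.connect(
    host='/cloudsql/project:us-central1:dbhost',  # directory containing the .s.PGSQL.5432 socket
    dbname='dbname',
    user='postgres',
    password='password',
)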
Many thanks to Vadim for answering my question quickly in a comment, just writing up a more complete answer here for future readers.