Is it possible to connect to Microsoft SQL Server (on-prem) using pyodbc in Azure Notebooks? Connecting to an Azure SQL DB is possible, but I can't find any documentation on the on-prem scenario.
Looking for relevant Python code to be used in a GCP AI Platform notebook to query GCP Cloud SQL (specifically PostgreSQL). Unfortunately, I haven't found any relevant resources/tutorials, either from GCP officially or from unaffiliated sources.
I work on the Developer Relations team for Cloud SQL at Google. The quickest and simplest way to connect to a Cloud SQL instance from an AI Platform Notebook is to use our newly released Cloud SQL Python Connector. To install, run pip install cloud-sql-python-connector[pg8000]. This also installs the pg8000 driver, which is used to connect to Postgres.
Then, you can use the following code to get a connection object which you can use to run queries:
from google.cloud.sql.connector import Connector

# Create a Connector and open a DB-API (pg8000) connection to the instance
connector = Connector()
conn = connector.connect(
    "project_name:region-name:instance-name",  # your instance connection name
    "pg8000",
    user="postgres",
    password="password",
    db="db_name"
)
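For example, assuming the connection above succeeds, you can run a query through the standard DB-API cursor interface (a minimal sketch; the table name is a placeholder):

# Run a query over the pg8000 DB-API connection
cursor = conn.cursor()
cursor.execute("SELECT * FROM my_table LIMIT 5")  # "my_table" is a placeholder
rows = cursor.fetchall()
print(rows)
cursor.close()
conn.close()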
An alternative, which is language-independent, is to download and run the Cloud SQL Auth Proxy in a separate terminal from your notebook.
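Assuming the v1 proxy binary and your instance connection name, the invocation looks roughly like ./cloud_sql_proxy -instances=project_name:region-name:instance-name=tcp:5432, after which your notebook can connect to localhost:5432 with any standard Postgres driver.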
For either of these solutions, make sure to enable the Cloud SQL Admin API first.
I am trying to execute a Python script from RDS SQL Server (version 15), but I didn't find any documentation on this in AWS. Is this possible?
Unfortunately, that is not possible as of now. RDS for SQL Server is a managed Relational Database Service; it does not allow you to execute any program on the RDS instance, except for the T-SQL programmability stored within your SQL Server database (triggers, stored procedures, etc.).
I'm trying to connect to Cloud SQL running PostgreSQL 11 from a Cloud Function Python script. Ideally, I want to write to the DB with a simple pandas.DataFrame.to_sql(), which requires an engine from sqlalchemy.create_engine() to connect.
This is my current attempt (I've tried both with/without /.s.PGSQL.5432 suffix):
engine = sqlalchemy.create_engine('postgresql+psycopg2://USERNAME:PASSWORD@/DB_NAME?host=/cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432')
engine.connect()
I also tried the code sample provided by Google, which uses psycopg2 directly, but that didn't work either:
Error: function crashed. Details:
could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/cloudsql/vida-production:us-central1:vida-ops/.s.PGSQL.5432"?
Edit:
Thanks @kurtisvg for pointing this out. Since the Cloud Function is trying to connect to a Cloud SQL instance in a different project, I have to add the Cloud Function's service account to the IAM of the Cloud SQL project.
After setting up IAM, the Python code I use to connect the sqlalchemy engine to Cloud SQL is this:
# Don't add the "/.s.PGSQL.5432" suffix, because the library adds it back automatically
engine = sqlalchemy.create_engine('postgresql+psycopg2://USERNAME:PASSWORD@/DB_NAME?host=/cloudsql/INSTANCE_CONNECTION_NAME')
engine.connect()
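With the engine in place, the pandas.DataFrame.to_sql() call from the question works directly. A minimal sketch (the DataFrame contents and table name are placeholders):

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
# Write the DataFrame to a table in Cloud SQL through the engine above
df.to_sql("my_table", engine, if_exists="replace", index=False)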
Make sure to check the following things:
Have the Cloud SQL Admin API enabled
Make sure the Functions default service account has the Cloud SQL Client IAM role
If you are connecting between two projects, make sure to enable the Admin API and grant permissions in both projects to the service account that runs the function you are connecting from (a gcloud sketch follows below).
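As a sketch of that cross-project grant (assuming a 1st-gen Cloud Function, whose default service account is FUNCTION_PROJECT_ID@appspot.gserviceaccount.com; both project IDs are placeholders):

gcloud projects add-iam-policy-binding CLOUD_SQL_PROJECT_ID \
    --member="serviceAccount:FUNCTION_PROJECT_ID@appspot.gserviceaccount.com" \
    --role="roles/cloudsql.client"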
Is it possible for a Flask web app deployed to the Azure Web App service to connect to Databricks? The app should have some dropdown menus and send a command to Databricks, which should then query the data from the data lake.
As I know, there are two ways to connect to Azure Databricks from Python.
Refer to the official document Connect to Azure Databricks from Excel, Python, or R: you can download and install the Simba Spark ODBC Driver and pyodbc, then follow the section Connect from Python to retrieve the data from Azure Databricks. However, I don't think you can follow the Simba official document About the Simba Spark ODBC Driver for Windows to install and configure the driver on Azure WebApp for Windows, but you can try to follow the document About the Simba Spark ODBC Driver for Unix/Linux to do so on Azure WebApp for Linux.
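To illustrate the pyodbc route, a DSN-less connection along these lines should work once the Simba driver is installed; the host, HTTP path, and token below are placeholders taken from your cluster's JDBC/ODBC settings:

import pyodbc

conn = pyodbc.connect(
    "Driver=Simba Spark ODBC Driver;"  # name as registered in odbcinst.ini (Linux) or the ODBC manager (Windows)
    "Host=HOSTNAME.azuredatabricks.net;"  # placeholder workspace hostname
    "Port=443;"
    "SSL=1;"
    "ThriftTransport=2;"  # HTTP transport
    "HTTPPath=sql/protocolv1/o/ORG_ID/CLUSTER_ID;"  # placeholder cluster HTTP path
    "AuthMech=3;"  # username/password; Databricks uses the literal user "token"
    "UID=token;"
    "PWD=PERSONAL_ACCESS_TOKEN",  # placeholder personal access token
    autocommit=True,
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())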
Alternatively, there is a Python package named databricks-connect; as its description says, you can try to use it to connect custom applications like Flask to Azure Databricks clusters and run Spark code.
For more details on its usage, you can refer to the official document Databricks Connect and the blog post Databricks-Connect - FINALLY!.
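Once it is installed and configured (pip install databricks-connect, then databricks-connect configure to supply the workspace URL, token, and cluster ID), the Flask app obtains a Spark session as usual and the work runs on the remote cluster. A minimal sketch:

from pyspark.sql import SparkSession

# With databricks-connect configured, this session is backed by the remote Databricks cluster
spark = SparkSession.builder.getOrCreate()

df = spark.range(10)  # simple sanity check executed on the cluster
print(df.count())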
Hope it helps.
How do I connect MS SQL Server using Windows Authentication using Airflow Web UI or modifying existing SQLAlchemy/pymssql Python modules?
I have SQL Server SSIS packages which can't use that option (see https://joethebusinessintelligenceguy.wordpress.com/2013/08/14/ssis-2012-using-sql-authentication-with-dont-save-sensitive-successfully/), hence I'm trying to start the SSIS steps using Windows Authentication.
I have found the following links showing that support indeed exists, but I don't know how to implement the same on an Airflow installation (which file to modify, or where to create a new conn_id):
How do I connect to SQL Server via sqlalchemy using Windows Authentication?
Connecting to MS SQL Server with Windows Authentication using Python?
https://www.mail-archive.com/sqlalchemy@googlegroups.com/msg13620.html
To use Windows Authentication, Apache Airflow would need the same Windows access as your SQL Server, and that's strongly discouraged.
So it is probably better to use a username and password to connect to your SQL database.
Apache Airflow's documentation has information about creating SQL Server connections in this link.
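As a sketch of the username/password approach on Airflow 2.x with the Microsoft SQL Server provider installed (the connection ID, credentials, and query are placeholders):

# Define the connection as a URI before starting Airflow, e.g. in the environment:
#   export AIRFLOW_CONN_MY_MSSQL='mssql://username:password@hostname:1433/dbname'

from airflow.providers.microsoft.mssql.hooks.mssql import MsSqlHook

hook = MsSqlHook(mssql_conn_id="my_mssql")  # matches AIRFLOW_CONN_MY_MSSQL above
records = hook.get_records("SELECT @@VERSION")
print(records)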