link snowflake to python - python

I am trying to connect Snowflake to Python using my own Snowflake user credentials, but it throws an error when executing (I have cross-checked my credentials and everything is correct). I then tried my colleague's user credentials with the same code (only the credentials changed) and it connects to his account with no error. Can anyone help me figure out where the problem is? Error details:

There are a couple of URLs that can be used to access your Snowflake account; I recommend the SNOWFLAKE_DEPLOYMENT_REGIONLESS one.
You can run this query on your account to find them:
-- account URLs
select t.value:type::varchar as type,
t.value:host::varchar as host,
t.value:port as port
from table(flatten(input => parse_json(system$whitelist()))) as t
where t.value:type::varchar like 'SNOWFLAKE%';
There are several factors that could be impacting whether or not you can connect, including network policies or firewalls.
You can use SnowCD (Connectivity Diagnostic Tool) to rule out any issues connecting to Snowflake from your machine.
If you can connect from your local machine but are attempting to connect via Python from a remote machine, the issue is very likely that a network policy (a Snowflake-defined firewall) has been set by your Snowflake admin to restrict the IP addresses that can connect to Snowflake.
If SnowCD reports no errors and network policies are ruled out, reach out to Snowflake support for further investigation.
If for some reason hyphens are not supported in the URL, you can replace them with underscores. The account identifier can be written in these forms:
organization_name-account_name (for most URLs and other general purpose usage)
organization_name_account_name (for scenarios/features where hyphens are not supported in URLs)
organization_name.account_name (for SQL commands and operations)
Where:
organization_name is the name of your Snowflake organization.
account_name is the unique name of your account within your organization.
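As a quick sanity check from Python, here is a minimal sketch of passing that account identifier to the Snowflake Python connector; the organization, account, and credential values are placeholders, not your real ones:

import snowflake.connector

# Placeholder credentials and identifiers -- substitute your own values
conn = snowflake.connector.connect(
    user="MY_USER",
    password="MY_PASSWORD",
    account="myorg-myaccount",  # organization_name-account_name form
)
cur = conn.cursor()
cur.execute("select current_account(), current_region()")
print(cur.fetchone())
conn.close()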

Related

Accessing an Azure Database for MySQL Single Server from outside Azure

Moving this question from DevOps Stack Exchange where it got only 5 views in 2 days:
I would like to query an Azure Database for MySQL Single Server.
I normally interact with this database using a universal database tool (dBeaver) installed onto an Azure VM. Now I would like to interact with this database using Python from outside Azure. Ultimately I would like to write an API (FastAPI) allowing multiple users to connect to the database.
I ran a simple test from a Jupyter notebook, using SQLAlchemy as my ORM and specifying the pem certificate as a connection argument:
import pandas as pd
from sqlalchemy import create_engine
cnx = create_engine('mysql://XXX', connect_args={"ssl": {"ssl_ca": "mycertificate.pem"}})
I then tried reading data from a specific table (e.g. mytable):
df = pd.read_sql('SELECT * FROM mytable', cnx)
Alas I ran into the following error:
Client with IP address 'XX.XX.XXX.XXX' is not allowed to connect to this MySQL server.
According to my colleagues, a way to fix this issue would be to whitelist my IP address.
While this may be an option for a couple of users with static IP addresses I am not sure whether it is a valid solution in the long run.
Is there a better way to access an Azure Database for MySQL Single Server from outside Azure?
As mentioned in the comments, you need to whitelist the IP address range(s) in the Azure portal for your MySQL database resource. This is a well-accepted and secure approach.
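If you prefer scripting it over clicking through the portal, a firewall rule for a Single Server instance can be added with the Azure CLI roughly like this; the resource group, server name, and IP address below are placeholders:

az mysql server firewall-rule create \
  --resource-group my-resource-group \
  --server-name my-mysql-server \
  --name AllowMyClientIP \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10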

AWS Lambda to RDS PostgreSQL

Hello fellow AWS contributors, I'm currently working on a project to set up an example of connecting a Lambda function to our PostgreSQL database hosted on RDS. I tested my Python + SQL code locally (in VS Code and DBeaver) and it works perfectly fine using only basic credentials (host, dbname, username, password). However, when I paste the code into a Lambda function, it gives me all sorts of errors. I followed this template and modified my code to retrieve the credentials from Secrets Manager instead.
I'm currently using boto3, psycopg2, and Secrets Manager to get credentials and connect to the database.
List of errors I'm getting:
server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request
could not connect to server: Connection timed out. Is the server running on host “db endpoint” and accepting TCP/IP connections on port 5432?
FATAL: no pg_hba.conf entry for host “ip:xxx”, user "userXXX", database "dbXXX", SSL off
Things I tried:
RDS and Lambda are in the same VPC, same subnet, same security group.
My IP address is included in the inbound rules.
The Lambda function is set to run for up to 15 min, and it always stops before it even hits 15 min.
I tried both the database endpoint and the database proxy endpoint; neither works.
It doesn't really make sense to me that when I run the code locally I only need to provide the host, dbname, username, and password, and I'm able to write all the queries and functions I want. But when I put the code in a Lambda function, it requires Secrets Manager, VPC security groups, SSL, a proxy, TCP/IP rules, etc. Can someone explain why the requirements differ between running it locally and on Lambda?
Finally, does anyone know what could be wrong in my setup? I'm happy to provide any information related to this; any general direction to look into would be really helpful. Thanks!
Following the directions at the link below to build a specific psycopg2 package and also verifying the VPC subnets and security groups were configured correctly solved this issue for me.
I built a package for PostgreSQL 10.20 using psycopg2 v2.9.3 for Python 3.7.10 running on an Amazon Linux 2 AMI instance. The only change to the directions I had to make was to put the psycopg2 directory inside a python directory (i.e. "python/psycopg2/") before zipping it -- the import psycopg2 statement in the Lambda function failed until I did that.
https://kalyanv.com/2019/06/10/using-postgresql-with-python-on-aws-lambda.html
This is the VPC scenario I'm using. The Lambda function is executing inside the Public Subnet and associated Security Group. Inbound rules for the Private Subnet Security Group only allow TCP connections to 5432 for the Public Subnet Security Group.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html#USER_VPC.Scenario1
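For reference, a minimal sketch of a handler that fits this setup, assuming the psycopg2 package is bundled as described above and credentials are exposed as environment variables (the variable names below are placeholders; swap in a Secrets Manager lookup if that is your setup):

import os
import psycopg2

def lambda_handler(event, context):
    # Connect using placeholder environment variables; fail fast rather than
    # hanging until the Lambda timeout if the VPC/security-group path is wrong.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        port=5432,
        connect_timeout=10,
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version()")
        row = cur.fetchone()
    conn.close()
    return {"statusCode": 200, "body": row[0]}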

How to connect python to Oracle Application Express

I would like some help with the parameters passed in the connection string of a .py file trying to connect to my Oracle APEX workspace database:
connection = cx_Oracle.connect("user", "password", "dbhost.example.com/dbinstance", encoding="UTF-8")
On the login page at "apex.oracle.com", we have to provide the WORKSPACE, USERNAME, and PASSWORD information.
Can I assume that the "user" parameter is equal to the USERNAME info, the "password" parameter is equal to the PASSWORD info and the "dbinstance" parameter is equal to the WORKSPACE info?
And what about the hostname? What is expected as that parameter? How do I find it?
Thank you very much for any support.
Those parameters are not equivalent. An APEX workspace is a logical construct that exists only within APEX; it does not correspond to a physical database instance. Username and password do not necessarily correspond to database users, as APEX is capable of multiple methods of authentication.
APEX itself runs entirely within a single physical database. An APEX instance supports multiple logical workspaces, each of which may have its own independent APEX user accounts that often (usually) do not correspond to database users at all. APEX-based apps may have entirely separate authentication methods of their own, too, and generally do not use the same users defined for the APEX workspaces.
When an APEX application does connect to a database to run, it connects as a proxy user using an otherwise unprivileged database account like APEX_PUBLIC_USER.
If you want to connect Python to APEX, you would have to connect like you would any other web app: through the URL using whatever credentials are appropriate to the user interface and then parsing the HTML output, or through an APEX/ORDS REST API (that you would have to first build and deploy).
If you want to connect to the database behind APEX, then you would need an appropriately provisioned database (not APEX) account, credentials and connectivity information provided by the database administrator.
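If the DBA does provision such an account, the cx_Oracle connection would look roughly like the sketch below; the host, port, service name, and credentials are placeholders supplied by the DBA, not values you can read off the APEX login page:

import cx_Oracle

# Placeholder connection details supplied by the database administrator
dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="ORCLPDB1")
connection = cx_Oracle.connect(user="db_user", password="db_password",
                               dsn=dsn, encoding="UTF-8")
with connection.cursor() as cursor:
    cursor.execute("select sysdate from dual")
    print(cursor.fetchone())
connection.close()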

Connecting to Cloud SQL from Google Cloud Function using Python and SQLAlchemy

I read all the documentation related to connecting to MySQL hosted in Cloud SQL from GCF and still can't connect. I also tried all the hints in the SQLAlchemy documentation related to this.
I am using the following connection string:
con = 'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/Proj_ID:Zone:MySQL_Instance_ID'
mysqlEngine = sqlalchemy.create_engine(con)
The error I got was:
(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)") (Background on this error at: http://sqlalche.me/e/e3q8)
You need to make sure you are using the correct /cloudsql/<INSTANCE_CONNECTION_NAME> (This is in the format <PROJECT_ID>:<REGION>:<INSTANCE_ID>). This should be all that's needed if your Cloud SQL instance is in the same project and region as your Function.
The GCF docs also strongly recommend limiting your pool to a single connection. This means you should set both pool_size=1 and max_overflow=0 in your engine settings.
If you would like to see an example of how to set these settings, check out this sample application on Github.
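Putting those recommendations together, the engine setup would look something like this sketch; the project, region, instance, and credentials are placeholders:

import sqlalchemy

# Placeholder instance connection name in the <PROJECT_ID>:<REGION>:<INSTANCE_ID> format
con = (
    "mysql+pymysql://USER:PASSWORD@/MY_DB"
    "?unix_socket=/cloudsql/my-project:us-central1:my-instance"
)
mysqlEngine = sqlalchemy.create_engine(
    con,
    pool_size=1,     # single connection per function instance, per the GCF guidance
    max_overflow=0,  # no extra connections beyond the pool
)
with mysqlEngine.connect() as db:
    print(db.execute(sqlalchemy.text("SELECT 1")).scalar())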
I believe that your problem is with the Connection_name represented by <PROJECT_ID>:<REGION>:<INSTANCE_ID> at the end of the con string variable.
Which by the way should be quoted:
con = 'mysql+pymysql://USER:PASSWORD@/MY_DB?unix_socket=/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE_ID>'
Check if you are writing it right with this command:
gcloud sql instances describe <INSTANCE_ID> | grep connectionName
If this is not the case, keep in mind these considerations present in the Cloud Functions official documentation:
First Generation MySQL instances must be in the same region as your Cloud Function. Second Generation MySQL instances as well as PostgreSQL instances work with Cloud Functions in any region.
Your Cloud Function has access to all Cloud SQL instances in your project. You can access Second Generation MySQL instances as well as PostgreSQL instances in other projects if your Cloud Function's service account (listed on the Cloud Function's General tab in the GCP Console) is added as a member in IAM on the project with the Cloud SQL instance(s) with the Cloud SQL Client role.
After a long thread with Google Support, we found the reason: we simply had to enable public access to Cloud SQL, without any firewall rule. It is undocumented and can drive you crazy, but the silver bullet for the support team is to say: it is in beta!
I was having this issue. Service account was correct, had the correct permissions, same exact connection string as in my App Engine application. Still got this in the logs.
dial unix /cloudsql/project:region:instance connect: no such file or directory
Switching from a 2nd generation Cloud Function to 1st generation solved it. I didn't see it documented anywhere that 2nd gen functions couldn't connect to Cloud SQL instances.

How do database connections work in Google App Engine python + Google Cloud SQL?

I have a working Python 3 app on Google App Engine (flexible) connecting to Postgres on Google Cloud SQL. I got it working by following the docs, which at some point have you connecting via psycopg2 to a database specifier like this
postgresql://postgres:password@/dbname?host=/cloudsql/project:us-central1:dbhost
I'm trying to understand how the hostname /cloudsql/project:us-central1:dbhost works. Google calls this an "instance connection string" and it sure looks like it's playing the role of a regular hostname. Only with the / and : it's not a valid name for a DNS resolver.
Is Google's flexible Python environment modified somehow to support special hostnames like this? It looks like stock Python 3 and psycopg2, but maybe it's modified somehow? If so, are those modifications documented anywhere? The docs for the Python runtime don't have anything about this.
It turns out that host=/cloudsql/project:us-central1:dbhost specifies the name of a directory in the filesystem. Inside that directory is a file named .s.PGSQL.5432, which is a Unix domain socket. An instance of the Cloud SQL Proxy is listening on that Unix domain socket and forwarding database requests via TCP to the actual database server. So although the connection string passes the value as host=, it actually names a directory with a Unix socket in it; that's a feature of libpq, the Postgres connection library.
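So, concretely, a direct psycopg2 connection using that socket directory looks roughly like this; the project, region, instance, database name, and credentials are placeholders:

import psycopg2

# host= names the directory containing the .s.PGSQL.5432 Unix socket;
# libpq treats a value starting with "/" as a socket directory, not a DNS name.
conn = psycopg2.connect(
    host="/cloudsql/project:us-central1:dbhost",
    dbname="dbname",
    user="postgres",
    password="password",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()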
Many thanks to Vadim for answering my question quickly in a comment, just writing up a more complete answer here for future readers.
