I am using private key authentication to connect to Snowflake from Python.
This works successfully when connecting directly with the Java client.
import snowflake.connector
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.primitives import serialization

with open("rsa_key.p8", "rb") as key:
    p_key = serialization.load_pem_private_key(
        key.read(),
        password='XXXXX'.encode(),
        backend=default_backend()
    )

pkb = p_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption())

conn = snowflake.connector.connect(
    user='XXXXX',
    password='XXXXXXX',
    account='XXXXXXXXX',
    private_key=pkb,
    warehouse='XXX',
    database='XXXXXX',
    schema='XXXX'
)
I have masked the real values where needed, but they are correct: the same ones that work directly with the Java client.
Error:
/usr/lib/python3/dist-packages/jwt/algorithms.py:179: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
self.hash_alg()
Traceback (most recent call last):
File "tryconnection.py", line 37, in <module>
schema='PUBLIC'
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/__init__.py", line 53, in Connect
return SnowflakeConnection(**kwargs)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/connection.py", line 189, in __init__
self.connect(**kwargs)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/connection.py", line 493, in connect
self.__open_connection()
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/connection.py", line 710, in __open_connection
self.__authenticate(auth_instance)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/connection.py", line 963, in __authenticate
session_parameters=self._session_parameters,
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/auth.py", line 217, in authenticate
socket_timeout=self._rest._connection.login_timeout)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/network.py", line 530, in _post_request
_include_retry_params=_include_retry_params)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/network.py", line 609, in fetch
**kwargs)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/network.py", line 711, in _request_exec_wrapper
raise e
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/network.py", line 653, in _request_exec_wrapper
method, full_url, headers, data, conn)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/network.py", line 758, in _handle_unknown_error
u'errno': ER_FAILED_TO_REQUEST,
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/errors.py", line 100, in errorhandler_wrapper
connection.errorhandler(connection, cursor, errorclass, errorvalue)
File "/usr/local/lib/python3.6/dist-packages/snowflake/connector/errors.py", line 73, in default_errorhandler
done_format_msg=errorvalue.get(u'done_format_msg'))
snowflake.connector.errors.OperationalError: 250003: None: Failed to get the response. Hanging? method: post, url:
Thank you in advance for your help.
I cannot see the rest of the error message, so I can't be sure. Are you sure you entered your account name (and region and cloud, if needed) as the account parameter, instead of the full Snowflake URL?
https://docs.snowflake.com/en/user-guide/python-connector-example.html#connecting-to-snowflake
When connecting with the Java client, people usually use a JDBC connection string, which includes the full Snowflake URL:
https://docs.snowflake.com/en/user-guide/python-connector-example.html#connecting-to-snowflake
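As a sanity check, here is a minimal sketch (the account value below is hypothetical) of passing the bare account identifier rather than a URL:

# Hypothetical account locator; do not include https:// or the
# snowflakecomputing.com domain in the account parameter.
conn = snowflake.connector.connect(
    user='XXXXX',
    account='xy12345.us-east-1',  # locator plus region (and cloud if needed)
    private_key=pkb,
    warehouse='XXX',
    database='XXXXXX',
    schema='XXXX'
)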
I just want to learn how to store data in Firestore using Python and Google Cloud Platform, so I'm calling an API to fetch some example data.
For this, I'm using the requests library and the Firestore client from the google.cloud package.
Here is the code that I'm running in Cloud Shell:
import requests
from google.cloud import firestore

url = "https://api.coindesk.com/v1/bpi/currentprice.json"
r = requests.get(url)
resp: str = r.text
if not (resp == "null" or resp == "[]"):
    db = firestore.Client()
    doc_ref = db.collection("CoinData").add(r.json())
When the code tries to connect to Firestore to add the JSON from the API response, I get this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 66, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid resource field value in the request."
debug_error_string = "{"created":"#1634689546.004704998","description":"Error received from peer ipv4:74.125.134.95:443","file":"src/core/lib/surface/call.cc","file_line":1070,"grpc_message":"Invalid resource field value in the request.","grpc_status":3}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/antonyare_93/cloudshell_open/pruebas/data_coin_query.py", line 11, in <module>
doc_ref=db.collection("CoinData").add(r.json())
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/collection.py", line 107, in add
write_result = document_ref.create(document_data, **kwargs)
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/document.py", line 99, in create
write_results = batch.commit(**kwargs)
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/batch.py", line 60, in commit
request=request, metadata=self._client._rpc_metadata, **kwargs,
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/services/firestore/client.py", line 815, in commit
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/gapic_v1/method.py", line 142, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 288, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 68, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Invalid resource field value in the request.
Does anyone know how I can fix it?
I've just figured it out.
It was a small mistake: I was missing the project name in the firestore.Client() call. Here's the working code:
import requests
from google.cloud import firestore

url = "https://api.coindesk.com/v1/bpi/currentprice.json"
r = requests.get(url)
resp: str = r.text
if not (resp == "null" or resp == "[]"):
    db = firestore.Client(project="mytwitterapitest")
    doc_ref = db.collection("CoinData").add(r.json())
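As a side note, assuming default credentials are already configured (as they are in Cloud Shell), the project can also come from the environment instead of being hard-coded, a sketch:

# Alternative: let the client pick up the project from the
# GOOGLE_CLOUD_PROJECT environment variable instead of hard-coding it.
#   export GOOGLE_CLOUD_PROJECT=mytwitterapitest
db = firestore.Client()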
This is the error I am getting:
ERROR:boto:Unable to read instance data, giving up
Traceback (most recent call last):
File "<ipython-input-62-476f799f9e0f>", line 2, in <module>
conn = boto.connect_s3()
File "/usr/local/lib/python2.7/dist-packages/boto/__init__.py", line 141, in connect_s3
return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 191, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto/auth.py", line 993, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
This error message appears while establishing a connection with boto's S3Connection.
I want to connect to AWS S3 and read CSV files.
Please help me out.
I am using Python 2.7.12.
Now I am using the code below:
import boto
import time
from boto.s3.connection import S3Connection

conn = S3Connection('<aws access key>', '<aws secret key>')
print conn

from boto.s3.connection import Location
print '\n'.join(i for i in dir(Location) if i[0].isupper())

conn.create_bucket('egp-shared-prod/egp-prod-c2c1/',
                   location=Location.DEFAULT)
And it shows this error:
File "<ipython-input-69-4b49d719d4ca>", line 15, in <module>
conn.create_bucket('egp-shared-prod/egp-prod-c2c1/', location=Location.DEFAULT)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 616, in create_bucket
data=data)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 668, in make_request
retry_handler=retry_handler
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1071, in make_request
retry_handler=retry_handler)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1030, in _mexe
raise ex
gaierror: [Errno -2] Name or service not known
I tried your code, and my testing found that the error is related to your bucket name, egp-shared-prod/egp-prod-c2c1/.
The Bucket Restrictions and Limitations documentation says:
Bucket names can contain lowercase letters, numbers, and hyphens.
Slashes are not permitted. Also, they seem to be upsetting the boto code.
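As a sketch under that constraint (the slash-free bucket name below is hypothetical), create the bucket without slashes and push the rest of the path into the key name instead:

# Hypothetical slash-free bucket name; "folders" live in the key, not the bucket.
bucket = conn.create_bucket('egp-shared-prod-egp-prod-c2c1',
                            location=Location.DEFAULT)
key = bucket.new_key('egp-prod-c2c1/data.csv')  # the prefix acts as a folder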
Boto (the official AWS Python bindings), which you are using, expects you to save your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in environment variables, like so:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
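With those variables set, boto picks up the credentials itself, so the connection call from the question should work without arguments:

import boto

# Credentials are read from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables.
conn = boto.connect_s3()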
Alternatively, you can pass AWS credentials explicitly, for example with boto3:

import boto3

# Connection with S3:
s3 = boto3.resource(
    service_name='s3',
    region_name='us-east-1',
    aws_secret_access_key='',
    aws_access_key_id=''
)
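Since the goal is to read CSV files, here is a short usage sketch (the bucket and key names are hypothetical):

# Hypothetical bucket and key; download the object body and decode it.
obj = s3.Object('my-bucket', 'path/to/file.csv')
csv_text = obj.get()['Body'].read().decode('utf-8')
print(csv_text.splitlines()[0])  # e.g. the CSV header row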
I want to perform some very easy tasks on BigQuery via a Python script. I found this package, which does not work well. Indeed, when I try this code:
from bigquery import get_client

project_id = 'txxxxxxxxxxxxxxxxxx9'

# Service account email address as listed in the Google Developers Console.
service_account = '7xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com'

# PKCS12 or PEM key provided by Google.
key = '/home/fxxxxxxxxxxxx/Dropbox/access_keys/google_storage/xxxxxxxxxxxxxxxxxxxxx.pem'

client = get_client(project_id, service_account=service_account,
                    private_key_file=key, readonly=True)

# Submit an async query.
results = client.get_table_schema('newdataset', 'newtable2')
print(results)
I get this error:
/home/xxxxxx/anaconda3/envs/snakes/bin/python2.7 /home/xxxxxx/Dropbox/Prog/bigQuery_daily_import/src/main.py
Traceback (most recent call last):
File "/home/xxxxxx/Dropbox/Prog/bigQuery_daily_import/src/main.py", line 9, in <module>
client = get_client(project_id, service_account=service_account, private_key_file=key, readonly=True)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/bigquery/client.py", line 83, in get_client
readonly=readonly)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/bigquery/client.py", line 101, in _get_bq_service
service = build('bigquery', 'v2', http=http)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/util.py", line 142, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/googleapiclient/discovery.py", line 196, in build
cache)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/googleapiclient/discovery.py", line 242, in _retrieve_discovery_doc
resp, content = http.request(actual_url)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 565, in new_request
self._refresh(request_orig)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 835, in _refresh
self._do_refresh_request(http_request)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 862, in _do_refresh_request
body = self._generate_refresh_request_body()
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 1541, in _generate_refresh_request_body
assertion = self._generate_assertion()
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 1670, in _generate_assertion
private_key, self.private_key_password), payload)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/_pycrypto_crypt.py", line 121, in from_string
pkey = RSA.importKey(parsed_pem_key)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 665, in importKey
return self._importKeyDER(der)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 588, in _importKeyDER
raise ValueError("RSA key format is not supported")
ValueError: RSA key format is not supported
Process finished with exit code 1
My question: is there a Python tutorial that shows how to communicate easily with BigQuery: importing a dataset from Google Storage or S3, querying something, and exporting the result to Google Storage?
A lot depends on your environment, and once you've figured that out, everything should be super simple. The only problem I see in the error log you pasted is authentication.
Python pandas has had support for BigQuery for a while:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.gbq.read_gbq.html
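For example, a minimal sketch (the project id is hypothetical, and credentials are resolved by the library):

import pandas as pd

# Hypothetical project id; runs a query and returns a DataFrame.
df = pd.io.gbq.read_gbq(
    'SELECT COUNT(*) AS n FROM [publicdata:samples.shakespeare]',
    project_id='my-project')
print(df)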
And I did a video with the creators of the module:
https://www.youtube.com/watch?v=gLeTDUMb7HY
Now, the simplest and fastest way these days to launch a Jupyter notebook with all of the Google Cloud goodies you mention is our new Google Datalab project:
https://cloud.google.com/datalab/
The only Datalab caveat is that it runs on cloud servers, but if you want a fully managed Jupyter/IPython environment that is totally secure, persistent, and ready to handle BigQuery, storage, etc., try it out.
Meanwhile, if you are writing a web application look at how other web applications solve this task.
For example, re:dash code to connect to BigQuery:
https://github.com/EverythingMe/redash/blob/master/redash/query_runner/big_query.py
I am running Neo4j 2.2.1 on an Ubuntu Amazon EC2 instance. When I try to connect through Python using py2neo-2.0.7, I get the following error:
py2neo.packages.httpstream.http.SocketError: Operation not permitted
I am able to access the web-interface through http://52.10.**.***:7474/browser/
Code:
from py2neo import Graph, watch, Node, Relationship

url_graph_conn = "https://neo4j:password#52.10.**.***:7474/db/data/"
print url_graph_conn

my_conn = Graph(url_graph_conn)
babynames = my_conn.find("BabyName")
for babyname in babynames:
    print 2
Error message:
https://neo4j:password#52.10.**.***:7474/db/data/
Traceback (most recent call last):
File "C:\Users\rharoon002\eclipse_workspace\peace\peace\core\graphconnection.py", line 39, in <module>
for babyname in babynames:
File "C:\Python27\lib\site-packages\py2neo\core.py", line 770, in find
response = self.cypher.post(statement, parameters)
File "C:\Python27\lib\site-packages\py2neo\core.py", line 667, in cypher
metadata = self.resource.metadata
File "C:\Python27\lib\site-packages\py2neo\core.py", line 213, in metadata
self.get()
File "C:\Python27\lib\site-packages\py2neo\core.py", line 258, in get
response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get
return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head
return rq.submit(redirect_limit=redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 433, in submit
http, rs = submit(self.method, uri, self.body, self.headers)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 362, in submit
raise SocketError(code, description, host_port=uri.host_port)
py2neo.packages.httpstream.http.SocketError: Operation not permitted
You are trying to access neo4j via https on the standard port for http (7474):
url_graph_conn = "https://neo4j:password#52.10.**.***:7474/db/data/"
The standard port for an https connection is 7473. Try:
url_graph_conn = "https://neo4j:password#52.10.**.***:7473/db/data/"
And make sure you can access the web interface via https:
https://52.10.**.***:7473/browser/
You can change/see the port settings in your neo4j-server.properties file.
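Putting it together, here is a sketch of the corrected connection (IP masked as in the question; note that the standard URI syntax separates the credentials from the host with @):

from py2neo import Graph

# Same database, but over https on port 7473.
my_conn = Graph("https://neo4j:password@52.10.**.***:7473/db/data/")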
I wrote the following Python code to connect to Neo4j using py2neo:
from py2neo import Graph
from py2neo import neo4j, Node, Relationship

sgraph = Graph()
alice = Node("person", name="alice")
bob = Node("person", name="bob")
alice_knows_bob = Relationship(alice, "KNOWS", bob)
sgraph.create(alice_knows_bob)
But I got the following error:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\py2neo\core.py", line 258, in get
response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get
return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head
return rq.submit(redirect_limit=redirect_limit, **kwargs)
File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 452, in submit
return Response.wrap(http, uri, self, rs, **response_kwargs)
File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 489, in wrap
raise inst
py2neo.packages.httpstream.http.ClientError: 401 Unauthorized
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "neo.py", line 7, in <module>
sgraph.create(alice_knows_bob)
File "C:\Python34\lib\site-packages\py2neo\core.py", line 704, in create
statement = CreateStatement(self)
File "C:\Python34\lib\site-packages\py2neo\cypher\create.py", line 44, in __init__
self.supports_node_labels = self.graph.supports_node_labels
File "C:\Python34\lib\site-packages\py2neo\core.py", line 1078, in supports_node_labels
return self.neo4j_version >= (2, 0)
File "C:\Python34\lib\site-packages\py2neo\core.py", line 956, in neo4j_version
return version_tuple(self.resource.metadata["neo4j_version"])
File "C:\Python34\lib\site-packages\py2neo\core.py", line 213, in metadata
self.get()
File "C:\Python34\lib\site-packages\py2neo\core.py", line 261, in get
raise Unauthorized(self.uri.string)
py2neo.error.Unauthorized: http://localhost:7474/db/data/
Can anyone please help me? This is the first time I am writing Python code to connect to Neo4j.
If you're using Neo4j 2.2, authentication for database servers is enabled by default. You need to authenticate before performing further operations. Read the documentation.
from py2neo import authenticate, Graph
# set up authentication parameters
authenticate("localhost:7474", "user", "pass")
# connect to authenticated graph database
sgraph = Graph("http://localhost:7474/db/data/")
# create alice and bob
...
From the same documentation,
Py2neo provides a command line tool to help with changing user
passwords as well as checking whether a password change is required.
For a new installation, use:
$ neoauth neo4j neo4j my-p4ssword
Password change succeeded
After a password has been set, the tool can also be used to validate
credentials
$ neoauth neo4j my-p4ssword
Password change not required