I'm trying to set up CloudSearch. At first I tried it with their demo dataset ("IMDB") and it all worked perfectly.
Then I created a new domain to export our own data into. But every attempt to connect to the new domain results in an EndpointConnectionError exception. I tried it with and without indexes, and with both uploading and getting documents, all with the same exception.
Simple code that reproduces the issue:
import boto3

cloudsearch = boto3.client('cloudsearch')  # credentials are stored in ~/.aws/

# Look up the search endpoint for the domain.
endpoint_url = cloudsearch.describe_domains(DomainNames=['DOMAINNAME'])['DomainStatusList'][0]['SearchService']['Endpoint']

cloudsearchdomain = boto3.client('cloudsearchdomain', endpoint_url='https://%s' % endpoint_url)
result = cloudsearchdomain.search(query='anything')
print(result)
This code was working great when DOMAINNAME was the domain with the IMDB demo dataset, but once I switched it to the new domain name it started throwing this exception:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://search-DOMAINNAME-bcoaescnsbrp2h5ojzyhljdc4u.us-west-2.cloudsearch.amazonaws.com/2013-01-01/documents/batch?format=sdk"
The problem was caused by missing access policies. It seems AWS automatically creates the access policies for the domain when you create it with the demo dataset, without telling you about it. So for the first domain the access policies were created by AWS and I didn't know about that; for the new domain I had to add them myself.
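For reference, here is a minimal sketch of attaching an access policy to the new domain with boto3. The domain name is a placeholder and the wide-open policy below is only an illustration; tighten the Principal and Action for a real deployment:

import json
import boto3

cloudsearch = boto3.client('cloudsearch')

# Illustrative policy that allows everyone to search and upload documents.
policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "cloudsearch:*"
        }
    ]
}

cloudsearch.update_service_access_policies(
    DomainName='DOMAINNAME',
    AccessPolicies=json.dumps(policy)
)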
Related
I'm trying to access an Azure Databricks Spark cluster with a Python script that takes a token (generated via the Databricks user settings) as input and calls a GET method to fetch the details of the cluster, given the cluster ID.
Below is the code snippet. As shown, I have created the cluster in the southcentralus region.
import requests

headers = {"Authorization": "Bearer dapiad************************"}
data = requests.get("https://southcentralus.azuredatabricks.net/api/2.0/clusters/get?cluster_id=**************", headers=headers).text
print(data)
The expected result should give the full details of the cluster, e.g.:
{"cluster_id":"0128-******","spark_context_id":3850138716505089853,"cluster_name":"abcdxyz","spark_version":"5.1.x-scala2.11","spark_conf":{"spark.databricks.delta.preview.enabled":"true"},"node_type_id" and so on .....}
The above code works when I execute it in Google Colaboratory, whereas the same code does not work in my local IDE (IDLE). It gives an HTTP 403 error stating the following:
<p>Problem accessing /api/2.0/clusters/get. Reason:
<pre> Invalid access token.</pre></p>
Can anyone help me resolve this issue? I'm stuck on this part and unable to access the cluster through the API.
It could be due to an encoding issue when you pass the secret. Please look into this issue and how it was resolved; even though the resolution given there is for AWS, it could be similar for Azure as well. Your secret might contain a "/", which you have to replace.
There is a known problem in the last update related to the '+'
character in secret keys. In particular, we no longer support escaping
'+' into '%2B', which some URL-encoding libraries do.
The current best-practice way of encoding your AWS secret key is
simply
secretKey.replace("/","%2F")
A sample Python snippet is given below:
New_Secret_key = "MySecret/".replace("/","%2F")
https://forums.databricks.com/questions/6590/s3serviceexception-raised-when-accessing-via-mount.html
https://forums.databricks.com/questions/6621/responsecode403-responsemessageforbidden.html
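For example, here is a minimal sketch of applying that replacement before embedding the credentials in an S3 URL (the keys and bucket name are placeholders, not values from the question):

ACCESS_KEY = "AKIAXXXXXXXXXXXXXXXX"      # placeholder access key
SECRET_KEY = "abc/defXXXXXXXXXXXXXXXX"   # placeholder secret key containing "/"
ENCODED_SECRET_KEY = SECRET_KEY.replace("/", "%2F")

# Placeholder bucket; the encoded secret avoids the 403/Forbidden errors
# discussed in the forum threads above.
source_url = "s3a://%s:%s@my-bucket" % (ACCESS_KEY, ENCODED_SECRET_KEY)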
I am new to Azure. I am learning the Azure Python SDK and have some doubts.
I am not using any credentials to log in to my Azure account and can still access the VMs in my subscription in the code below. How?
I am trying to get a list of all VMs using list_all(), which is documented here: https://learn.microsoft.com/en-us/python/api/azure-mgmt-compute/azure.mgmt.compute.v2018_10_01.operations.virtualmachinesoperations?view=azure-python#list-all-custom-headers-none--raw-false----operation-config-
How can I get the list of VMs, i.e. how do I iterate over the VirtualMachinePaged object returned by list_all()?
When I tried to print the name of a VM using print(client.virtual_machines.get(resource_group_name='GSLab', vm_name='GSLabVM2')) I got the error Resource group 'GSLab' could not be found.
I checked and I am sure the resource group's name is 'GSLab', so why am I getting this error?
Here is my code. Thank you, and please suggest any other sources for a better understanding of these concepts if possible.
from azure.common.client_factory import get_client_from_auth_file
from azure.mgmt.compute import ComputeManagementClient

# Reads the service principal credentials from the auth file pointed to
# by the AZURE_AUTH_LOCATION environment variable.
client = get_client_from_auth_file(ComputeManagementClient)

vmlist = client.virtual_machines.list_all()
print(vmlist)
for vm in vmlist:
    print(vm.name)

print(client.virtual_machines.get(resource_group_name='GSLab', vm_name='GSLabVM2'))
Q1: You get the credentials from the authentication file that you set; the service principal is in it.
Q2: You just need to delete the print(vmlist) line and then everything is OK.
Q3: The code
client.virtual_machines.get(resource_group_name='GSLab', vm_name='GSLabVM2')
returns the "Resource group 'GSLab' could not be found" error when the group is not in the subscription being used. So you need to check whether the resource group 'GSLab' really exists in the subscription you have set in the authentication file.
vmlist = client.virtual_machines.list_all()
for vm in vmlist:
    print(vm.name)
This code is correct, and so is this one:
client.virtual_machines.get(resource_group_name='GSLab', vm_name='GSLabVM2')
If they both return nothing, you have authenticated against the wrong subscription and need to authenticate to the proper one.
A simple way to check that you got some output:
next(vmlist).name
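As a further sanity check, here is a minimal sketch (assuming the same auth file and the classic msrest-based SDK that get_client_from_auth_file targets) that prints the subscription actually in use and whether the 'GSLab' resource group exists in it:

from azure.common.client_factory import get_client_from_auth_file
from azure.mgmt.resource import ResourceManagementClient

# Uses the auth file referenced by the AZURE_AUTH_LOCATION environment variable.
res_client = get_client_from_auth_file(ResourceManagementClient)

print(res_client.config.subscription_id)                    # subscription actually in use
print(res_client.resource_groups.check_existence('GSLab'))  # True if the group exists there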
I am trying to automate copying snapshot backups to different regions using this code (AWS Lambda - Copy EC2 Snapshot automatically between regions?).
I have tried using the boto3 library in Python, but I keep getting this error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"
Using this code:
import boto3

client = boto3.client('ec2')
client.describe_snapshots(OwnerIds=['self'])
I have ensured that my config file has the right security keys. I'm not sure what else I can do to retrieve the information I need.
My config file was like this:
region=us-east-1a
I changed it to:
region=us-east-1
and it works. The same applies to other regions (e.g. region=us-east-2c should be region=us-east-2): use the region code, not an availability zone.
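Alternatively, the region can be passed explicitly when creating the client, which makes the mistake easier to spot:

import boto3

# Use the region code ("us-east-1"), not an availability zone ("us-east-1a").
client = boto3.client('ec2', region_name='us-east-1')
print(client.describe_snapshots(OwnerIds=['self']))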
I created a Shared Access Signature (SAS) token for my Azure storage account using the web interface. The token looks like:
?sv=xxxx-xx-xx&ss=b&srt=sco&sp=rl&se=xxxx-xx-xxTxx:xx:xxZ&st=xxxx-xx-xxTxx:xx:xxZ&spr=https&sig=xxxxxxxxxxxxxxxxxxxxxx
The SAS token here is missing the sr field (signed resource). I have to manually prepend sr=b to the query string to get things to work. I must be doing something wrong, because this seems extremely finicky.
from azure.storage.blob import BlockBlobService

sas_token = "?sv=xxxx-xx-xx&ss=b&srt=sco&sp=rl&se=xxxx-xx-xxTxx:xx:xxZ&st=xxxx-xx-xxTxx:xx:xxZ&spr=https&sig=xxxxxxxxxxxxxxxxxxxxxx"
sas_token = "?sr=b&" + sas_token[1:]

serv = BlockBlobService(account_name='myaccount', sas_token=sas_token)
for cont in serv.list_containers():
    print(cont.name)
Without the sas_token = "?sr=b&" + sas_token[1:] I get the error:
sr is mandatory. Cannot be empty
And if the sr=b field is not first in the query, I get an authentication error like
Access without signed identifier cannot have time window more than 1 hour
Access without signed identifier cannot have time window more than 1 hour
Based on this error message, you may need to set the expiry time to less than 1 hour from now. See Windows Azure Shared Access Signature always gives: Forbidden 403.
I tried your code with Python 2.7.12 and azure-storage-python 0.34.3 (the latest version), and it worked well on my side. So I'd recommend you upgrade to the latest version and try again.
UPDATE:
I traced the code of the Azure Storage SDK for Python and here's what I found. The SDK is a REST API wrapper which assumes that the SAS token looks like this:
sv=2015-04-05&ss=bfqt&srt=sco&sp=rl&se=2015-09-20T08:49Z&sip=168.1.5.60-168.1.5.70&sig=a39%2BYozJhGp6miujGymjRpN8tsrQfLo9Z3i8IRyIpnQ%3d
As you can see, the token doesn't include ?. And the SDK will append ? before the SAS token when it makes a GET request to the Azure Storage REST service.
This causes the key of the signed version to be parsed as ?sv instead of sv, which raises the issue. So, to avoid this, we should remove the ? from the SAS token.
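So, instead of prepending sr=b, a minimal workaround based on the explanation above is to strip the leading ? before handing the token to BlockBlobService:

from azure.storage.blob import BlockBlobService

sas_token = "?sv=xxxx-xx-xx&ss=b&srt=sco&sp=rl&se=xxxx-xx-xxTxx:xx:xxZ&st=xxxx-xx-xxTxx:xx:xxZ&spr=https&sig=xxxxxxxxxxxxxxxxxxxxxx"
sas_token = sas_token.lstrip('?')  # the SDK appends '?' itself

serv = BlockBlobService(account_name='myaccount', sas_token=sas_token)
for cont in serv.list_containers():
    print(cont.name)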
I'm using facepy to retrieve FB comments and posts (that is the objective). I've provided a valid access token (generated from http://developers.facebook.com/tools/explorer/?method=GET&path=me).
The error generated is:
NameError: name 'AAACEdEose0cBAHshwZCGJ6dHPb0x68c.......D' is not defined. And/OR
facepy.exceptions.OAuthError: [190] Error validating access token: Session has expired at unix time 1359752400.
I believe I've generated the correct access token (with the correct permissions). I thought I would test the facepy library by retrieving photos first. Is there a faster way to retrieve comments, the respective user IDs, and timestamps from a public fan page? (I am guessing most of my target pages have public information.)
Can someone help me on this?
from facepy import GraphAPI

graph = GraphAPI(AAACEdEose0cBAHshwZCGJ6dHPb0x68c.......D)
graph.get('me/posts')
graph.post(
    path = 'me/photos',
    source = open('parrot.jpg')
)
I've never tried this with the FB API, but here are some solutions that worked with other APIs.
Error validating access token: Session has expired at unix time 1359752400.
That says it all. However, this mostly occurred for me when I had connected it to a DB for storing information retrieved from the API. I am sure you have made the right key and password, so don't worry about that. Make sure the other connection is still open.
Second, it happened again when I had not accessed the API for weeks, so make sure to log in once in a while.
Your access token needs to be passed as a string (in quotes)...
Also generate an extended (long-lived) token, for example as sketched below.
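A minimal sketch of both fixes; the app ID and secret are placeholders, and this assumes facepy's get_extended_access_token helper in facepy.utils:

from facepy import GraphAPI
from facepy.utils import get_extended_access_token

ACCESS_TOKEN = 'AAACEdEose0cBA...'   # the short-lived token, as a string
APP_ID = 'your-app-id'               # placeholder
APP_SECRET = 'your-app-secret'       # placeholder

# Exchange the short-lived token for a long-lived one.
long_token, expires_at = get_extended_access_token(ACCESS_TOKEN, APP_ID, APP_SECRET)

graph = GraphAPI(long_token)
print(graph.get('me/posts'))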