Boto3 ec2 describe_instances always returns empty - python

I have a very simple sandbox I'm trying to get to work so I can use it in a bigger application:
import boto3

ec2_client = boto3.client(
    'ec2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY)
response = ec2_client.describe_instances()
print(response)
and it results in...
{
    'Reservations': [],
    'ResponseMetadata': {
        'RequestId': '2c28e8aa-da6d-4ca4-8ea7-f672518cac9f',
        'HTTPStatusCode': 200,
        'HTTPHeaders': {
            'content-type': 'text/xml;charset=UTF-8',
            'transfer-encoding': 'chunked',
            'vary': 'Accept-Encoding',
            'date': 'Thu, 07 Dec 2017 16:44:30 GMT',
            'server': 'AmazonEC2'
        },
        'RetryAttempts': 0
    }
}
But the problem is no matter how many times I run this, Reservations is ALWAYS empty :(.
In the AWS console I can CLEARLY see an instance is running.
I tried starting more instances and restarting the instances I had running. I put my initial script in a loop and ran it on repeat while doing this, looking for any sign of the Reservations array actually having data.
I double-checked that my AWS ACCESS_KEY and SECRET_KEY are both correct and pointing to the correct account. They are.
I have no clue why this is. It's so simple and should be working. I'm new to AWS, so any help is appreciated.

It seems that you forgot to set the region.
Set the region when creating your client:
ec2_client = boto3.client(
    'ec2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=REGION_NAME
)
response = ec2_client.describe_instances()
print(response)
If your EC2 instances are in Oregon, you can use region_name='us-west-2'.
Hard-coding credentials is not recommended. You can configure profiles using the AWS CLI and then reference one in your code:
session = boto3.Session(profile_name='dev')
# Any clients created from this session will use credentials
# from the [dev] section of ~/.aws/credentials.
ec2_client = session.client('ec2')
You can read more in the Boto3 credentials documentation.
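Note also that describe_instances nests instances inside reservations, so instances are easy to miss even when the call succeeds. A minimal sketch of flattening that structure (the response dict here is a hypothetical example shaped like boto3's output):

```python
# Hypothetical response shaped like boto3's describe_instances output
response = {
    'Reservations': [
        {'Instances': [
            {'InstanceId': 'i-0abc', 'State': {'Name': 'running'}},
            {'InstanceId': 'i-0def', 'State': {'Name': 'stopped'}},
        ]},
    ],
}

# Instances sit one level down, inside each reservation
instance_ids = [
    instance['InstanceId']
    for reservation in response['Reservations']
    for instance in reservation['Instances']
]
print(instance_ids)
```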

The problem was that boto3 was somehow using a region my instance was not running in. The solution was to specify the region when initializing the client:
ec2_client = boto3.client(
    'ec2',
    region_name='us-east-2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY)
All credit to @kichik for telling me where to look!

Related

Windows PowerShell & InfluxDB: Unable to Write Data to Bucket

I am new to InfluxDB. I understand we need to use Windows PowerShell to interact with InfluxDB.
Using Python in the shell, I tried to write data to the bucket using the code below:
import influxdb_client, os, time
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

token = os.environ.get("INFLUXDB_TOKEN")
org = "nil"
url = "https://us-west-2-1.aws.cloud2.influxdata.com/"
client = influxdb_client.InfluxDBClient(url=url, token=token, org=org)
bucket = "MyBucket"
write_api = client.write_api(write_options=SYNCHRONOUS)

for value in range(5):
    point = (
        Point("measurement1")
        .tag("tagname1", "tagvalue1")
        .field("field1", value)
    )
    write_api.write(bucket=bucket, org="nil", record=point)
    time.sleep(1)  # separate points by 1 second
But I get the error
influxdb_client.rest.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 29 Dec 2022 01:44:17 GMT', 'Content-Type': 'application/json; charset=utf-8', 'Content-Length': '55', 'Connection': 'keep-alive', 'trace-id': '5374d7ae5df282f4', 'trace-sampled': 'false', 'x-platform-error-code': 'unauthorized', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains', 'X-Influxdb-Request-ID': '2e52a39e6d7442b5fc7eb7306ef004d4', 'X-Influxdb-Build': 'Cloud'})
HTTP response body: {"code":"unauthorized","message":"unauthorized access"}
A 401 indicates that something is wrong with the authorization header. Could you try enabling the debug log as follows to see the details:
client = influxdb_client.InfluxDBClient(url=url, token=token, org=org, debug=True)  # debug=True enables verbose logging of HTTP requests
Both the HTTP request headers and body will be logged to standard output. Please check the HTTP headers to see whether there is an "Authorization: Token" header, similar to:
Authorization: Token 7mexfXXXXXXXXXXXXX
Please double-check whether the header is missing or malformed.
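Since the token comes from os.environ.get, a common cause of the 401 is that INFLUXDB_TOKEN is simply unset in the PowerShell session, in which case token is None and no usable Authorization header is sent. A small sketch of failing fast on that (the helper name is made up for illustration):

```python
import os

def influx_auth_header(token):
    # InfluxDB Cloud expects an "Authorization: Token <token>" request header
    if not token:
        raise ValueError("INFLUXDB_TOKEN is missing or empty; "
                         "set it in the same shell session before running the script")
    return {"Authorization": "Token {}".format(token)}

token = os.environ.get("INFLUXDB_TOKEN")
# influx_auth_header(token)  # raises a clear error up front instead of a late 401
```

In PowerShell the variable must be set in the same session the script runs in, e.g. $env:INFLUXDB_TOKEN = "my-token".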

Gemini API Python {'result': 'error', 'reason': 'InvalidSignature', 'message': 'InvalidSignature'}

I am getting an error when using the Gemini API while following the documentation for a private API invocation.
The json output is:
{'result': 'error', 'reason': 'InvalidSignature', 'message': 'InvalidSignature'}.
My code:
gemini_api_key = getMasterApi()#gets master api key from a json file
gemini_api_secret = getSecretApi().encode()#gets secret api key from a json file
print(gemini_api_secret)
t = datetime.datetime.now()
payload_nonce = str(int(time.mktime(t.timetuple())*1000))
payload = {"request": "/v1/mytrades", "nonce": payload_nonce}
encoded_payload = json.dumps(payload).encode()
b64 = base64.b64encode(encoded_payload)
signature = hmac.new(gemini_api_secret, b64, hashlib.sha384).hexdigest()
request_headers = {
'Content-Type': "text/plain",
'Content-Length': "0",
'X-GEMINI-APIKEY': gemini_api_key,
'X-GEMINI-PAYLOAD': b64,
'X-GEMINI-SIGNATURE': signature,
'Cache-Control': "no-cache"
}
response = requests.post(url, headers=request_headers)
my_trades = response.json()
print(my_trades)
https://docs.gemini.com/rest-api/#public-api-invocation
I had the same issue and was able to resolve it by creating a new API key with Primary scope (instead of Master scope) and Auditor permissions.
Make sure you're using the right URL. If you made an API key using a sandbox account, you have to change the URL to url = "https://api.sandbox.gemini.com/v1/mytrades".
If you are using the Gemini sandbox, you will need to create your API keys using https://exchange.sandbox.gemini.com/ as opposed to their normal site.
If doing this on your live account and not a sandbox account, select 'Primary' with 'Fund Management' and 'Trading' permissions when you create the API key; 'Auditor' will not allow you to interact with your funds or place orders.
I had the same issue until I did this.
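For reference, the signing scheme in the question's code is deterministic given the secret: base64-encode the JSON payload, then take its HMAC-SHA384 hex digest. A sketch separating the signing from the endpoint choice (the helper name and example secret are made up; sandbox keys only work against the sandbox host and vice versa):

```python
import base64
import hashlib
import hmac
import json
import time

# A key made on exchange.sandbox.gemini.com must be used with the sandbox host
BASE_URLS = {
    "live": "https://api.gemini.com",
    "sandbox": "https://api.sandbox.gemini.com",
}

def sign_payload(api_secret, payload):
    # Gemini private endpoints take the base64 payload and its HMAC-SHA384 hex digest as headers
    b64 = base64.b64encode(json.dumps(payload).encode())
    signature = hmac.new(api_secret, b64, hashlib.sha384).hexdigest()
    return b64, signature

payload = {"request": "/v1/mytrades", "nonce": str(int(time.time() * 1000))}
b64, signature = sign_payload(b"example-secret", payload)
```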

How to turn on/off CloudSQL instances using Python3

I'm trying to use a Python script to turn on/off a CloudSQL instance in Google Cloud. I've finally found a way to do it, using the Google Cloud API, in shell:
### shell script
ACCESS_TOKEN="$(gcloud auth print-access-token)"
ACTIVATION_POLICY="NEVER"  # use 'ALWAYS' to turn on, 'NEVER' to turn off
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data "{\"settings\" : {\"activationPolicy\" : \"${ACTIVATION_POLICY}\"}}" \
     -X PATCH \
     "https://www.googleapis.com/sql/v1beta4/projects/${PROJECT_ID}/instances/${INSTANCE_ID}"
So, great, problem solved... except I cannot use 'gcloud auth print-access-token' on the machine I'm running the script on, so that solves nothing. I found a question from 2017 trying to generate this access token using Python as well, which apparently didn't work either.
I need to be able to generate this access token using Python itself. I've been looking around in Google's documentation but I still haven't managed to find anything related to that; the closest I found was using oauth2 and googleapiclient to get the list of running instances, but I can't seem to change activation policies from there:
### python3 script
from google.oauth2 import service_account
import googleapiclient.discovery

SCOPES = ['https://www.googleapis.com/auth/sqlservice.admin']
SERVICE_ACCOUNT_FILE = '/path/to/service.json'

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
sqladmin = googleapiclient.discovery.build('sqladmin', 'v1beta4', credentials=credentials)
response = sqladmin.instances().get(project=PROJECT_ID, instance=INSTANCE_ID).execute()
The documentation doesn't make it clear how to use either tool to turn off the CloudSQL instance, or at least none that I could find does. The above code returns a JSON document and I can see 'activationPolicy' there, under settings, but I can't find a way to change it.
I managed to follow @norbjd's suggestion and find a 'patch' method, and gave 'SQL Admin' permission to my credentials so they can use the sqladmin API. I tried to patch it using the following code:
instance = sqladmin.instances().patch(project=PROJECT_ID, instance=INSTANCE_ID)
instance.data = {"settings" : {"activationPolicy" : "NEVER"}}  # also tried with it as a string: instance.data = '{"settings" : {"activationPolicy" : "NEVER"}}'
instance.headers['Content-Type'] = 'application/json'
instance.execute()
Considering instance.data didn't exist prior to that, but instance.headers did:
{'accept': 'application/json', 'accept-encoding': 'gzip, deflate', 'user-agent': '(gzip)', 'x-goog-api-client': 'gdcl/1.7.11 gl-python/3.6.9'}
After the execute() call, though, nothing seemed to happen; it did not change the actual activationPolicy.
In the end, the problem was solved by using the ACCESS_TOKEN and making the request with the Python requests module. If you try to get an access token from your credentials just after generating them, you won't get one; but if you actually use your credentials with googleapiclient.discovery, that updates the object with a valid access token, which can then be used in a plain Python request, as follows:
from google.oauth2 import service_account
import googleapiclient.discovery
import json
import requests

PROJECT_ID = '{PROJECT_ID_HERE}'
INSTANCE_ID = '{INSTANCE_ID_HERE}'
SCOPES = ['https://www.googleapis.com/auth/sqlservice.admin']
SERVICE_ACCOUNT_FILE = '/path/to/service.json'

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
sqladmin = googleapiclient.discovery.build('sqladmin', 'v1beta4', credentials=credentials)
response = sqladmin.instances().get(project=PROJECT_ID, instance=INSTANCE_ID).execute()
print(json.dumps(response))

access_token = credentials.token  # now that the credentials were used, they have a valid access_token

activation_policy = 'NEVER'  # or 'ALWAYS'
url = "https://www.googleapis.com/sql/v1beta4/projects/{PROJECT_ID}/instances/{INSTANCE_ID}".format(
    PROJECT_ID=PROJECT_ID, INSTANCE_ID=INSTANCE_ID)
header = {
    "Authorization": "Bearer {}".format(access_token),
    "Content-Type": "application/json"
}
data = {
    "settings": {
        "activationPolicy": activation_policy
    }
}
response = requests.patch(url, headers=header, data=json.dumps(data))
print(response.text)
All it took was to actually use the credentials for something before attempting to retrieve the access token.
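The PATCH pieces that script assembles can also be factored into a small helper, so the request is easy to inspect before any credentials come into play (the helper name is hypothetical):

```python
import json

SQLADMIN_BASE = "https://www.googleapis.com/sql/v1beta4"

def build_activation_patch(project_id, instance_id, policy):
    # Builds the URL and JSON body for the activationPolicy PATCH request
    if policy not in ("ALWAYS", "NEVER"):
        raise ValueError("policy must be 'ALWAYS' or 'NEVER'")
    url = "{}/projects/{}/instances/{}".format(SQLADMIN_BASE, project_id, instance_id)
    body = json.dumps({"settings": {"activationPolicy": policy}})
    return url, body

url, body = build_activation_patch("my-project", "my-instance", "NEVER")
# requests.patch(url, headers=header, data=body) then works as in the script above
```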

Having an issue with a Route53Domains API request

There is a problem accessing the Route53Domains module API.
code:
#!/usr/bin/python3.6
# encoding: utf-8
import boto3

key_id = 'xxxxxxxx'
access_key = 'xxxxxxxx'
client = boto3.client(
    'route53domains',
    aws_access_key_id=key_id,
    aws_secret_access_key=access_key,
    region_name='us-east-1'
)
all_domains = client.list_domains(DomainName="mydomain.com")
print(all_domains)
Run result:
[]
I can confirm that there is more than one domain name under the account (key_id and access_key),
and the credentials have been given sufficient permissions:
AmazonRoute53DomainsFullAccess
AmazonRoute53FullAccess
But the response is empty.
And requesting the Route53 module API is no problem:
client = boto3.client(
    'route53',
    region_name='us-east-1',
    aws_access_key_id="xxxxxxxxxxxx",
    aws_secret_access_key="xxxxxxxxxxxx"
)
client.get_paginator('list_hosted_zones')
On my other account, both the Route53 and Route53Domains requests work fine!
Why? What's the problem? Please help me, thanks!
The actual syntax for the Route53Domains client operation to list all domains is:
response = client.list_domains(
    Marker='string',
    MaxItems=123
)
If the number of domains associated with the current AWS account is greater than the value you specified for MaxItems, you can use Marker to return additional domains.
Refer to the boto3 documentation for Route53Domains.
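The Marker/MaxItems pattern above can be wrapped in a loop. This sketch assumes the list_domains response carries a NextPageMarker key when more pages remain (as boto3's Route53Domains client does); the client argument is any object with a matching list_domains method:

```python
def list_all_domains(client):
    # Keeps calling list_domains, following NextPageMarker, until no pages remain
    domains = []
    kwargs = {"MaxItems": 100}
    while True:
        page = client.list_domains(**kwargs)
        domains.extend(page["Domains"])
        marker = page.get("NextPageMarker")
        if not marker:
            break
        kwargs["Marker"] = marker
    return domains
```

With a real client this would be called as list_all_domains(boto3.client('route53domains', region_name='us-east-1')).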
Thanks for your answer.
#!/usr/bin/python3.6
# encoding: utf-8
import boto3

key_id = 'xxxxxxxx'
access_key = 'xxxxxxxx'
client = boto3.client(
    'route53domains',
    aws_access_key_id=key_id,
    aws_secret_access_key=access_key,
    region_name='us-east-1'
)
all_domains = client.list_domains(MaxItems=123)
print(all_domains)
run result:
{'Domains': [], 'ResponseMetadata': {'RequestId': '2aa6e538-66e3-11e8-a89f-273ad6c882c8', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '2aa6e538-66e3-11e8-a89f-273ad6c882c8', 'content-type': 'application/x-amz-json-1.1', 'content-length': '14', 'date': 'Sun, 03 Jun 2018 04:04:09 GMT'}, 'RetryAttempts': 0}}
Domains is []
Sorry, I originally wanted to ask about the get_domain_detail() function:
client.get_domain_detail(DomainName="mytest.com")
but found that list_domains doesn't even take a domain name,
so that was written in error.
And using the AWS CLI tool gives the same result:
./aws route53domains list-domains --max-items 123
{
    "Domains": []
}
This may be an AWS account or backend issue. I don't know why.

Get error 400 when retrieving anything from Google Contacts using Python gdata API

I just started using gdata (Python). Surprisingly, after setting up credentials correctly, I just get HTTP code 400 whenever I try to retrieve anything. Take a look at this code:
s = 'https://www.google.com/m8/feeds'
s = 'https://www.google.com/m8/feeds/contacts/default/full/'
client_secret = CLIENT_SECRET
client_id = CLIENT_ID
client = gdata.contacts.client.ContactsClient()
auth2token = gdata.gauth.OAuth2Token(
    client_id=client_id,
    client_secret=client_secret,
    scope=s,
    user_agent="bla")
auth2token.authorize(client)
try:
    feed = client.get_contacts()
except gdata.client.RequestError as e:
    print e.args
    print e.body
    print e.headers
    print e.message
    print e.reason
    print e.status
Unfortunately I always get the error, no matter what the value of s is. See the output of this code:
('Server responded with: 400, ',)
[('alternate-protocol', '443:quic,p=0.08'), ('x-xss-protection', '1; mode=block'), ('x-content-type-options', 'nosniff'), ('transfer-encoding', 'chunked'), ('accept-ranges', 'none'), ('expires', 'Fri, 01 Jan 1990 00:00:00 GMT'), ('vary', 'Accept-Encoding'), ('server', 'GSE'), ('pragma', 'no-cache'), ('cache-control', 'no-cache, no-store, max-age=0, must-revalidate'), ('date', 'Tue, 03 Mar 2015 02:12:08 GMT'), ('x-frame-options', 'SAMEORIGIN'), ('content-type', 'application/json')]
Server responded with: 400,
Bad Request
400
Any clue?
Thanks!
I suspect the issue may be in re-authentication, i.e. this flow:
1. You allow your app access.
2. Maybe you didn't save the tokens.
3. You go to allow the already-permitted app access again, this time saving the tokens.
Tokens from step 3 give me a 400 error. However, if I:
1. Go to my Google account and unlink the connection between my local app and my contacts,
2. Delete all tokens locally, and then re-authenticate,
it seems to work fine.
