Boto3: Empty Datapoint from AWS instance - python

I want to write a Python script that fetches my EC2 CPU utilization. When I call the get_metric_statistics() method, the response comes back with an empty Datapoints list.
As far as I know, 'Datapoints': [] shouldn't be empty; it should return something for the CPU load %. My code is:
import boto3
import sys
import datetime

client = boto3.client('cloudwatch')
response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-***********'
        },
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    Period=120,
    Statistics=[
        'Average',
    ],
    Unit='Percent'
)
print(response)
Any help on what is wrong here? Thanks!

To use percentile statistics you must first enable detailed monitoring.
See Amazon EC2 Metrics and Dimensions.
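Detailed monitoring can also be switched on from boto3 itself; a minimal sketch, assuming a placeholder instance id and region:
import boto3

# Hypothetical instance id and region; replace with your own.
ec2_client = boto3.client('ec2', region_name='us-east-1')

# Enable detailed (1-minute) monitoring for the instance.
ec2_client.monitor_instances(InstanceIds=['i-0123456789abcdef0'])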

By default, boto3 picks up the default region from .aws/config, which in my case is "ap-southeast-1", and the access key & secret key from .aws/credentials that were set when configuring the CLI. If that region doesn't match the region the instance is in, Datapoints comes back empty.
Solution:
1. Create a profile for each region and configure the region in .aws/config. Mine looks like this:
[default]
region = ap-southeast-1
[profile nv]
region = us-east-1
[profile prod]
region = us-east-1
nv and prod are user-defined profile names.
2. Pass the profile name instead of default when creating the session:
session = boto3.Session(profile_name='default')
3. Enter the secret key & access key for each profile in the same way in .aws/credentials.
You can use aws configure --profile your_profile_name to create the profiles inside the config & credentials files.
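Putting it together, a minimal sketch that targets us-east-1 through the nv profile from the config above (profile name and instance id are placeholders):
import boto3
import datetime

# Use the profile whose region matches the instance (nv -> us-east-1 above).
session = boto3.Session(profile_name='nv')
client = session.client('cloudwatch')

response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-***********'}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    Period=120,
    Statistics=['Average'],
    Unit='Percent',
)
print(response['Datapoints'])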

Related

boto3 : SSM Parameter get_parameters()

I am creating a lambda function in which I fetch the SSM parameter for the EKS-optimized AMI ID. The EKS-optimized AMI is the default AMI that EKS uses when no AMI is specified explicitly, and it differs per region & Kubernetes version. I am working on upgrading this AMI on the node groups and need the AMI ID for K8s version 1.21. I want to pass this K8s version ${EKS_VERSION} to get_parameters() as
ami_id = ssm.get_parameters(Names=["/aws/service/eks/optimized-ami/${EKS_VERSION}/amazon-linux-2/recommended/image_id"])
Can this be done in boto3, and if yes, how?
Thanks in advance!
Maybe I am missing the point of the question, but it is pretty straightforward since you already have the request in your question. If you put the following code into your lambda, it should get you the AMI ID for the version you want in that region.
For something like this, you may want to use a lambda env variable with a default, and overwrite it when you want something different.
import os
import boto3

# get an SSM client
ssm_client = boto3.client('ssm')
# you need to pass the version somehow; here it is assumed to come from an environment variable on your lambda. You could use some other system to trigger and pass the information to your lambda, e.g. SNS.
eks_version = os.getenv('EKS_VERSION')
# build the parameter name you want to read; note the f-string interpolating the variable
param_name = f"/aws/service/eks/optimized-ami/{eks_version}/amazon-linux-2/recommended/image_id"
# get_parameters
response = ssm_client.get_parameters(Names=[param_name])
# print / return response
print(response)
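If you only want the AMI ID itself, it can be pulled out of that same get_parameters() response, e.g.:
# get_parameters returns a list of matches; take the first one and read its value
ami_id = response['Parameters'][0]['Value']
print(ami_id)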
For overriding the parameter, you could use SNS or CloudWatch with lambda if you are building some kind of automation, but you would need to parse the input from them.
For example, a simple JSON payload in SNS:
{
    "eks_version": 1.21
}
and in your code you can make a small adjustment once you have parsed the SNS payload, e.g.:
import json
import os

# 'event' is the lambda handler's event argument
if 'Records' in event and 'Sns' in event['Records'][0]:
    # the SNS message body is a JSON string, so parse it before reading the key
    sns_eks_version = json.loads(event['Records'][0]['Sns']['Message']).get('eks_version')
else:
    sns_eks_version = None
eks_version = sns_eks_version or os.getenv('EKS_VERSION')
This is how I did it:
import json
import os
import logging
import boto3

ssm_client = boto3.client('ssm')
eks_client = boto3.client('eks')
eksClusterName = 'dev-infra2-eks'

def lambda_handler(event, context):
    # Get the current EKS version of the cluster
    response = eks_client.describe_cluster(
        name=eksClusterName
    )
    eksVersion = response['cluster']['version']
    aws_eks_ami_ssm_param = "/aws/service/eks/optimized-ami/" + eksVersion + "/amazon-linux-2/recommended/image_id"
    # Get the SSM param for the AMI ID
    try:
        eks_ssm_ami = ssm_client.get_parameter(Name=aws_eks_ami_ssm_param)
        latest_ami_id = eks_ssm_ami['Parameter']['Value']
        return latest_ami_id
    except ssm_client.exceptions.ParameterNotFound:
        logging.error("Parameter Not Found")

How do I get the GCE vm instance id using python?

I am using Python. I have the correct project and VM instance names, so I can query Google Cloud metrics just fine. But now I need to query some Agent metrics, which require the instance id of my VM instead of the name. What is the simplest way to get the instance id of my VM with a query?
Sorry, I should be more clear. Here is my sample code:
results = client.list_time_series(
    request={
        "name": project_name,
        "filter": filter,
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
I want to make a query similar to this. Any simple filter I can use, or something else, that will get me the instance_id of a particular instance name?
If you are inside the GCE VM, you can use the Metadata server:
import requests

metadata_server = "http://metadata/computeMetadata/v1/instance/"
metadata_flavor = {'Metadata-Flavor': 'Google'}
# The metadata server exposes the numeric instance id, the hostname and the machine type
gce_id = requests.get(metadata_server + 'id', headers=metadata_flavor).text
gce_name = requests.get(metadata_server + 'hostname', headers=metadata_flavor).text
gce_machine_type = requests.get(metadata_server + 'machine-type', headers=metadata_flavor).text
If you are looking to list GCE VMs, check the example in the GCP documentation for using client libraries: Listing Instances
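If you are running outside the VM, a minimal sketch of looking the id up by name with the Compute API client (google-api-python-client); project, zone and instance name here are placeholders:
from googleapiclient import discovery

# Hypothetical values; replace with your own.
project = 'my-project'
zone = 'us-central1-a'
instance_name = 'my-instance'

compute = discovery.build('compute', 'v1')
instance = compute.instances().get(project=project, zone=zone, instance=instance_name).execute()
print(instance['id'])  # numeric instance id you can use in a monitoring filter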

Can I get region value for a given instance using boto 3 by passing on just private IP of ec2-instance

Trying to get the AWS region for a particular instance. Is it possible to find an instance's region by passing only the EC2 instance's private IP?
What I tried:
import boto3
client = boto3.client('s3') # example client, could be any
client.meta.region_name
but it shows the same region for all servers.
Unfortunately, there is no native cross-region get_instance_by_private_ip API available. But you can do something like this:
import boto3

def find_region_by_private_ip_address(ip):
    ec2 = boto3.resource('ec2', 'us-east-1')
    regions = [r['RegionName'] for r in ec2.meta.client.describe_regions()['Regions']]
    for region in regions:
        ec2 = boto3.resource('ec2', region)
        instance_iterator = ec2.instances.filter(
            Filters=[
                {
                    'Name': 'private-ip-address',
                    'Values': [
                        ip
                    ]
                },
            ]
        )
        instance_list = list(instance_iterator)
        if len(instance_list) > 0:
            return region
If performance is critical, you can do multi-threading or multi-processing to query regions in parallel.
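A minimal sketch of that parallel approach, assuming the lookup above is split into a per-region helper (the function names here are made up):
import boto3
from concurrent.futures import ThreadPoolExecutor

def instance_in_region(region, ip):
    # Return the region name if an instance with this private IP exists there.
    ec2 = boto3.resource('ec2', region_name=region)
    instances = ec2.instances.filter(
        Filters=[{'Name': 'private-ip-address', 'Values': [ip]}]
    )
    return region if list(instances) else None

def find_region_parallel(ip):
    client = boto3.client('ec2', region_name='us-east-1')
    regions = [r['RegionName'] for r in client.describe_regions()['Regions']]
    # Query all regions concurrently and return the first hit.
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        for result in pool.map(lambda r: instance_in_region(r, ip), regions):
            if result:
                return result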

Creating VMs from Instance Template on Cloud Function via API call

The code I've written seems to be what I need, however it doesn't work and I get a 401 (authentication) error. I've tried everything: 1. service account permissions, 2. creating a secret id and key (not sure how to use those to get an access token though), 3. basically everything I could think of for the past 2 days.
import requests
from google.oauth2 import service_account

METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/'
METADATA_HEADERS = {'Metadata-Flavor': 'Google'}
SERVICE_ACCOUNT = [NAME-OF-SERVICE-ACCOUNT-USED-WITH-CLOUD-FUNCTION-WHICH-HAS-COMPUTE-ADMIN-PRIVILEGES]

def get_access_token():
    url = '{}instance/service-accounts/{}/token'.format(
        METADATA_URL, SERVICE_ACCOUNT)
    # Request an access token from the metadata server.
    r = requests.get(url, headers=METADATA_HEADERS)
    r.raise_for_status()
    # Extract the access token from the response.
    access_token = r.json()['access_token']
    return access_token

def start_vms(request):
    request_json = request.get_json(silent=True)
    request_args = request.args
    if request_json and 'number_of_instances_to_create' in request_json:
        number_of_instances_to_create = request_json['number_of_instances_to_create']
    elif request_args and 'number_of_instances_to_create' in request_args:
        number_of_instances_to_create = request_args['number_of_instances_to_create']
    else:
        number_of_instances_to_create = 0

    access_token = get_access_token()
    address = "https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/zones/europe-west2-b/instances?sourceInstanceTemplate=https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/global/instanceTemplates/[MY-INSTANCE-TEMPLATE]"
    headers = {'token': '{}'.format(access_token)}
    for i in range(1, number_of_instances_to_create):
        data = {'name': 'my-instance-{}'.format(i)}
        r = requests.post(address, data=data, headers=headers)
        r.raise_for_status()
        print("my-instance-{} created".format(i))
Any advice/guidance? If someone could tell me how to get an access token using secret Id and key. Also, I'm not too sure if OAuth2.0 will work because I essentially want to turn these machines on, and they do some processing and then self destruct. So there is no user involvement to allow access. If OAuth2.0 is the wrong way to go about it, what else can I use?
I tried using gcloud, but subprocess'ing gcloud commands isn't recommended.
I did something similar to this, though I used the Node 10 Firebase Functions runtime; it should be very similar nevertheless.
I agree that OAuth is not the correct solution since there is no user involved.
What you need to use is 'Application Default Credentials' which is based on the permissions available to your cloud functions' default service account which will be the one labelled as "App Engine default service account" here:
https://console.cloud.google.com/iam-admin/serviceaccounts?folder=&organizationId=&project=[YOUR_PROJECT_ID]
(For my project that service account already had the permissions necessary for starting and stopping GCE instances, but for other APIs I had to grant it permissions manually.)
ADC is for server-to-server API calls. To use it I called google.auth.getClient (of the Google APIs Auth Library) with just the scope, ie. "https://www.googleapis.com/auth/cloud-platform".
This API is very versatile in that it returns whatever credentials you need, so when I am running on cloud functions it returns a 'Compute' object and when I'm running in the emulator it gives me a "UserRefreshClient" object.
I then include that auth object in my call to compute.instances.insert() and compute.instances.stop().
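In Python the equivalent would look roughly like the sketch below, using the google-auth library (this is not the author's Node code, and the project, zone and template values are placeholders):
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials: inside a Cloud Function this resolves to the
# function's service account; locally it falls back to your gcloud/ADC setup.
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform']
)
# AuthorizedSession attaches the OAuth token to every request for you.
session = AuthorizedSession(credentials)

# Hypothetical project/zone/template; replace with your own.
url = ('https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west2-b/instances'
       '?sourceInstanceTemplate=projects/my-project/global/instanceTemplates/my-template')
response = session.post(url, json={'name': 'my-instance-1'})
response.raise_for_status()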
Here is the template I used for testing my code:
{
  name: 'base',
  description: 'Temporary instance used for testing.',
  tags: { items: [ 'test' ] },
  machineType: `zones/${zone}/machineTypes/n1-standard-1`,
  disks: [
    {
      autoDelete: true, // you will want this!
      boot: true,
      type: 'PERSISTENT',
      initializeParams: {
        diskSizeGb: '10',
        sourceImage: "projects/ubuntu-os-cloud/global/images/ubuntu-minimal-1804-bionic-v20190628",
      }
    }
  ],
  networkInterfaces: [
    {
      network: `https://www.googleapis.com/compute/v1/projects/${projectId}/global/networks/default`,
      accessConfigs: [
        {
          name: 'External NAT',
          type: 'ONE_TO_ONE_NAT'
        }
      ]
    }
  ],
}
Hope that helps.
If you’re getting a 401 error that means that the access token you're using is either expired or invalid.
This guide will be able to show you how to request OAuth 2.0 access tokens and make API calls using a Service Account: https://developers.google.com/identity/protocols/OAuth2ServiceAccount
The .json file mentioned is the private key you create in IAM & Admin under your service account.
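For completeness, a minimal sketch of that service-account flow in Python with the google-auth library, assuming a downloaded key file at a placeholder path:
from google.oauth2 import service_account
from google.auth.transport.requests import Request

# Hypothetical path to the JSON key created under IAM & Admin > Service Accounts.
credentials = service_account.Credentials.from_service_account_file(
    '/path/to/key.json',
    scopes=['https://www.googleapis.com/auth/cloud-platform'],
)
# Refresh to obtain an access token, then send it as a Bearer token.
credentials.refresh(Request())
headers = {'Authorization': 'Bearer {}'.format(credentials.token)}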

Boto3 not copying snapshot to other regions, other options?

[Very new to AWS]
Hi,
I am trying to move my EBS volume snapshot copies across regions. I have been trying to use Boto3 to move the snapshots. My objective is to move the latest snapshot from us-east-2 region to us-east-1 region automatically on a daily basis.
I have used aws configure command in terminal to setup my security credentials and set region to us-east-2.
I am using pandas to acquire the most recent snapshot-id using this code:
import boto3
import pandas as pd
from pandas.io.json.normalize import nested_to_record
import boto.ec2
client = boto3.client('ec2')
aws_api_response = client.describe_snapshots(OwnerIds=['self'])
flat = nested_to_record(aws_api_response)
df = pd.DataFrame.from_dict(flat)
df= df['Snapshots'].apply(pd.Series)
insert_snap = df.loc[df['StartTime'] == max(df['StartTime']),'SnapshotId']
insert_snap = insert_snap.reset_index(drop=True)
insert_snap returns a snapshot id something like snap-1234ABCD
I am trying to use this code to move the snapshot from us-east-2 to us-east-1:
client.copy_snapshot(SourceSnapshotId='%s' % insert_snap[0],
                     SourceRegion='us-east-2',
                     DestinationRegion='us-east-1',
                     Description='This is my copied snapshot.')
The snapshot is copying in the same region using the above line.
I have also tried switching regions through aws configure command in terminal, with the same issue occurring where snapshot is being copied in the same region.
There is a bug in Boto3 that is skipping the destination parameter in the copy_snapshot() code. Information found here: https://github.com/boto/boto3/issues/886
I have also tried inserting this code into a Lambda function, but keep getting the error "errorMessage": "Unable to import module 'lambda_function'":
import boto3

region = 'us-east-2'
ec = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    response = ec.copy_snapshot(SourceSnapshotId='snap-xxx',
                                SourceRegion=region,
                                DestinationRegion='us-east-1',
                                Description='copied from Ohio')
    print(response)
I am out of options, what I can do to automate the transfer of snapshots in aws?
As per CopySnapshot - Amazon Elastic Compute Cloud:
CopySnapshot sends the snapshot copy to the regional endpoint that you send the HTTP request to, such as ec2.us-east-1.amazonaws.com (in the AWS CLI, this is specified with the --region parameter or the default region in your AWS configuration file).
Therefore, you should send the copy_snapshot() command to us-east-1, with the Source Region set to us-east-2.
If you wish to move the most recent snapshot, you could run:
import boto3

SOURCE_REGION = 'us-east-2'
DESTINATION_REGION = 'us-east-1'

# Connect to EC2 in Source region
source_client = boto3.client('ec2', region_name=SOURCE_REGION)

# Get a list of all snapshots, then sort them
snapshots = source_client.describe_snapshots(OwnerIds=['self'])
snapshots_sorted = sorted([(s['SnapshotId'], s['StartTime']) for s in snapshots['Snapshots']], key=lambda k: k[1])
latest_snapshot = snapshots_sorted[-1][0]
print('Latest Snapshot ID is ' + latest_snapshot)

# Connect to EC2 in Destination region
destination_client = boto3.client('ec2', region_name=DESTINATION_REGION)

# Copy the snapshot
response = destination_client.copy_snapshot(
    SourceSnapshotId=latest_snapshot,
    SourceRegion=SOURCE_REGION,
    Description='This is my copied snapshot'
)
print('Copied Snapshot ID is ' + response['SnapshotId'])
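If the rest of your automation needs the copy to finish before carrying on, a short follow-up sketch using a boto3 waiter on the same destination client:
# Block until the copied snapshot reaches the 'completed' state
waiter = destination_client.get_waiter('snapshot_completed')
waiter.wait(SnapshotIds=[response['SnapshotId']])
print('Snapshot copy completed')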
