boto3: SSM Parameter get_parameters() - Python

I am creating a Lambda function where I fetch the SSM parameter for the EKS-optimized AMI ID. The EKS-optimized AMI is the default AMI provided by EKS when no AMI is specified explicitly, and it differs per region and Kubernetes version. I am working on upgrading this AMI on node groups, and I am fetching the AMI ID here for Kubernetes version 1.21. I want to pass the Kubernetes version ${EKS_VERSION} to get_parameters() as
ami_id = ssm.get_parameters(Names=["/aws/service/eks/optimized-ami/${EKS_VERSION}/amazon-linux-2/recommended/image_id"])
Can this be done in boto3, and if yes, how?
Thanks in advance!

Maybe I am missing the point of the question, but it is pretty straightforward since you already have the request in your question. If you put the following code into your Lambda, it should get you the AMI ID for that version in that region.
For something like this, you may want to use a Lambda environment variable with a default, and overwrite it when you want something different.
import boto3
import os
# get an ssm client
ssm_client = boto3.client('ssm')
# you need to pass the var somehow, here assuming you are using an environment variable in your lambda. You could use some other system to trigger and pass the information to your lambda, e.g. sns
eks_version = os.getenv('EKS_VERSION')
# set the parameter name you want to receive, note the f-string to pass the variable to it
param_name = f"/aws/service/eks/optimized-ami/{eks_version}/amazon-linux-2/recommended/image_id"
# get_parameters
response = ssm_client.get_parameters(Names=[param_name])
# print / return response
print(response)
For overwriting the param, you could use SNS or CloudWatch with Lambda if you are building some kind of automation, but you would need to parse the input from them.
For example, a simple JSON payload in SNS:
{
    "eks_version": "1.21"
}
and in your code, you can make a small adjustment once you have parsed the SNS payload, e.g.
import json
import os
# SNS delivers the message body as a JSON string, so parse it first
if 'Records' in event and 'Sns' in event['Records'][0]:
    message = json.loads(event['Records'][0]['Sns']['Message'])
    sns_eks_version = message.get('eks_version')
else:
    sns_eks_version = None
# fall back to the environment variable when no SNS value was supplied
eks_version = sns_eks_version or os.getenv('EKS_VERSION')
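Putting the pieces together, a minimal handler sketch (assuming the SNS message body is the JSON payload shown above and that EKS_VERSION is set as a Lambda environment variable):
import json
import os
import boto3

ssm_client = boto3.client('ssm')

def lambda_handler(event, context):
    # Prefer a version delivered via SNS; fall back to the env var
    records = event.get('Records', [])
    if records and 'Sns' in records[0]:
        message = json.loads(records[0]['Sns']['Message'])  # Message is a JSON string
        eks_version = message.get('eks_version')
    else:
        eks_version = os.getenv('EKS_VERSION')
    param_name = f"/aws/service/eks/optimized-ami/{eks_version}/amazon-linux-2/recommended/image_id"
    response = ssm_client.get_parameter(Name=param_name)
    return response['Parameter']['Value']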

This is how I did it:
import logging
import boto3

ssm_client = boto3.client('ssm')
eks_client = boto3.client('eks')
eksClusterName = 'dev-infra2-eks'

def lambda_handler(event, context):
    # Get the current EKS version
    response = eks_client.describe_cluster(name=eksClusterName)
    eksVersion = response['cluster']['version']
    aws_eks_ami_ssm_param = "/aws/service/eks/optimized-ami/" + eksVersion + "/amazon-linux-2/recommended/image_id"
    # Get the SSM param for the AMI ID
    try:
        eks_ssm_ami = ssm_client.get_parameter(Name=aws_eks_ami_ssm_param)
        latest_ami_id = eks_ssm_ami['Parameter']['Value']
        return latest_ami_id
    except ssm_client.exceptions.ParameterNotFound:
        logging.error("Parameter Not Found")
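Since the stated goal is upgrading node groups, here is a hypothetical follow-up sketch (the node group name is illustrative, and it assumes the node group was created with a launch template) that compares the recommended AMI against the one the node group currently uses:
import boto3

eks_client = boto3.client('eks')
ec2_client = boto3.client('ec2')
ssm_client = boto3.client('ssm')
cluster = 'dev-infra2-eks'
nodegroup = 'my-nodegroup'  # hypothetical node group name

# Latest recommended AMI for the cluster's Kubernetes version
version = eks_client.describe_cluster(name=cluster)['cluster']['version']
latest_ami_id = ssm_client.get_parameter(
    Name=f"/aws/service/eks/optimized-ami/{version}/amazon-linux-2/recommended/image_id"
)['Parameter']['Value']

# AMI currently pinned in the node group's launch template
ng = eks_client.describe_nodegroup(clusterName=cluster, nodegroupName=nodegroup)['nodegroup']
lt = ng['launchTemplate']
current_ami = ec2_client.describe_launch_template_versions(
    LaunchTemplateId=lt['id'], Versions=[lt['version']]
)['LaunchTemplateVersions'][0]['LaunchTemplateData'].get('ImageId')

if current_ami != latest_ami_id:
    print(f"Node group {nodegroup} is on {current_ami}; latest is {latest_ami_id}")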

Related

Only get new data from DynamoDB using Python

I'm trying to export data from a DynamoDB transactions table using Python. So far I have been able to get all the data from the table, but I would like to add a filter that only returns the data from a certain date until today.
There is a field called CreatedAt that indicates when the transaction was made; I was thinking of using this field to filter the new data.
This is the code I've been using to query the table; it would be really helpful if anyone could tell me how to apply this filter to the script.
import boto3
import pandas as pd
aws_access_key_id = '*****'
aws_secret_access_key = '*****'
region='****'
dynamodb = boto3.resource(
    'dynamodb',
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
    region_name=region
)
transactions_table = dynamodb.Table('transactions_table')
result = transactions_table.scan()
items = result['Items']
df_transactions_table = pd.json_normalize(items)
print(df_transactions_table)
Thanks!
Boto3 allows a FilterExpression as part of a DynamoDB scan, which will achieve the filtering on that field. See here.
Note that using a FilterExpression still consumes the same amount of read capacity units, because items are filtered after they are read.
You need to use FilterExpression, which would look like the following:
import boto3
import pandas as pd
from boto3.dynamodb.conditions import Attr
aws_access_key_id = '*****'
aws_secret_access_key = '*****'
region='****'
dynamodb = boto3.resource(
    'dynamodb',
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
    region_name=region
)
transactions_table = dynamodb.Table('transactions_table')
result = transactions_table.scan(
    FilterExpression=Attr('CreatedAt').gt('2020-08-10'),
)
items = result['Items']
df_transactions_table = pd.json_normalize(items)
print(df_transactions_table)
You can learn more from the docs on Boto3 Scan and FilterExpression.
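One caveat: Scan returns at most 1 MB of data per call, so to collect every matching item you need to follow LastEvaluatedKey. A minimal sketch, assuming the same table and filter:
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
transactions_table = dynamodb.Table('transactions_table')

# Scan reads at most 1 MB per call; keep following LastEvaluatedKey
# until it disappears to collect every matching item.
items = []
scan_kwargs = {'FilterExpression': Attr('CreatedAt').gt('2020-08-10')}
while True:
    result = transactions_table.scan(**scan_kwargs)
    items.extend(result['Items'])
    if 'LastEvaluatedKey' not in result:
        break
    scan_kwargs['ExclusiveStartKey'] = result['LastEvaluatedKey']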
Some advice: please do not hard-code your keys the way you have done in this code; use an IAM role. If you are testing locally, configure the AWS CLI, which will provide credentials you can assume while testing. That way you won't make a mistake and share keys on GitHub etc.
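A minimal sketch of the key-free setup (the profile name is illustrative; on Lambda or EC2 you would simply rely on the attached role):
import boto3

# Rely on the default credential chain (IAM role, environment variables,
# or a profile created with `aws configure`) instead of hard-coded keys.
session = boto3.Session(profile_name='my-profile', region_name='us-east-1')
dynamodb = session.resource('dynamodb')
transactions_table = dynamodb.Table('transactions_table')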

To get the SNS Topic name from the SNS Topic ARN using python

I am writing a boto3 script where I have to list the SNS topics by name, but the problem is that there is no get_topic_name() API; when I tried such a method (function) I got an error. Right now the output comes in the form arn:aws:sns:{region}:{account_number}:{topic_name}, for example arn:aws:sns:ca-central-1:111122223333:sns-test-topic.
Can someone tell me how to fetch the name only?
This is the code:
import boto3
from pprint import pprint

account_id = input("Enter the AWS account Id:")
regions = ['ap-northeast-1', 'ap-southeast-1',
           'ca-central-1', 'us-east-1', 'us-east-2']
for region in regions:
    session = boto3.session.Session()
    client = session.client('sns', region_name=region)
    for arn in client.list_topics()['Topics']:
        print(arn['TopicArn'])
As the name of the topic is always the last field of the ARN, you can use rsplit to extract it:
print(arn['TopicArn'].rsplit(':', 1)[1])
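If you have more than 100 topics, note that list_topics is paginated; a small sketch using a paginator so every page is covered (region taken from the question):
import boto3

client = boto3.client('sns', region_name='ca-central-1')
# list_topics returns at most 100 topics per call; the paginator
# follows NextToken automatically.
paginator = client.get_paginator('list_topics')
for page in paginator.paginate():
    for topic in page['Topics']:
        print(topic['TopicArn'].rsplit(':', 1)[1])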

Boto3: Empty Datapoint from AWS instance

I wanted to write a Python script which fetches my EC2 CPU utilization. When using the get_metric_statistics() method, the response contains an empty Datapoints list.
As far as I know, the Datapoints: [] array shouldn't be empty; it should return something for the CPU load %. My code is:
import boto3
import datetime

client = boto3.client('cloudwatch')
response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-***********'
        },
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    Period=120,
    Statistics=[
        'Average',
    ],
    Unit='Percent'
)
print(response)
Any help on what is wrong here? Thanks!
You must first enable detailed monitoring. With basic monitoring, EC2 publishes metrics only every 5 minutes, so a 120-second period over a 10-minute window returns no datapoints; detailed monitoring is also required for percentile statistics.
See Amazon EC2 Metrics and Dimensions.
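Detailed monitoring can also be enabled from boto3 itself; a minimal sketch, with the instance ID redacted as in the question:
import boto3

ec2 = boto3.client('ec2')
# Switch the instance from basic (5-minute) to detailed (1-minute) monitoring
response = ec2.monitor_instances(InstanceIds=['i-***********'])
print(response['InstanceMonitorings'])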
By default, boto3 uses the default region from .aws/config (here, ap-southeast-1). The .aws/credentials file supplies the access key and secret key that were set when configuring the CLI. If the client's region does not match the instance's region, the Datapoints list comes back empty.
Solution: create a new profile for every region and set the region in .aws/config. Mine looks like this:
[default]
region = ap-southeast-1
[profile nv]
region = us-east-1
[profile prod]
region = us-east-1
nv and prod are user-made profile names.
Enter your profile name in place of default:
session = boto3.Session(profile_name='default')
Enter the secret key and access key the same way in .aws/credentials.
You can use aws configure --profile your_profile_name to create the profiles inside the config and credentials files.
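For example, to run the CloudWatch query above against us-east-1 using the nv profile:
import boto3

# Build the client from a named profile so the region and credentials
# from .aws/config and .aws/credentials are picked up.
session = boto3.Session(profile_name='nv')
client = session.client('cloudwatch', region_name='us-east-1')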

"EndpointConnectionError : unable to connect to endpoint https://lambda-xyz/wehwk.com

I'm new to Lambda. I tried invoking a Lambda function from another Lambda using the following script:
import boto3
region = 'ca-central-1'
def lambda_handler(event, context):
    client.boto3('lambda', region_name='region')
    response = client.invoke(FunctionName='xyz', LogType='None')
    print(response)
but I'm getting "EndpointConnectionError: unable to connect to endpoint https://lambda-xyz/wehwk.com".
I also attached the Lambda full access policy to this Lambda function.
Am I missing something? Thanks in advance!
Instead of client.boto3('lambda', region_name='region') you should use:
client = boto3.client('lambda')
Observe that you have client and boto3 inverted. Also note that region_name='region' passes the literal string 'region', which is what produces the bogus endpoint; pass region_name=region (the variable), or omit it to use the default region. Then you can use the client:
response = client.invoke(FunctionName='xyz', LogType='None')
print(response)
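For completeness, a sketch of a synchronous invocation that also passes a JSON payload and reads the result (the payload contents are illustrative):
import json
import boto3

client = boto3.client('lambda', region_name='ca-central-1')
response = client.invoke(
    FunctionName='xyz',
    InvocationType='RequestResponse',  # wait for the result; use 'Event' for async
    Payload=json.dumps({'key': 'value'}),
)
print(json.loads(response['Payload'].read()))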

Boto to Boto3 function implementation

1) How can I implement this boto code in boto3?
conn = boto.connect_ec2()             # boto way
sgs = conn.get_all_security_groups()  # boto way
for sg in sgs:
    if len(sg.instances()) == 0:
        print(sg.name, sg.id, len(sg.instances()))
The above code prints all security groups with no instances attached.
2) And this individual command, which uses the duct.sh() module:
command = 'aws ec2 describe-instances --filters "Name=instance.group-id,Values=' + sg.id + '\" --query \'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`] | [0].Value]\' --output json'
boto: get_all_security_groups()
boto3: security_group_iterator = ec2.security_groups.all()
However, boto has the .instances() method on boto.ec2.securitygroup.SecurityGroup, whereas boto3 does not have an equivalent method on ec2.SecurityGroup.
Therefore, it looks like you would have to call describe_instances(), passing the security group as a Filter:
import boto3
client = boto3.client('ec2')
response = client.describe_instances(
    Filters=[{'Name': 'instance.group-id', 'Values': ['sg-abcd1234']}])
This will return a list of instances that use the given security group.
You could then count len(response['Reservations']) to find unused security groups. (Note: this is an easy way to spot zero-length responses, but counting the actual instances requires adding up the Instances in every Reservation, as in the sketch below.)
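A sketch of that full version, replicating the original boto loop by counting instances across all reservations and pages:
import boto3

client = boto3.client('ec2')
for sg in client.describe_security_groups()['SecurityGroups']:
    count = 0
    paginator = client.get_paginator('describe_instances')
    for page in paginator.paginate(
            Filters=[{'Name': 'instance.group-id', 'Values': [sg['GroupId']]}]):
        for reservation in page['Reservations']:
            count += len(reservation['Instances'])
    if count == 0:
        print(sg['GroupName'], sg['GroupId'], count)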
