To follow up on this question:
Filter CloudWatch Logs to extract Instance ID
I think it leaves the question incomplete because it does not say how to access the event object in Python.
My goal is to:
identify the instance that triggered the event by changing to the running state
get a tag value associated with the instance
start all other instances that have the same tag
The CloudWatch trigger event is:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "EC2 Instance State-change Notification"
  ],
  "detail": {
    "state": [
      "running"
    ]
  }
}
I can see examples like this:
def lambda_handler(event, context):
    # here I want to get the instance tag value
    # and set the tag filter based on the instance that
    # triggered the event
    filters = [
        {
            'Name': 'tag:StartGroup',
            'Values': ['startgroup1']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
    instances = ec2.instances.filter(Filters=filters)
I can see the event object, but I don't see how to drill down into the tags of the instance whose state changed to running.
Please, what is the object attribute through which I can get a tag from the triggered instance?
I suspect it is something like:
myTag = event.details.instance-id.tags["startgroup1"]
The event data passed to Lambda contains the Instance ID.
You then need to call describe_tags() to retrieve a dictionary of the tags.
import boto3

client = boto3.client('ec2')
client.describe_tags(Filters=[
    {
        'Name': 'resource-id',
        'Values': [
            event['detail']['instance-id']
        ]
    }
])
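The describe_tags() response is a plain dict with a Tags list; a minimal sketch of turning that into a lookup (using the StartGroup tag from the question, with a hand-built response for illustration) might look like:

```python
# Hypothetical response shape, trimmed to the fields used here;
# a real call would be: response = client.describe_tags(Filters=[...])
response = {
    "Tags": [
        {"ResourceId": "i-abcd1111", "Key": "StartGroup", "Value": "startgroup1"},
        {"ResourceId": "i-abcd1111", "Key": "Name", "Value": "web01"},
    ]
}

# Flatten the list into a Key -> Value dict for easy lookup
tags = {t["Key"]: t["Value"] for t in response["Tags"]}
start_group = tags.get("StartGroup")  # None if the tag is absent
print(start_group)  # startgroup1
```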
In the detail section of the event you will get the instance ID. Using the instance ID and the AWS SDK, you can query the tags. The following is a sample event:
{
  "version": "0",
  "id": "ee376907-2647-4179-9203-343cfb3017a4",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2015-11-11T21:30:34Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
  ],
  "detail": {
    "instance-id": "i-abcd1111",
    "state": "running"
  }
}
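Given that event shape, drilling down to the instance ID is plain dict access (sketched here with a stub event rather than a live trigger):

```python
# Stub of the sample event above, trimmed to the fields used here
event = {
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"instance-id": "i-abcd1111", "state": "running"},
}

instance_id = event["detail"]["instance-id"]
print(instance_id)  # i-abcd1111
```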
This is what I came up with...
Please let me know how it can be done better. Thanks for the help.
# StartMeUp_Instances_byOne
#
# This lambda script is triggered by a CloudWatch Event, startGroupByInstance.
# Every evening a separate lambda script is launched on a schedule to stop
# all non-essential instances.
#
# This script will turn on all instances with a LaunchGroup tag that matches
# a single instance which has been changed to the running state.
#
# To start all instances in a LaunchGroup,
# start one of the instances in the LaunchGroup and wait about 5 minutes.
#
# Costs to run: approx. $0.02/month
# https://s3.amazonaws.com/lambda-tools/pricing-calculator.html
# 150 executions per month * 128 MB Memory * 60000 ms Execution Time
#
# Problems: talk to chrisj
# ======================================
# test system
# this is what the event object looks like (see below)
# it is configured in the test event object with a specific instance-id
# change that to test a different instance-id with a different LaunchGroup
# { "version": "0",
# "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# "detail-type": "EC2 Instance State-change Notification",
# "source": "aws.ec2",
# "account": "999999999999999",
# "time": "2015-11-11T21:30:34Z",
# "region": "us-east-1",
# "resources": [
# "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
# ],
# "detail": {
# "instance-id": "i-0aad9474", # <---------- chg this
# "state": "running"
# }
# }
# ======================================
import boto3
import logging
import json

ec2 = boto3.resource('ec2')

def get_instance_LaunchGroup(iid):
    # When given an instance ID as str e.g. 'i-1234567',
    # return the instance LaunchGroup.
    ec2 = boto3.resource('ec2')
    ec2instance = ec2.Instance(iid)
    thisTag = ''
    for tags in ec2instance.tags:
        if tags["Key"] == 'LaunchGroup':
            thisTag = tags["Value"]
    return thisTag

# this is the entry point for the cloudwatch trigger
def lambda_handler(event, context):
    # get the instance id that triggered the event
    thisInstanceID = event['detail']['instance-id']
    print("instance-id: " + thisInstanceID)

    # get the LaunchGroup tag value of thisInstanceID
    thisLaunchGroup = get_instance_LaunchGroup(thisInstanceID)
    print("LaunchGroup: " + thisLaunchGroup)
    if thisLaunchGroup == '':
        print("No LaunchGroup associated with this InstanceID - ending lambda function")
        return

    # set the filters
    filters = [
        {
            'Name': 'tag:LaunchGroup',
            'Values': [thisLaunchGroup]
        },
        {
            'Name': 'instance-state-name',
            'Values': ['stopped']
        }
    ]

    # get the instances based on the filter, thisLaunchGroup and stopped
    instances = ec2.instances.filter(Filters=filters)

    # get the stopped instance IDs
    stoppedInstances = [instance.id for instance in instances]

    # make sure there are some instances not already started
    if len(stoppedInstances) > 0:
        startingUp = ec2.instances.filter(InstanceIds=stoppedInstances).start()

    print("Finished launching all instances for tag: " + thisLaunchGroup)
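One small hardening worth considering (my assumption, not something the trigger guarantees): instance.tags is None for an untagged instance, so the loop in get_instance_LaunchGroup would raise a TypeError. A tolerant helper might look like:

```python
def get_tag(tags, key, default=''):
    # tags is the list boto3 returns from instance.tags, which may be None
    for tag in tags or []:
        if tag["Key"] == key:
            return tag["Value"]
    return default

print(get_tag(None, "LaunchGroup"))  # '' (untagged instance)
print(get_tag([{"Key": "LaunchGroup", "Value": "g1"}], "LaunchGroup"))  # g1
```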
So, here's how I got the tags in my Python code for my Lambda function.
ec2 = boto3.resource('ec2')
instance = ec2.Instance(instanceId)

# get image_id from instance-id
imageId = instance.image_id
print(imageId)

for tags in instance.tags:
    if tags["Key"] == 'Name':
        newName = tags["Value"] + ".mydomain.com"
        print(newName)
So I used instance.tags, checked each "Key" for my Name tag, and pulled out the "Value" to build the FQDN (Fully Qualified Domain Name).
I already have a namespace created and defined tags applied to my resources. When I try adding new tags to the resources, the old tags get deleted. I would like to keep the old data and return it along with the new tags. Please help me with how I can achieve this.
get volume details from a specific compartment
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)
get_volume_response = core_client.get_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value")

# Get the data from response
print(get_volume_response.data)
output
{
  "availability_domain": "eto:PHX-AD-1",
  "compartment_id": "ocid1.compartment.oc1..aaaaaaaapmj",
  "defined_tags": {
    "OMCS": {
      "CREATOR": "xyz#gmail.com"
    },
    "Oracle-Tags": {
      "CreatedBy": "xyz#gmail.com",
      "CreatedOn": "2022-07-5T08:29:24.865Z"
    }
  },
  "display_name": "test_VG",
  "freeform_tags": {},
  "id": "ocid1.volumegroup.oc1.phx.abced",
  "is_hydrated": null,
  "lifecycle_state": "AVAILABLE",
  "size_in_gbs": 100,
  "size_in_mbs": 102400,
  "source_details": {
    "type": "volumeIds",
    "volume_ids": [
      "ocid1.volume.oc1.phx.xyz"
    ]
  }
}
I want the API below to update the tag along with the old data.
old tag
"defined_tags": {
  "OMCS": {
    "CREATOR": "xyz#gmail.com"
  },
  "Oracle-Tags": {
    "CreatedBy": "xyz#gmail.com",
    "CreatedOn": "2022-07-5T08:29:24.865Z"
  }
}
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)
update_volume_response = core_client.update_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags={
            'OMCS': {
                'INSTANCE': 'TEST',
                'COMPONENT': 'temp1.mt.exy.vcn.com'
            }
        },
        display_name="TEMPMT01"))
print(update_volume_response.data)
I also tried but got an attribute error.
for tag in get_volume_response.data:
    def_tag.appened(tag.defined_tags)
return (def_tag)
Please help on how can I append the defined_tags?
Tags are defined as a dict in OCI. Appending works the same way as appending to any dict.
Below I have pasted the code for updating the defined_tags for Block Volumes in OCI:
import oci
from oci.config import from_file

configAPI = from_file()  # Config file is read from user's home location i.e., ~/.oci/config
core_client = oci.core.BlockstorageClient(configAPI)
get_volume_response = core_client.get_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen")

# Get the data from response
volume_details = get_volume_response.data
defined_tags = getattr(volume_details, "defined_tags")
freeform_tags = getattr(volume_details, "freeform_tags")

# Add new tags as required. As defined_tags is a dict, addition of new key/value pairs works like below.
# In case there are multiple tags to be added then use update() method of dict.
defined_tags["OMCS"]["INSTANCE"] = "TEST"
defined_tags["OMCS"]["COMPONENT"] = "temp1.mt.exy.vcn.com"

myJson = {"freeform_tags": freeform_tags, "defined_tags": defined_tags}

update_volume_response = core_client.update_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags=defined_tags,
        freeform_tags=freeform_tags))
print(update_volume_response.data)
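The merge step can be seen in isolation: the one-by-one assignments above and dict.update() are equivalent, and neither discards the existing keys (pure Python, no OCI calls; tag names are the ones from the question):

```python
# The defined_tags from the question, as a plain dict (no OCI calls here)
defined_tags = {
    "OMCS": {"CREATOR": "xyz#gmail.com"},
    "Oracle-Tags": {"CreatedBy": "xyz#gmail.com"},
}

# Merge several new keys at once with dict.update() instead of one-by-one assignment
defined_tags["OMCS"].update({
    "INSTANCE": "TEST",
    "COMPONENT": "temp1.mt.exy.vcn.com",
})

# The old keys survive alongside the new ones
print(sorted(defined_tags["OMCS"]))  # ['COMPONENT', 'CREATOR', 'INSTANCE']
```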
I'm trying to create a DynamoDB table using a CloudFormation stack, however I keep receiving the 'CREATE_FAILED' error in the AWS console and I'm not sure where I'm going wrong.
My method to create_stack:
import json

import boto3
from botocore.exceptions import ClientError

cf = boto3.client('cloudformation')
stack_name = 'teststack'

with open('dynamoDBTemplate.json') as json_file:
    template = json.load(json_file)
template = str(template)

try:
    response = cf.create_stack(
        StackName=stack_name,
        TemplateBody=template,
        TimeoutInMinutes=123,
        ResourceTypes=[
            'AWS::DynamoDB::Table',
        ],
        OnFailure='DO_NOTHING',
        EnableTerminationProtection=True
    )
    print(response)
except ClientError as e:
    print(e)
And here is my JSON file:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "myDynamoDBTable": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "AttributeDefinitions": [
          {
            "AttributeName": "Filename",
            "AttributeType": "S"
          },
          {
            "AttributeName": "Positive Score",
            "AttributeType": "S"
          },
          {
            "AttributeName": "Negative Score",
            "AttributeType": "S"
          },
          {
            "AttributeName": "Mixed Score",
            "AttributeType": "S"
          }
        ],
        "KeySchema": [
          {
            "AttributeName": "Filename",
            "KeyType": "HASH"
          }
        ],
        "ProvisionedThroughput": {
          "ReadCapacityUnits": "5",
          "WriteCapacityUnits": "5"
        },
        "TableName": "testtable"
      }
    }
  }
}
My console prints the created stack but there is no clear indication in the console as to why it keeps failing.
Take a look at the Events tab for your stack. It will show you the detailed actions and explain which step first failed. Specifically it will tell you:
One or more parameter values were invalid: Number of attributes in KeySchema does not exactly match number of attributes defined in AttributeDefinitions (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 12345; Proxy: null)
The problem is that you have provided definitions for all of your table attributes. You should only provide the key attributes.
Per the AttributeDefinitions documentation:
[AttributeDefinitions is] A list of attributes that describe the key schema for the table and indexes.
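A corrected version of the relevant properties keeps only the key attribute (sketched here as a Python dict mirroring the JSON template; the score fields simply disappear from AttributeDefinitions, and since DynamoDB is schemaless for non-key attributes they can still be written on items):

```python
properties = {
    "AttributeDefinitions": [
        # Only attributes used in KeySchema (or in index keys) belong here
        {"AttributeName": "Filename", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "Filename", "KeyType": "HASH"},
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": "5", "WriteCapacityUnits": "5"},
    "TableName": "testtable",
}

# The validation that failed: both lists must name exactly the same attributes
defined = {a["AttributeName"] for a in properties["AttributeDefinitions"]}
keyed = {k["AttributeName"] for k in properties["KeySchema"]}
print(defined == keyed)  # True
```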
My task: Parse output from "aws ec2 describe-instances" json output to gather various instance details including the "Name" tag assigned to the instance.
My Code excerpt:
# Query AWS ec2 for instance information
my_aws_instances = subprocess.check_output(
    "/home/XXXXX/.local/bin/aws ec2 describe-instances --region %s --profile %s"
    % (region, current_profile), shell=True)

# Convert AWS json to python dictionary
my_instance_dict = json.loads(my_aws_instances)

# Pre-enter the 'Reservations' top level dictionary to make 'if' statement below cleaner.
my_instances = my_instance_dict['Reservations']

if my_instances:
    for my_instance in my_instances:
        if 'PublicIpAddress' in my_instance['Instances'][0]:
            public_ip = my_instance['Instances'][0]['PublicIpAddress']
        else:
            public_ip = "None"
        if 'PrivateIpAddress' in my_instance['Instances'][0]:
            private_ip = my_instance['Instances'][0]['PrivateIpAddress']
        else:
            private_ip = "None"
        if 'Tags' in my_instance['Instances'][0]:
            tag_list = my_instance['Instances'][0]['Tags']
            for tag in tag_list:
                my_tag = tag.get('Key')
                if my_tag == "Name":
                    instance_name = tag.get('Value')
                else:
                    instance_name = "None"
        state = my_instance['Instances'][0]['State']['Name']
        instance_id = my_instance['Instances'][0]['InstanceId']
        instance_type = my_instance['Instances'][0]['InstanceType']
Here's an example of what is contained in the "tag" variable as it loops. This is a python dictionary:
{'Value': 'server_name01.domain.com', 'Key': 'Name'}
If it helps, this is the raw json for the instance tags:
"Tags": [
    {
        "Value": "migration test",
        "Key": "Name"
    }
],
Everything is working except for the "Tags" section which works in some cases and doesn't work in others, even though the "Name" value which I'm testing for exists in all cases. In other words I'm getting "None" as a result on some instances that do indeed have a "Name" tag and a name.
I've ruled out problems with the server names themselves i.e. spaces and special chars screwing with the result.
I've also tried to make sure that python is looking for exactly "Name" and no other variations.
I'm perplexed at this point and any help would be appreciated.
Thanks in advance
You have a logic problem here:
for tag in tag_list:
    my_tag = tag.get('Key')
    if my_tag == "Name":
        instance_name = tag.get('Value')
    else:
        instance_name = "None"
Assume you have an instance with two tags,
[
    {
        "Key": "Name",
        "Value": "My_Name"
    },
    {
        "Key": "foo",
        "Value": "bar"
    }
]
When it iterates through the for loop, it first evaluates the Name: My_Name key-value pair and sets instance_name to My_Name. However, the loop keeps running, and when it evaluates the second key-value pair it sets instance_name to "None", overwriting the previously assigned value.
One easy solution would be to exit the for loop upon finding the Name key, e.g.:
for tag in tag_list:
    my_tag = tag.get('Key')
    if my_tag == "Name":
        instance_name = tag.get('Value')
        break
    else:
        instance_name = "None"
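Alternatively, the loop-and-break can collapse into a single next() expression, which stops at the first match and supplies the default itself (a stylistic sketch, not required):

```python
tag_list = [
    {"Key": "foo", "Value": "bar"},
    {"Key": "Name", "Value": "My_Name"},
]

# next() returns the first matching Value and stops there;
# the second argument is the default when no Name tag exists
instance_name = next(
    (tag["Value"] for tag in tag_list if tag.get("Key") == "Name"),
    "None",
)
print(instance_name)  # My_Name
```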
I am starting to use AWS SageMaker for the development of my machine learning model, and I'm trying to build a Lambda function to process the responses of a SageMaker labeling job. I have already created my own Lambda function, but when I try to read the event contents I can see that the event dict is completely empty, so I'm not getting any data to read.
I have already given enough permissions to the role of the lambda function. Including:
- AmazonS3FullAccess.
- AmazonSagemakerFullAccess.
- AWSLambdaBasicExecutionRole
I've tried using this code for the Post-annotation Lambda (adapted for python 3.6):
https://docs.aws.amazon.com/sagemaker/latest/dg/sms-custom-templates-step2-demo1.html#sms-custom-templates-step2-demo1-post-annotation
As well as this one in this git repository:
https://github.com/aws-samples/aws-sagemaker-ground-truth-recipe/blob/master/aws_sagemaker_ground_truth_sample_lambda/annotation_consolidation_lambda.py
But none of them seemed to work.
For creating the labeling job I'm using boto3's functions for sagemaker:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_labeling_job
This is the code I have for creating the labeling job:
def create_labeling_job(client, bucket_name, labeling_job_name, manifest_uri, output_path):
    print("Creating labeling job with name: %s" % (labeling_job_name))
    response = client.create_labeling_job(
        LabelingJobName=labeling_job_name,
        LabelAttributeName='annotations',
        InputConfig={
            'DataSource': {
                'S3DataSource': {
                    'ManifestS3Uri': manifest_uri
                }
            },
            'DataAttributes': {
                'ContentClassifiers': [
                    'FreeOfAdultContent',
                ]
            }
        },
        OutputConfig={
            'S3OutputPath': output_path
        },
        RoleArn='arn:aws:myrolearn',
        LabelCategoryConfigS3Uri='s3://' + bucket_name + '/config.json',
        StoppingConditions={
            'MaxPercentageOfInputDatasetLabeled': 100,
        },
        LabelingJobAlgorithmsConfig={
            'LabelingJobAlgorithmSpecificationArn': 'arn:image-classification'
        },
        HumanTaskConfig={
            'WorkteamArn': 'arn:my-private-workforce-arn',
            'UiConfig': {
                'UiTemplateS3Uri': 's3://' + bucket_name + '/templatefile'
            },
            'PreHumanTaskLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox',
            'TaskTitle': 'Title',
            'TaskDescription': 'Description',
            'NumberOfHumanWorkersPerDataObject': 1,
            'TaskTimeLimitInSeconds': 600,
            'AnnotationConsolidationConfig': {
                'AnnotationConsolidationLambdaArn': 'arn:aws:my-custom-post-annotation-lambda'
            }
        }
    )
    return response
And this is the one I have for the Lambda function:
print("Received event: " + json.dumps(event, indent=2))
print("event: %s" % (event))
print("context: %s" % (context))
print("event headers: %s" % (event["headers"]))

parsed_url = urlparse(event['payload']['s3Uri'])
print("parsed_url: ", parsed_url)

labeling_job_arn = event["labelingJobArn"]
label_attribute_name = event["labelAttributeName"]

label_categories = None
if "label_categories" in event:
    label_categories = event["labelCategories"]
    print(" Label Categories are : " + label_categories)

payload = event["payload"]
role_arn = event["roleArn"]

output_config = None  # Output s3 location. You can choose to write your annotation to this location
if "outputConfig" in event:
    output_config = event["outputConfig"]

# If you specified a KMS key in your labeling job, you can use the key to write
# consolidated_output to s3 location specified in outputConfig.
kms_key_id = None
if "kmsKeyId" in event:
    kms_key_id = event["kmsKeyId"]

# Create s3 client object
s3_client = S3Client(role_arn, kms_key_id)

# Perform consolidation
return do_consolidation(labeling_job_arn, payload, label_attribute_name, s3_client)
I've tried debugging the event object with:
print("Received event: " + json.dumps(event, indent=2))
But it just prints an empty dictionary: Received event: {}
I expect the output to be something like:
# Content of an example event:
{
    "version": "2018-10-16",
    "labelingJobArn": <labelingJobArn>,
    "labelCategories": [<string>],  # If you created labeling job using aws console, labelCategories will be null
    "labelAttributeName": <string>,
    "roleArn": "string",
    "payload": {
        "s3Uri": <string>
    },
    "outputConfig": "s3://<consolidated_output configured for labeling job>"
}
Lastly, when I try to get the labeling job ARN with:
labeling_job_arn = event["labelingJobArn"]
I just get a KeyError (which makes sense because the dictionary is empty).
I am doing the same, but in the labeled-object section I am getting a failed result, and inside my output objects I am getting the following error from the post-annotation Lambda function:
"annotation-case0-test3-metadata": {
    "retry-count": 1,
    "failure-reason": "ClientError: The JSON output from the AnnotationConsolidationLambda function could not be read. Check the output of the Lambda function and try your request again.",
    "human-annotated": "true"
}
}
I found the problem: I needed to add the ARN of the role used by my Lambda function as a trusted entity on the role used for the SageMaker labeling job.
I just went to Roles > MySagemakerExecutionRole > Trust Relationships and added:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxx:role/My-Lambda-Role",
          ...
        ],
        "Service": [
          "lambda.amazonaws.com",
          "sagemaker.amazonaws.com",
          ...
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
This made it work for me.
I am trying to add a tag to existing ec2 instances using create_tags.
ec2 = boto3.resource('ec2', region_name=region)
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name',
                                           'Values': ['running']}])
for instance in instances:
    ec2.create_tags([instance.id], {"TagName": "TagValue"})
This is giving me this error:
TypeError: create_tags() takes exactly 1 argument (3 given)
First, you cannot use boto3.resource("ec2") like that. boto3.resource() is a high-level layer that associates with particular resources, so the filter above already returns the individual instance resources. Per the collection documentation, the call always looks like this:
# resource will inherit associate instances/services resource.
tag = resource.create_tags(
    DryRun=True|False,
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
So in your code, you just call it directly on each instance resource in the collection:
for instance in instances:
    instance.create_tags(Tags=[{'Key': 'TagName', 'Value': 'TagValue'}])
Next is the tag format; follow the documentation. You got the filter format correct, but not the tag structure, which must be a list of Key/Value dicts:
response = client.create_tags(
    DryRun=True|False,
    Resources=[
        'string',
    ],
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
On the other hand, boto3.client() is a low-level client that requires explicit resource IDs:
import boto3

ec2 = boto3.client("ec2")
reservations = ec2.describe_instances(
    Filters=[{'Name': 'instance-state-name',
              'Values': ['running']}])["Reservations"]

mytags = [{
    "Key": "TagName",
    "Value": "TagValue"
}, {
    "Key": "APP",
    "Value": "webapp"
}, {
    "Key": "Team",
    "Value": "xteam"
}]

for reservation in reservations:
    for each_instance in reservation["Instances"]:
        ec2.create_tags(
            Resources=[each_instance["InstanceId"]],
            Tags=mytags
        )
(update)
A reason to use resources is code reuse across object types, i.e., the following wrapper lets you create tags on any taggable resource:
def make_resource_tag(resource, tags_dictionary):
    return resource.create_tags(Tags=tags_dictionary)