I am new to the Serverless Framework and AWS, and I need to create a Lambda function in Python that will send an email whenever an EC2 instance is shut down, but I really don't know how to do it using Serverless. Could anyone help me do that, or at least give me some pointers to start with?
You can use CloudWatch for this.
You can create a CloudWatch rule with:
Service Name - EC2
Event Type - EC2 Instance State-change Notification
Specific state(s) - shutting-down
Then use an SNS target to deliver email.
Using Serverless, you can define the event trigger for your function like this:
functions:
  shutdownEmailer:
    handler: shutdownEmailer.handler
    events:
      - cloudwatchEvent:
          event:
            source:
              - "aws.ec2"
            detail-type:
              - "EC2 Instance State-change Notification"
            detail:
              state:
                - shutting-down
          enabled: true
Then, you can expect your lambda to be called every time that event happens.
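For reference, a minimal sketch of what shutdownEmailer.py could look like, just logging the event to confirm the trigger fires (sending the actual email is covered in the next answer):

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # The CloudWatch Event rule delivers the EC2 state-change event as a dict
    detail = event.get("detail", {})
    logger.info("Instance %s entered state %s", detail.get("instance-id"), detail.get("state"))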
What you want is a CloudWatch Event.
In short, a CloudWatch event is capable of triggering a Lambda function and passing it something like this:
{
  "version": "0",
  "id": "123-456-abc",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "1234567",
  "time": "2015-11-11T21:36:16Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:12312312312:instance/i-abcd4444"
  ],
  "detail": {
    "instance-id": "i-abcd4444",
    "state": "shutting-down"
  }
}
From there, you can parse this information in your Python code running on Lambda. To get the Instance ID of the shutting-down instance, you would use something like this:
instance_id = event["detail"]["instance-id"]
Then you can use the Amazon SES (Simple Email Service) API, with help from the official boto3 library, to send an email. See: http://boto3.readthedocs.io/en/latest/reference/services/ses.html#SES.Client.send_email
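As a rough sketch of the SES call (the sender and recipient addresses below are placeholders and must be verified in SES first):

import boto3

ses = boto3.client("ses")

def send_shutdown_email(instance_id):
    # Placeholder addresses: both need to be verified in SES
    # (or your account moved out of the SES sandbox)
    ses.send_email(
        Source="alerts@example.com",
        Destination={"ToAddresses": ["ops@example.com"]},
        Message={
            "Subject": {"Data": f"EC2 instance {instance_id} is shutting down"},
            "Body": {"Text": {"Data": f"Instance {instance_id} changed state to shutting-down."}},
        },
    )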
Of course, you will also need a proper IAM role, with the necessary privileges to use SES, attached to your Lambda function. You can make a new one easily on the AWS IAM Roles page.
It might seem overwhelming at first. For starters:
go to https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create (if link is broken: AWS Dashboard > CloudWatch > Rules)
Create a new rule.
Under "Event Source" select EC2 as Service Name, and "EC2 Instance State-change Notification" as Event Type.
Click on "Specific States". You can simply select "shutting-down" here but I would also choose "stopping" and "terminated" just to make sure.
Save it, go to Lambda, add this Event in Triggers tab and start writing your code.
I'm trying to debug an AWS Lambda function that's using a Docker image, as described here. I'm using the stock AWS Python image: public.ecr.aws/lambda/python:3.8
I'm able to follow the steps described in the above link to test my function locally and it works just fine:
docker run -p 9000:8080 hello-world, followed by curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' in another Terminal window, properly performs the function I'm expecting. However, once this is running in Lambda, after successfully tagging the image and pushing it to AWS ECR, the function doesn't seem to be working, and I'm not able to find any logs to debug the failed/missing executions.
I'm at a bit of a loss in terms of where these logs are stored, and/or what configuration I may be missing to get these logs into CloudWatch or something similar. Where can I expect to find these logs to further debug my lambda function?
So, there are no technical differences between working with Docker images in Lambda compared to deploying the code as a zip or from S3. As for the logs, according to the AWS documentation (this is the description directly from the docs):
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
So, even the most basic code would have some sort of logging within your Lambda. My suggestions to troubleshoot in this case:
1 - Go to your Lambda function in the console and try to access the CloudWatch logs directly from there. Make sure to confirm the default region in which your function was deployed.
2 - If the logs exist (the log group for the Lambda function exists), then check whether your code raises any exceptions.
3 - If there are errors indicating that the CloudWatch log group or the function's log group doesn't exist, then check your Lambda's configuration directly in the console or, if you are using a framework like Serverless, in your project's configuration.
4 - Finally, if everything seems OK, this could come down to one simple thing: the user permissions of your account or the role permissions of your Lambda function (which is mostly the case in these situations).
One thing that you should check is the basic role generated for your Lambda, which ensures that it can create new log groups.
An example policy should look something like this (you can also manually attach the managed CloudWatch Logs policy; the effect should be similar):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:us-east-1:XXXXXXXXXX:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:XXXXXXXXXX:log-group:/aws/lambda/<YOUR-LAMBDA-FUNCTION>:*"
      ]
    }
  ]
}
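If you prefer to check this from code rather than the console, a small sketch with boto3 (the role name is a placeholder for your function's execution role):

import boto3

iam = boto3.client("iam")
role_name = "my-lambda-execution-role"  # placeholder: your function's execution role

# Managed policies attached to the role
print(iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"])

# Inline policies defined directly on the role
print(iam.list_role_policies(RoleName=role_name)["PolicyNames"])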
More related to this issue can be found here:
https://aws.amazon.com/pt/premiumsupport/knowledge-center/lambda-cloudwatch-log-streams-error/
I mention this because I have frequently used Docker for code dependencies with Lambda, based on this tutorial from when the feature was introduced:
https://aws.amazon.com/pt/blogs/aws/new-for-aws-lambda-container-image-support/
Hopefully this was helpful!
Feel free to leave additional comments.
As a special case, when using the Serverless Framework I had to use the following to get the logs into CloudWatch.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: dict, context: dict) -> dict:
    logger.info(json.dumps(event))
    # ...
    return {'statusCode': 200, 'body': json_str}
In my case, the Lambda function runs inside an ECR Docker container.
I want to programmatically get the environment name (dev/qa/prod) where my AWS Lambda function is running or being executed. I don't want to pass it as part of my environment variables.
How do we do that?
In AWS, everything is in "Production". That is, there is no concept of "AWS for Dev" or "AWS for QA".
It is up to you to create resources that you declare to be Dev, QA or Production. Some people do this in different AWS Accounts, or at least use different VPCs for each environment.
Fortunately, you mention that "each environment in AWS has a different role".
This means that the AWS Lambda function can call get_caller_identity() to obtain "details about the IAM user or role whose credentials are used to call the operation":
import boto3

def lambda_handler(event, context):
    sts_client = boto3.client('sts')
    print(sts_client.get_caller_identity())
It returns:
{
  "UserId": "AROAJK7HIAAAAAJYPQN7E:My-Function",
  "Account": "111111111111",
  "Arn": "arn:aws:sts::111111111111:assumed-role/my-role/My-Function",
  ...
}
Thus, you could extract the name of the Role being used from the Arn.
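For example, with the assumed-role ARN shown above, one way to pull the role name out:

arn = "arn:aws:sts::111111111111:assumed-role/my-role/My-Function"

# The ARN ends with assumed-role/<role-name>/<session-name>
role_name = arn.split("/")[1]
print(role_name)  # my-role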
Your Lambda function can do one of the following:
def handler_name(event, context): - read the data from the event dict. The caller will have to add this argument (since I don't know what the Lambda trigger is, I can't tell whether this is a good solution).
Read the data from S3 (or other storage like DB)
Read the data from AWS Systems Manager Parameter Store
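For the Parameter Store option, a minimal sketch (the parameter name is a placeholder; you would create one parameter per environment, or store the environment name in it):

import boto3

ssm = boto3.client("ssm")

# Placeholder parameter name holding the environment name
response = ssm.get_parameter(Name="/myapp/environment")
environment = response["Parameter"]["Value"]  # e.g. "dev", "qa" or "prod"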
I don't want to give as part of my environment variables
Why?
We send up custom metrics to AWS using Python (see existing code below) and separately use the AWS CloudWatch Agent to send up metrics for our EC2 machine. However, we'd like to stop sending the custom metrics through a boto client and instead send them up using the AWS CloudWatch agent.
I've found details on how to send up custom metrics from StatsD and collectd, but it's unclear how to send up your own custom metrics. I'm guessing we'll have to export our metrics in a similar data format to one of these, but it's unclear how to do that. In summary, we need to:
Export the metric in Python to a log file in the right format
Update the AWS CloudWatch Agent to read from those log files and upload the metric
Does anyone have an example that covers that?
Existing Code
import boto3

cloudwatch = boto3.client(
    service_name="cloudwatch",
    region_name=env["AWS_DEPLOYED_REGION"],
    api_version="2010-08-01",
)

cloudwatch.put_metric_data(
    Namespace="myNameSpace",
    MetricData=[
        {
            "MetricName": "someName",
            "Dimensions": [
                {"Name": "Stage", "Value": "..."},
                {"Name": "Purpose", "Value": "..."},
            ],
            "Values": values,
            "StorageResolution": 60,
            "Unit": "someUnit",
        },
    ],
)
CloudWatch Agent supports StatsD or CollectD for collecting custom metrics. There is no support for using the AWS CloudWatch SDK and pointing it to the CW Agent.
To use StatsD or collectd, you just follow the documentation for that specific tool; CloudWatch provides an adapter for both that interfaces with the CloudWatch Agent, as linked above. This is generally useful for people who already use StatsD or collectd for custom and application metrics; however, it's clearly painful in your case, as you will have to onboard to one or the other in order to achieve your desired effect.
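For illustration only, a minimal sketch of emitting a StatsD metric to the agent without any extra library, assuming the agent's StatsD listener is enabled on its default UDP port 8125 (the metric name just mirrors the namespace/name from your boto3 code):

import socket

# StatsD wire format: <metric.name>:<value>|<type>  (c = counter, g = gauge, ms = timer)
payload = "myNameSpace.someName:42|g"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload.encode("utf-8"), ("127.0.0.1", 8125))
sock.close()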
You can create CloudWatch agent config files in the /etc/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.d/ directory.
The config file should look like this:
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "path_to_log_file/app1.log",
            "log_group_name": "/app/custom.log",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
Restarting the CloudWatch agent will pick up this configuration automatically.
Another way is to append a config file manually using the command:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -s -c file:/path_to_json/custom_log.json
This log group will be available in the CloudWatch Logs console.
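On the Python side, the "export the metric to a log file" part of the question could then be as simple as appending lines to the file the agent tails. The path and line format below are assumptions; note this only ships the lines to CloudWatch Logs as-is, so turning them into numeric metrics would still need something like a CloudWatch Logs metric filter:

import json
import time

# Must match the file_path in the agent config above
LOG_PATH = "path_to_log_file/app1.log"

def write_metric(name, value, unit="Count"):
    # One JSON object per line keeps the log easy to filter later
    line = json.dumps({"timestamp": int(time.time()), "metric": name, "value": value, "unit": unit})
    with open(LOG_PATH, "a") as f:
        f.write(line + "\n")

write_metric("someName", 42, unit="someUnit")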
I'm trying to find a way to make an Alexa skill speak a response when it receives an external, non-vocal event. The application receives an event that occurs on an amazon-sqs queue. The Lambda application is connected and is triggered by that event through the function:
def lambda_handler(event, context):
At this point I would like the Alexa skill to answer me by telling me that the event has occurred. To do this I should create a JSON input to send to the skill; I can record the data of event['session'] when I start the skill, but how can I pass the data of event['request'] to launch an intent?
For example, this is the JSON input when I launch a request from the Alexa simulator:
{
  "version": "1.0",
  "session": {
    ...
  },
  "context": {
    ...
  },
  "request": {
    ...
  }
}
I can recreate the session dictionary by saving the data in a DynamoDB table, but what about the context and request?
Maybe my approach is completely mistaken.
How can I do this?
Take a look at: https://developer.amazon.com/docs/smapi/proactive-events-api.html. This is the supported way to do proactive speech and may be sufficient for you.
This sounds interesting. I am wondering: how are you going to keep the Skill open, or are you trying to get the Skill opened as soon as the event happens?
I understand that once the event happens you need to launch the Skill, and then in the LaunchRequest you do the speak(response). The tricky part here is launching/initiating/opening the Skill.
Also, you can try Request and Response Interceptors to catch the whole request and response.
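For reference, a rough sketch of what request/response interceptors look like with the ASK SDK for Python; the class and method names below are from ask_sdk_core as I remember them, so treat them as an assumption and check the SDK docs:

import logging
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import (
    AbstractRequestInterceptor,
    AbstractResponseInterceptor,
)

logger = logging.getLogger(__name__)
sb = SkillBuilder()

class LogRequestInterceptor(AbstractRequestInterceptor):
    def process(self, handler_input):
        # Logs the full incoming request envelope
        logger.info("Request: %s", handler_input.request_envelope)

class LogResponseInterceptor(AbstractResponseInterceptor):
    def process(self, handler_input, response):
        # Logs the outgoing response before it is returned to Alexa
        logger.info("Response: %s", response)

sb.add_global_request_interceptor(LogRequestInterceptor())
sb.add_global_response_interceptor(LogResponseInterceptor())

lambda_handler = sb.lambda_handler()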
I will dig a bit if this is possible to do.
Thanks :D
There is a defined conversation flow for my agent. Using some metric, at a certain point within that conversation flow the agent is supposed to prompt something to the user: the user hasn't said anything and the bot isn't reacting to user input; it is proactively letting the user know something at that point in time.
From my web search and previous Dialogflow forum discussions, 'Events' seemed to be the tool for this. I have a Python Django-based web server which takes in and responds to webhooks from my Dialogflow agent. And my chatbot interface is an Android app which I made based on the 'Dialogflow-android-client' sample app in Dialogflow's GitHub account (https://github.com/dialogflow/dialogflow-android-client). With this setup the basic conversational chatbot is working well. Now I am trying to implement the scenario described in the paragraph above.
To do this, I implemented sending a POST /query request to the Dialogflow agent to invoke an event (as given in the Dialogflow documentation: https://dialogflow.com/docs/events#invoking_event_from_webhook). It successfully invokes the intent which has that particular event associated with it. On the invocation of that intent using the event webhook, I get back a JSON response from the agent to my server in the format described in the documentation. But the client end (neither the Android app nor the Dialogflow developer console) is showing the text or speaking out the output text of the event-triggered intent. I am only getting all the JSON info at my server; nothing is going to the client app. Why is this? In previous Dialogflow forum discussions, one experienced person said that with event triggering through a webhook, you only get information back to where you initiated the trigger from and nowhere else. Is that true? In that same place it was said that the best way to get the response at the client app for the event trigger is to use the FollowupEvent tool instead.
So that's what I am doing: in the webhook response from my server for one intent that was activated by something the user said, I am including the FollowupEvent info in the response JSON. But still no response from the event-triggered intent arrives at the client app. I think in this case the intent to be triggered is simply not getting triggered, since if it were triggered, a POST webhook request for that intent would have come to the server, which it has not. Why is it not working? What is the resolution?
Here is the JSON being sent to the Dialogflow agent from my server (using FollowupEvent):
{
  "contextOut": [],
  "speech": " ",
  "displayText": " ",
  "source": "PSVA-server",
  "followupEvent": {
    "event": {
      "name": "testr",
      "data": {}
    }
  }
}
In my Dialogflow agent, I have created an intent which doesn't have any user_says example in it, has one event "testr" named in it, has an answer text, and has the 'webhook' option checked in fulfillment.
And finally, is there any better method to implement what I am trying to implement?