How to get logs from Docker image running in AWS Lambda function? - python

I'm trying to debug an AWS Lambda function that's using a Docker image, as described here. I'm using the stock AWS Python image: public.ecr.aws/lambda/python:3.8
I'm able to follow the steps described in the above link to test my function locally and it works just fine:
Running docker run -p 9000:8080 hello-world, followed by curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' in another Terminal window, properly performs the function I'm expecting. However, once this is running in Lambda, after successfully tagging the image and pushing it to AWS ECR, the function doesn't seem to be working, and I'm not able to find any logs to debug the failed/missing executions.
I'm at a bit of a loss in terms of where these logs are stored, and/or what configuration I may be missing to get these logs into CloudWatch or something similar. Where can I expect to find these logs to further debug my lambda function?

So, there are no technical differences between running a Lambda from a Docker image and running it from code packaged as a zip or in S3. As for the logs, according to AWS documentation (and this is the description directly from the docs):
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
So, even the most basic code should have some sort of logging within your Lambda. My suggestions to troubleshoot in this case:
1 - Go to your Lambda function in the console and try to access the CloudWatch logs directly from there. Make sure to confirm the default region in which your function was deployed.
2 - If the logs exist (i.e. the log group for the Lambda function exists), check whether your code raised any exceptions (a boto3 sketch for checking the log group from code follows right after this list).
3 - If there are errors indicating that the CloudWatch log group doesn't exist, or that the log group for the function doesn't exist, then check the configuration of your Lambda directly in the console or, if you are deploying with a framework like Serverless, check your project configuration.
4 - Finally, if everything seems OK, this usually comes down to one simple thing: user permissions on your account, or the role permissions of your Lambda function (which is the most common cause in these situations).
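If you prefer to check step 2 from code rather than the console, here is a minimal boto3 sketch; the function name hello-world and the region us-east-1 are placeholders taken from the question, so adjust them to your deployment:
import boto3

# Placeholder region and function name - adjust to your deployment.
logs = boto3.client('logs', region_name='us-east-1')

response = logs.describe_log_groups(logGroupNamePrefix='/aws/lambda/hello-world')
if response['logGroups']:
    print('Log group exists:', response['logGroups'][0]['logGroupName'])
else:
    print('No log group yet - check the execution role permissions below')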
One thing you should check is the basic execution role generated for your Lambda, which is what allows it to create new log groups.
An example policy would look something like this (you can also attach the managed CloudWatch Logs policy manually; the effect should be similar):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:XXXXXXXXXX:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:XXXXXXXXXX:log-group:/aws/lambda/<YOUR-LAMBDA-FUNCTION>:*"
            ]
        }
    ]
}
More on this issue can be found here:
https://aws.amazon.com/pt/premiumsupport/knowledge-center/lambda-cloudwatch-log-streams-error/
I say this because I have frequently used Docker images for code dependencies with Lambda, based on the tutorial from when this feature was first introduced:
https://aws.amazon.com/pt/blogs/aws/new-for-aws-lambda-container-image-support/
Hopefully this was helpful!
Feel free to leave additional comments.

For the special case when you are using the Serverless Framework, I had to use the following to get the logs into CloudWatch.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: dict, context: dict) -> dict:
    logger.info(json.dumps(event))
    # ... build json_str ...
    return {'statusCode': 200, 'body': json_str}
In my case, the Lambda function runs inside an ECR Docker container.

Related

Get environment name in AWS Lambda function in python

I want to programmatically get the environment name (dev/qa/prod) where my AWS Lambda function is running or being executed. I don't want to pass it as part of my environment variables.
How do we do that?
In AWS, everything is in "Production". That is, there is no concept of "AWS for Dev" or "AWS for QA".
It is up to you to create resources that you declare to be Dev, QA or Production. Some people do this in different AWS Accounts, or at least use different VPCs for each environment.
Fortunately, you mention that "each environment in AWS has a different role".
This means that the AWS Lambda function can call get_caller_identity() to obtain "details about the IAM user or role whose credentials are used to call the operation":
import boto3

def lambda_handler(event, context):
    sts_client = boto3.client('sts')
    print(sts_client.get_caller_identity())
It returns:
{
    "UserId": "AROAJK7HIAAAAAJYPQN7E:My-Function",
    "Account": "111111111111",
    "Arn": "arn:aws:sts::111111111111:assumed-role/my-role/My-Function",
    ...
}
Thus, you could extract the name of the Role being used from the Arn.
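For example, a small sketch of pulling the role name out of that ARN; the "-prod" suffix convention below is hypothetical and only illustrates how you might map a role name to an environment label:
# Reuses sts_client from the snippet above.
arn = sts_client.get_caller_identity()['Arn']
# "arn:aws:sts::111111111111:assumed-role/my-role/My-Function" -> "my-role"
role_name = arn.split(':assumed-role/')[1].split('/')[0]
# Hypothetical naming convention: derive the environment from the role name.
env = 'prod' if role_name.endswith('-prod') else 'dev'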
Your Lambda function can do one of the following:
def handler_name(event, context): - read the data from the event dict. The caller will have to add this argument (since I don't know what the Lambda trigger is, I can't tell whether this is a good solution).
Read the data from S3 (or other storage, like a DB).
Read the data from AWS Systems Manager Parameter Store (see the sketch after this list).
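A minimal sketch of the Parameter Store option, assuming a hypothetical parameter named /my-app/environment that you have created beforehand (the Lambda role also needs ssm:GetParameter permission):
import boto3

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    # '/my-app/environment' is a hypothetical parameter holding "dev", "qa" or "prod".
    response = ssm.get_parameter(Name='/my-app/environment')
    env = response['Parameter']['Value']
    print(f'Running in {env}')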
I don't want to pass it as part of my environment variables
Why?

Invoke Google Cloud Function with Parameters

I am trying to pass a parameter "env" while triggering/invoking a cloud function using Scheduler with http.
I am using a service account that has sufficient permissions for invoking functions and admin rights on Scheduler.
Passing the parameter works when the function allows un-authenticated invocation, but if the function is deployed with authentication it gives an error: { "status": "UNAUTHENTICATED" ....
It is worth noting that when I changed the function code so that it does not require a parameter, it worked successfully with the same service account.
So, it must be an issue with passing parameters.
The scheduler job setup looks like this:
The way I retrieve the parameter "env" in the function is as follows:
def fetchtest(request):
    env = request.args.get('env')
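In case Cloud Scheduler sends "env" in the request body rather than the query string, here is a hedged variant that checks both, using the standard Flask request object that Cloud Functions passes in; the function name mirrors the snippet above:
def fetchtest(request):
    # Try the query string first, then fall back to a JSON body.
    env = request.args.get('env')
    if env is None:
        body = request.get_json(silent=True) or {}
        env = body.get('env')
    return f'env={env}'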

How to disable default log messages from lambda in python

I have an AWS Lambda function written in Python, and I need only the messages I log to appear in CloudWatch Logs.
I have tried the example given in watchtower, but it still didn't work.
START RequestId: d0ba05dc-8506-11e8-82ab-afe2adba36e5 Version: $LATEST
(randomiser) Hello from Lambda
END RequestId: d0ba05dc-8506-11e8-82ab-afe2adba36e5
REPORT RequestId: d0ba05dc-8506-11e8-82ab-afe2adba36e5 Duration: 0.44 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
From the above I only need (randomiser) Hello from Lambda to be logged in CloudWatch, without the START, END and REPORT lines.
If you have logging enabled, you are always going to get the default log lines; there is no way to disable them.
However, there might be cases where you want one specific Lambda function not to send logs at all. You can solve this by creating a new role specifically for that Lambda function and not granting it the logging permissions.
FWIW, if you need to toggle between logging and no logging frequently, you can have a policy file as the following.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}
and change the "Deny" to "Allow" when you require logging.
In the AWS Lambda configuration you'll have a CloudWatch trigger configured so that the lambda is triggered by new log entries in CloudWatch. In that trigger configuration, you can specify a filter pattern, and - if you do - only those log lines that match the filter will be forwarded to your lambda.
The caveat (according to https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax) seems to be that the filter operates on JSON data only; I have not found a filter that operates on plain text (though, if you put your log message in quotes, it's potentially a valid JSON string and can be matched by the filter).
There is no direct way to disable these logs.
However, a simple workaround is to remove the CloudWatch Logs permission from the Lambda execution role.
The Lambda function uses this role to access other AWS services; if you remove the CloudWatch permission, it will not be able to push logs to CloudWatch.
Note: if you do this, you will not be able to push any logs from the Lambda to CloudWatch.

Errors while using sagemaker api to invoke endpoints

I've deployed an endpoint in SageMaker and was trying to invoke it through my Python program. I had tested it using Postman and it worked perfectly OK. Then I wrote the invocation code as follows:
import boto3
import pandas as pd
import io
import numpy as np

def np2csv(arr):
    csv = io.BytesIO()
    np.savetxt(csv, arr, delimiter=',', fmt='%g')
    return csv.getvalue().decode().rstrip()

runtime = boto3.client('runtime.sagemaker')
payload = np2csv(test_X)
runtime.invoke_endpoint(
    EndpointName='<my-endpoint-name>',
    Body=payload,
    ContentType='text/csv',
    Accept='Accept'
)
Now when I run this I get a validation error:
ValidationError: An error occurred (ValidationError) when calling the InvokeEndpoint operation: Endpoint <my-endpoint-name> of account <some-unknown-account-number> not found.
While using Postman I had given my access key and secret key, but I'm not sure how to pass them when using the SageMaker APIs; I'm not able to find it in the documentation either.
So my question is: how can I use the SageMaker API from my local machine to invoke my endpoint?
I also had this issue and it turned out that my region was wrong.
Silly, but worth a check!
When you are using any of the AWS SDK (including the one for Amazon SageMaker), you need to configure the credentials of your AWS account on the machine that you are using to run your code. If you are using your local machine, you can use the AWS CLI flow. You can find detailed instructions on the Python SDK page: https://aws.amazon.com/developers/getting-started/python/
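As a minimal sketch, assuming the endpoint lives in us-east-1, you can also pass the region (and, if you don't want to run aws configure, explicit credentials) directly when creating the boto3 client; the key values below are placeholders:
import boto3

runtime = boto3.client(
    'runtime.sagemaker',
    region_name='us-east-1',                        # must match the region of your endpoint
    aws_access_key_id='YOUR_ACCESS_KEY_ID',         # optional if configured via the AWS CLI
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)

# payload built as in the question, e.g. np2csv(test_X)
response = runtime.invoke_endpoint(
    EndpointName='<my-endpoint-name>',
    Body=payload,
    ContentType='text/csv',
)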
Please note that when you are deploying the code to a different machine, you will have to make sure that you are giving the EC2, ECS, Lambda or any other target a role that allows the call to this specific endpoint. While on your local machine it can be OK to give yourself admin rights or other permissive policies, when you are deploying to a remote instance you should restrict the permissions as much as possible, for example with a policy scoped to the specific endpoint:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sagemaker:InvokeEndpoint",
            "Resource": "arn:aws:sagemaker:*:1234567890:endpoint/<my-endpoint-name>"
        }
    ]
}
Based on #Jack's answer, I ran aws configure and changed the default region name and it worked.

send email whenever ec2 is shut down using serverless

I am new to the Serverless Framework and AWS, and I need to create a Lambda function in Python that will send an email whenever an EC2 instance is shut down, but I really don't know how to do it using Serverless. Could anyone help me do that, or at least give me some tracks to start with?
You can use CloudWatch for this.
You can create a CloudWatch rule with:
Service Name - EC2
Event Type - EC2 Instance State-change Notification
Specific state(s) - shutting-down
Then use an SNS target to deliver email.
Using serverless, you can define the event trigger for your function like this...
functions:
shutdownEmailer:
handler: shutdownEmailer.handler
events:
- cloudwatchEvent:
event:
source:
- "aws.ec2"
detail-type:
- "EC2 Instance State-change Notification"
detail:
state:
- shutting down
enabled: true
Then, you can expect your lambda to be called every time that event happens.
What you want is a CloudWatch Event.
In short, a CloudWatch event is capable of triggering a Lambda function and passing it something like this:
{
    "version": "0",
    "id": "123-456-abc",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "1234567",
    "time": "2015-11-11T21:36:16Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ec2:us-east-1:12312312312:instance/i-abcd4444"
    ],
    "detail": {
        "instance-id": "i-abcd4444",
        "state": "shutting-down"
    }
}
From there, you can parse this information in your Python code running on Lambda. To get the instance ID of the shutting-down instance, you would use something like this:
instance_id = event["detail"]["instance-id"]
Then you can use Amazon SES (Simple Email Service) API with help from official boto3 library and send an email. See: http://boto3.readthedocs.io/en/latest/reference/services/ses.html#SES.Client.send_email
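A minimal handler sketch tying the two together; the email addresses are hypothetical and, while your account is in the SES sandbox, both must be verified in SES:
import boto3

ses = boto3.client('ses', region_name='us-east-1')  # use a region where SES is available

def handler(event, context):
    instance_id = event['detail']['instance-id']
    ses.send_email(
        Source='alerts@example.com',                  # hypothetical, must be SES-verified
        Destination={'ToAddresses': ['me@example.com']},
        Message={
            'Subject': {'Data': f'EC2 instance {instance_id} is shutting down'},
            'Body': {'Text': {'Data': f'Instance {instance_id} changed state to shutting-down.'}},
        },
    )
    return {'statusCode': 200}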
Of course, you will also need a proper IAM role attached to your Lambda function with the privileges necessary to use SES. You can create one easily on the AWS IAM Roles page.
It might seem overwhelming at first. For starters:
go to https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create (if link is broken: AWS Dashboard > CloudWatch > Rules)
Create a new rule.
Under "Event Source" select EC2 as Service Name, and "EC2 Instance State-change Notification" as Event Type.
Click on "Specific States". You can simply select "shutting-down" here but I would also choose "stopping" and "terminated" just to make sure.
Save it, go to Lambda, add this Event in Triggers tab and start writing your code.
