How to disable default log messages from Lambda in Python

I have an AWS Lambda function written in Python, and I need only the messages I log to appear in CloudWatch Logs.
I have tried the example given in watchtower, but it still didn't work. The log output looks like this:
START RequestId: d0ba05dc-8506-11e8-82ab-afe2adba36e5 Version: $LATEST
(randomiser) Hello from Lambda
END RequestId: d0ba05dc-8506-11e8-82ab-afe2adba36e5
REPORT RequestId: d0ba05dc-8506-11e8-82ab-afe2adba36e5
Duration: 0.44 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
From the above I only need (randomiser) Hello from Lambda to be logged in CloudWatch, without the START, END and REPORT lines.

If you have logging enabled, you are always going to get these default log lines; there is no way to disable them from within the function.
However, there might be cases where you want one specific Lambda function not to send logs at all. You can solve this by creating a new execution role specifically for that Lambda function and leaving the logging permissions out of it.
FWIW, if you need to toggle between logging and no logging frequently, you can use a policy like the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}
and change "Deny" to "Allow" when you require logging.
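If you want to automate that toggle, a minimal sketch along these lines could work, assuming the Deny statement above is saved as a standalone managed policy (an explicit Deny overrides the role's existing Allow); the role name and policy ARN below are placeholders:
import boto3

iam = boto3.client("iam")

# Placeholders: your Lambda's execution role and the ARN of the managed
# policy that contains the Deny statement above.
ROLE_NAME = "my-lambda-execution-role"
DENY_LOGS_POLICY_ARN = "arn:aws:iam::123456789012:policy/deny-lambda-logging"

def disable_logging():
    # Attaching the Deny policy overrides the role's existing Allow on logs:*
    iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=DENY_LOGS_POLICY_ARN)

def enable_logging():
    # Detaching the Deny policy restores whatever logging permissions remain
    iam.detach_role_policy(RoleName=ROLE_NAME, PolicyArn=DENY_LOGS_POLICY_ARN)

# Example: disable_logging() before a noisy run, enable_logging() afterwards.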

In the AWS Lambda configuration you'll have a CloudWatch Logs trigger configured so that the Lambda is triggered by new log entries in CloudWatch. In that trigger configuration you can specify a filter pattern, and if you do, only the log lines that match the filter will be forwarded to your Lambda.
The caveat (according to https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax) seems to be that the filter operates on JSON data only; I have not found a filter that operates on plain text (though if you put your log message in quotes, it is potentially a valid JSON string and can be matched by the filter).
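For reference, this kind of trigger corresponds to a CloudWatch Logs subscription filter. A hedged sketch of setting one up with boto3 might look like the following; the log group name, filter pattern and function ARN are placeholders:
import boto3

logs = boto3.client("logs")

# Placeholders: the log group to read from and the Lambda that should
# receive the matching lines. Note: CloudWatch Logs also needs permission
# to invoke the destination function (typically granted via
# lambda add-permission with the logs.amazonaws.com principal).
logs.put_subscription_filter(
    logGroupName="/aws/lambda/randomiser",
    filterName="forward-app-messages-only",
    # Quoted terms match literal text; adjust the pattern to your log format.
    filterPattern='"Hello from Lambda"',
    destinationArn="arn:aws:lambda:eu-west-1:123456789012:function:log-consumer",
)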

There is no direct way to disable these logs.
However, a simple workaround is to remove the CloudWatch Logs permissions from the Lambda execution role.
The Lambda function uses this role to access other AWS services; if you remove the CloudWatch Logs permissions, it will not be able to push logs to CloudWatch.
Note: if you do this, you will not be able to push any logs from the Lambda to CloudWatch, including the ones you log yourself.

How to get logs from Docker image running in AWS Lambda function?

I'm trying to debug an AWS Lambda function that's using a Docker image, as described here. I'm using the stock AWS Python image: public.ecr.aws/lambda/python:3.8
I'm able to follow the steps described in the above link to test my function locally and it works just fine:
Running docker run -p 9000:8080 hello-world, followed by curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' in another terminal window, properly performs the function I'm expecting. However, once this is running in Lambda, after successfully tagging the image and pushing it to AWS ECR, the function doesn't seem to be working and I'm not able to find any logs to debug the failed/missing executions.
I'm at a bit of a loss in terms of where these logs are stored, and/or what configuration I may be missing to get these logs into CloudWatch or something similar. Where can I expect to find these logs to further debug my lambda function?
So, there are no technical differences between working with Docker images in Lambda and deploying the code as a zip or from S3. As for the logs, according to the AWS documentation (and this is the description directly from the docs):
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
So, the most basic code would have some sort of logging within your Lambda. My suggestions to troubleshoot:
1 - Go to your Lambda function and try to access the CloudWatch logs directly from the console. Make sure to confirm the default region in which your function was deployed.
2 - If the logs exist (the log group for the Lambda function exists), check whether your code raises any exceptions (a quick way to check from code is sketched after this list).
3 - If there are errors indicating that the CloudWatch log group for the function doesn't exist, check the configuration of your Lambda directly in the console or, if you are using a framework like Serverless or CloudFormation, the code structure.
4 - Finally, if everything seems OK, this could be related to one simple thing: user permissions on your account or the role permissions of your Lambda function (which is most often the case in these situations).
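For step 2, a minimal sketch of checking the log group from code, assuming the default region is the one the function was deployed to; the function name below is a placeholder:
import boto3

logs = boto3.client("logs")  # make sure this client uses the function's region

FUNCTION_NAME = "my-container-function"  # placeholder
LOG_GROUP = f"/aws/lambda/{FUNCTION_NAME}"

# Does the log group exist at all?
groups = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)
print([g["logGroupName"] for g in groups["logGroups"]])

# If it exists, dump the most recent events and look for raised exceptions.
events = logs.filter_log_events(logGroupName=LOG_GROUP, limit=50)
for e in events["events"]:
    print(e["message"], end="")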
One thing that you should check is the basic role generated for your Lambda, which ensures that it can create new log groups.
One policy example would be something like this (you can also attach the managed CloudWatch Logs policy manually; the effect should be similar):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:XXXXXXXXXX:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:XXXXXXXXXX:log-group:/aws/lambda/<YOUR-LAMBDA-FUNCTION>:*"
            ]
        }
    ]
}
More related to this issue can be found here:
https://aws.amazon.com/pt/premiumsupport/knowledge-center/lambda-cloudwatch-log-streams-error/
I say this because I have frequently used Docker for code dependencies with Lambda, based on this first tutorial from when the feature was introduced:
https://aws.amazon.com/pt/blogs/aws/new-for-aws-lambda-container-image-support/
Hopefully this was helpful!
Feel free to leave additional comments.
For the special case when you are using the Serverless Framework, I had to use the following to get the logs into CloudWatch:
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: dict, context: dict) -> dict:
    logger.info(json.dumps(event))
    # ...
    return {'statusCode': 200, 'body': json_str}
In my case, the Lambda function runs inside an ECR Docker container.

Logging custom metrics using AWS Cloudwatch Agent and Python

We send up custom metrics to AWS using Python (see existing code below) and separately use the AWS CloudWatch Agent to send up metrics for our EC2 machine. However, we'd like to stop sending the custom metrics through a boto client and instead send them up using the AWS CloudWatch agent.
I've found details on how to send up custom metrics from StatsD and collectd, but it's unclear how to send up your own custom metrics. I'm guessing we'll have to export our metrics in a similar data format to one of these, but it's unclear how to do that. In summary, we need to:
Export the metric in Python to a log file in the right format
Update the AWS CloudWatch Agent to read from those log files and upload the metric
Does anyone have an example that covers that?
Existing Code
import boto3

cloudwatch = boto3.client(
    service_name="cloudwatch",
    region_name=env["AWS_DEPLOYED_REGION"],
    api_version="2010-08-01",
)

cloudwatch.put_metric_data(
    Namespace="myNameSpace",
    MetricData=[
        {
            "MetricName": "someName",
            "Dimensions": [
                {"Name": "Stage", "Value": "..."},
                {"Name": "Purpose", "Value": "..."},
            ],
            "Values": values,
            "StorageResolution": 60,
            "Unit": "someUnit",
        },
    ],
)
The CloudWatch Agent supports StatsD or collectd for collecting custom metrics. There is no support for using the AWS CloudWatch SDK and pointing it at the CW Agent.
To use StatsD or collectd, you just follow the documentation for that specific tool. CloudWatch then provides an adapter for both that interfaces with the CloudWatch Agent, as linked above. This is generally useful for people who already use StatsD or collectd for custom and application metrics; it is clearly painful in your case, as you will have to onboard to one or the other to achieve your desired effect.
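For illustration, a minimal sketch of the StatsD route, assuming the CloudWatch Agent's statsd listener is enabled on its default port 8125 and the third-party statsd package is installed; the metric names are placeholders:
# pip install statsd
import statsd

# The CloudWatch Agent's statsd listener typically runs on localhost:8125.
client = statsd.StatsClient("localhost", 8125)

# Placeholder metric names; the agent forwards these to CloudWatch as custom metrics.
client.incr("myNameSpace.someCounter")           # counter
client.gauge("myNameSpace.queueDepth", 42)       # gauge
client.timing("myNameSpace.someLatencyMs", 320)  # timer, in milliseconds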
You can create CloudWatch Agent config files in the /etc/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.d/ directory.
The config file should look like this:
{
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "path_to_log_file/app1.log",
                        "log_group_name": "/app/custom.log",
                        "log_stream_name": "{instance_id}"
                    }
                ]
            }
        }
    }
}
Restarting the CloudWatch agent will pick up this configuration automatically.
Another way is to attach the config file manually using the command:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -s -c file:/path_to_json/custom_log.json
The log group will then be available in the CloudWatch Logs console.
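To close the loop on the original question, a hedged sketch of the Python side: write one JSON object per line to the file the agent collects (the file_path from the config above). The field names are placeholders, and you would still need a CloudWatch Logs metric filter or similar to turn these log entries into metrics:
import json
import time

LOG_PATH = "path_to_log_file/app1.log"  # must match file_path in the agent config

def emit_metric(name, value, unit="Count", **dimensions):
    # One JSON object per line keeps the entries easy to filter in CloudWatch Logs.
    record = {
        "timestamp": int(time.time() * 1000),
        "metric_name": name,
        "value": value,
        "unit": unit,
        "dimensions": dimensions,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

emit_metric("someName", 12.5, unit="someUnit", Stage="prod", Purpose="demo")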

Errors while using sagemaker api to invoke endpoints

I've deployed an endpoint in SageMaker and was trying to invoke it through my Python program. I had tested it using Postman and it worked perfectly fine. Then I wrote the invocation code as follows:
import boto3
import pandas as pd
import io
import numpy as np

def np2csv(arr):
    csv = io.BytesIO()
    np.savetxt(csv, arr, delimiter=',', fmt='%g')
    return csv.getvalue().decode().rstrip()

runtime = boto3.client('runtime.sagemaker')
payload = np2csv(test_X)
runtime.invoke_endpoint(
    EndpointName='<my-endpoint-name>',
    Body=payload,
    ContentType='text/csv',
    Accept='Accept'
)
Now when I run this I get a validation error:
ValidationError: An error occurred (ValidationError) when calling the InvokeEndpoint operation: Endpoint <my-endpoint-name> of account <some-unknown-account-number> not found.
While using Postman I had given my access key and secret key, but I'm not sure how to pass them when using the SageMaker APIs. I'm not able to find it in the documentation either.
So my question is, how can I use sagemaker api from my local machine to invoke my endpoint?
I also had this issue and it turned out to be my region was wrong.
Silly but worth a check!
When you are using any of the AWS SDKs (including the one for Amazon SageMaker), you need to configure the credentials of your AWS account on the machine that you are using to run your code. If you are using your local machine, you can use the AWS CLI flow. You can find detailed instructions on the Python SDK page: https://aws.amazon.com/developers/getting-started/python/
Please note that when you are deploying the code to a different machine, you will have to make sure that you give the EC2, ECS, Lambda or other target a role that allows the call to this specific endpoint. While on your local machine it can be OK to give yourself admin rights or other broad permissions, when you are deploying to a remote instance you should restrict the permissions as much as possible, for example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sagemaker:InvokeEndpoint",
            "Resource": "arn:aws:sagemaker:*:1234567890:endpoint/<my-endpoint-name>"
        }
    ]
}
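As a hedged sketch of the calling side, you can also be explicit about the region and credential profile when constructing the client; the region, profile name, endpoint name and payload below are placeholders:
import boto3

# Placeholders: use the region where the endpoint was deployed and, optionally,
# a named profile from your AWS CLI configuration.
session = boto3.session.Session(profile_name="default", region_name="us-east-1")
runtime = session.client("runtime.sagemaker")

payload = "1.0,2.0,3.0"  # example CSV row; in the question this comes from np2csv(test_X)

response = runtime.invoke_endpoint(
    EndpointName="<my-endpoint-name>",
    Body=payload,
    ContentType="text/csv",
)
print(response["Body"].read().decode())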
Based on @Jack's answer, I ran aws configure and changed the default region name, and it worked.

send email whenever ec2 is shut down using serverless

I am new to the Serverless Framework and AWS, and I need to create a Lambda function in Python that will send an email whenever an EC2 instance is shut down, but I really don't know how to do it using Serverless. So please, if anyone could help me do that or at least give me some pointers to start with.
You can use CloudWatch for this.
You can create a CloudWatch rule with:
Service Name - EC2
Event Type - EC2 Instance State-change Notification
Specific state(s) - shutting-down
Then use an SNS target to deliver the email.
Using Serverless, you can define the event trigger for your function like this:
functions:
  shutdownEmailer:
    handler: shutdownEmailer.handler
    events:
      - cloudwatchEvent:
          event:
            source:
              - "aws.ec2"
            detail-type:
              - "EC2 Instance State-change Notification"
            detail:
              state:
                - shutting-down
          enabled: true
Then, you can expect your lambda to be called every time that event happens.
What you want is a CloudWatch Event.
In short, a CloudWatch event is capable of triggering a Lambda function and passing it something like this:
{
    "version": "0",
    "id": "123-456-abc",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "1234567",
    "time": "2015-11-11T21:36:16Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ec2:us-east-1:12312312312:instance/i-abcd4444"
    ],
    "detail": {
        "instance-id": "i-abcd4444",
        "state": "shutting-down"
    }
}
From there, you can parse this information in your Python code running on Lambda. To get the instance ID of the shutting-down instance, you would use something like this:
instance_id = event["detail"]["instance-id"]
Then you can use the Amazon SES (Simple Email Service) API, with help from the official boto3 library, to send an email. See: http://boto3.readthedocs.io/en/latest/reference/services/ses.html#SES.Client.send_email
Of course, you will also need a proper IAM role with the privileges necessary to use SES attached to your Lambda function. You can create a new one easily on the AWS IAM Roles page.
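Putting the pieces together, a minimal handler sketch, assuming the sender address is verified in SES; the addresses below are placeholders:
import boto3

ses = boto3.client("ses")

# Placeholders: the sender must be a verified identity in SES.
SENDER = "alerts@example.com"
RECIPIENT = "ops@example.com"

def handler(event, context):
    # Pull the instance details out of the CloudWatch event shown above.
    instance_id = event["detail"]["instance-id"]
    state = event["detail"]["state"]
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [RECIPIENT]},
        Message={
            "Subject": {"Data": f"EC2 instance {instance_id} is {state}"},
            "Body": {"Text": {"Data": f"Instance {instance_id} entered state '{state}'."}},
        },
    )
    return {"statusCode": 200}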
It might seem overwhelming at first; for starters:
go to https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create (if link is broken: AWS Dashboard > CloudWatch > Rules)
Create a new rule.
Under "Event Source" select EC2 as Service Name, and "EC2 Instance State-change Notification" as Event Type.
Click on "Specific States". You can simply select "shutting-down" here but I would also choose "stopping" and "terminated" just to make sure.
Save it, go to Lambda, add this Event in Triggers tab and start writing your code.

Athena query fails with boto3 (S3 location invalid)

I'm trying to execute a query in Athena, but it fails.
Code:
client.start_query_execution(
    QueryString="CREATE DATABASE IF NOT EXISTS db;",
    QueryExecutionContext={'Database': 'db'},
    ResultConfiguration={
        'OutputLocation': "s3://my-bucket/",
        'EncryptionConfiguration': {
            'EncryptionOption': 'SSE-S3'
        }
    }
)
But it raises the following exception:
botocore.errorfactory.InvalidRequestException: An error occurred (InvalidRequestException)
when calling the StartQueryExecution operation: The S3 location provided to save your
query results is invalid. Please check your S3 location is correct and is in the same
region and try again. If you continue to see the issue, contact customer support
for further assistance.
However, if I go to the Athena console, go to Settings and enter the same S3 location (s3://my-bucket/), the query runs fine.
What's wrong with my code? I've used the APIs of several other services (e.g., S3) successfully, but in this one I believe I'm passing some incorrect parameters. Thanks.
Python: 3.6.1. Boto3: 1.4.4
I had to add an 'athena-' prefix to my bucket to get it to work. For example, instead of:
"s3://my-bucket/"
Try:
"s3://athena-my-bucket/"
EDIT: As suggested by Justin, AWS later added support for Athena by adding the athena- prefix to the bucket. Please upvote his answer.
Accepted Answer:
The S3 location provided to save your query results is invalid. Please check your S3 location is correct and is in the same region and try again.
Since it works when you use the console, it is likely the bucket is in a different region than the one you are using in Boto3. Make sure you use the correct region (the one that worked in the console) when constructing the Boto3 client. By default, Boto3 will use the region configured in the credentials file.
Alternatively, try boto3.client('athena', region_name='<region>').
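A hedged sketch of the full call with an explicit region, assuming the bucket lives in us-east-1 (substitute the region that worked in the console):
import boto3

# Use the region where both Athena and the results bucket live.
client = boto3.client('athena', region_name='us-east-1')

client.start_query_execution(
    QueryString="CREATE DATABASE IF NOT EXISTS db;",
    ResultConfiguration={
        'OutputLocation': "s3://my-bucket/",
        'EncryptionConfiguration': {'EncryptionOption': 'SSE-S3'}
    }
)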
Ran into the same issue and needed to specify the S3 bucket in the client.
In my case, the IAM role didn't have all the permissions for the S3 bucket. I gave the IAM role the following permissions for the Athena results bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::athena_results_bucket",
                "arn:aws:s3:::athena_results_bucket/*"
            ],
            "Effect": "Allow"
        }
    ]
}
I received the OP's error, tried Justin's answer, and got the following error:
SYNTAX_ERROR: line 1:15: Schema TableName does not exist
This meant it was not able to find the tables that I had previously created through the AWS Athena UI.
The simple solution was to use dclaze's answer instead. The two answers cannot be used simultaneously, or you will get back the initial (OP) error.
