Put (Lambda) subscription on CW Logs group - python

I am trying to put a subscription on a CW log group from a Lambda function that is scanning for Lambdas with the right tag. When calling put_subscription_filter, an error is thrown:
"An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter
operation: Could not execute the lambda function. Make sure you have given CloudWatch Logs
permission to execute your function."
The docs for put_subscription_filter state that the iam:PassRole permission is needed. I have granted this. I have also made sure it is not a permission issue for the Lambda function itself by giving it full admin rights.
Reading the error, it indicates that it is CloudWatch Logs that needs permission to execute a function; my guess is that they mean the subscription's destination function. I have tried a lot of different things here but still no cigar.
Setting a subscription filter in the console is straightforward and, as far as I can see, no policy is modified or created.
Does anyone have experience with this or any input?

You need to add a Lambda invoke permission so that CloudWatch Logs can invoke the destination Lambda when logs are available.
Using the AWS CLI is the simplest way:
aws lambda add-permission \
--function-name "helloworld" \
--statement-id "helloworld" \
--principal "logs.region.amazonaws.com" \
--action "lambda:InvokeFunction" \
--source-arn "arn:aws:logs:region:123456789123:log-group:TestLambda:*"
Using the console:
1. Go to Lambda Function
2. Configuration -> Permissions tab
3. Scroll down and Click Add permissions
4. Choose "AWS service"
5. Principal - logs.<region>.amazonaws.com (the CloudWatch Logs service principal), with your log group ARN as the Source ARN
6. Action - lambda:InvokeFunction
7. Statement Id - policy statement name, anything meaningful
8. Save
Once done through the CLI or console, try creating the CloudWatch subscription to that Lambda again.
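If you are doing this from the scanning Lambda in Python, a minimal boto3 sketch of the same two steps would look like the following; the region, account ID, log group and function names are placeholders:
import boto3

lambda_client = boto3.client("lambda")
logs_client = boto3.client("logs")

region = "us-east-1"          # placeholder
account_id = "123456789123"   # placeholder
log_group = "TestLambda"      # log group to subscribe
dest_function = "helloworld"  # destination Lambda

# Allow CloudWatch Logs to invoke the destination function.
lambda_client.add_permission(
    FunctionName=dest_function,
    StatementId=f"{log_group}-subscription",
    Action="lambda:InvokeFunction",
    Principal=f"logs.{region}.amazonaws.com",
    SourceArn=f"arn:aws:logs:{region}:{account_id}:log-group:{log_group}:*",
)

# With the permission in place, put_subscription_filter should no longer fail.
logs_client.put_subscription_filter(
    logGroupName=log_group,
    filterName="forward-to-lambda",
    filterPattern="",  # empty pattern forwards all events
    destinationArn=f"arn:aws:lambda:{region}:{account_id}:function:{dest_function}",
)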

Related

Lambda function cannot PutItem in DynamoDB database

Although I have given the function's IAM profile complete AdministratorAccess permissions, along with AmazonDynamoDBFullAccess, every time I test the function I am greeted with the same error message:
no identity-based policy allows the dynamodb:PutItem action.
How do I fix this? I literally cannot give the IAM profile more access, so I am very confused. I have given every permission I can give.
Two things I can think of:
Check that you are assigning the policies to the Lambda execution role (a boto3 sketch to verify this follows the links below):
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html
If you are part of an organization, check that you do not have any SCP policies in place preventing PutItem, as they would take precedence:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
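A quick way to confirm which role the function actually uses and what is attached to it is with boto3; the function name below is a placeholder:
import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

# Find the execution role the function really runs under.
role_arn = lambda_client.get_function_configuration(FunctionName="my-function")["Role"]
role_name = role_arn.split("/")[-1]
print("Execution role:", role_arn)

# List the managed policies attached to that role.
for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    print(policy["PolicyName"])

If AmazonDynamoDBFullAccess (or your admin policy) does not show up in that list, it was attached to a different role or to a user rather than to the execution role.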

Limiting EFS access on AWS Lambda for a user

Background
I have an AWS Lambda (of Image type) configured through a Dockerfile. The Lambda function should execute arbitrary Python code (the code is sent from a user, and therefore can be malicious and should be restricted).
There is important information stored in the EFS (mounted on /mnt/efs) which should be accessible from the Lambda, but not from the user's code. The EFS access point is configured as:
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: '1000'
      Gid: '1000'
    RootDirectory:
      CreationInfo:
        OwnerGid: '1000'
        OwnerUid: '1000'
        Permissions: '0777'
      Path: '/mnt/efs'
Initial idea that did not work
Restrict AccessPointResource to allow reads for only a specific group
Include the main lambda user in the group
Create a Linux user that is not in the group
When running the submitted code, run under the newly created user's credentials
The reasons why it didn't work:
When creating a user in the Dockerfile, the user disappears when deploying the image
Tried creating the user both with RUN /usr/sbin/useradd -ms /bin/bash coderunner in the Dockerfile and in entrypoint.sh
Tried creating the user inside the Lambda (in the Python code) - permission denied (the main user of the Lambda does not have permission to run /usr/sbin/useradd)
When specifying the user for Popen following the guide, all the commands fail with permission denied, for any user (even the current one); a sketch of that approach follows this list
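For reference, this is roughly what the Popen-based attempt looks like; it assumes Python 3.9+ (which added the user/group arguments) and a hypothetical UID 1001 outside the EFS access group. Inside Lambda it fails with permission denied as described, because the sandbox user is not allowed to switch to another UID:
import subprocess

# Run the untrusted code as a different UID so it cannot read /mnt/efs.
# Fails inside Lambda: the sandbox user (e.g. sbx_user1051) cannot setuid.
proc = subprocess.Popen(
    ["python3", "/tmp/user_code.py"],  # hypothetical path to the submitted code
    user=1001,   # hypothetical UID that is not in the EFS access group
    group=1001,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()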
Additional information
AWS Lambda seems to reset all the users and their permissions when the Docker image is deployed, to match the Lambda restrictions
It creates ~150 other users to manage access within the Lambda image
When printing os.getuid(), getpass.getuser(), os.getgroups() we get 993 sbx_user1051 []
When printing cat /etc/passwd we get ~150 users and none of them is the user that we created (coderunner)
The main question
Is there a way of permitting the main AWS Lambda code to access the EFS on /mnt/efs but restricting the access for a code launched through a python subprocess?

How to get logs from Docker image running in AWS Lambda function?

I'm trying to debug an AWS Lambda function that's using a Docker image, as described here. I'm using the stock AWS Python image: public.ecr.aws/lambda/python:3.8
I'm able to follow the steps described in the above link to test my function locally and it works just fine:
docker run -p 9000:8080 hello-world, followed by curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' in another Terminal window, properly performs the function I'm expecting. However, once this is running in Lambda (after successfully tagging the image and pushing it to AWS ECR), the function doesn't seem to be working, and I'm not able to find any logs to debug the failed/missing executions.
I'm at a bit of a loss in terms of where these logs are stored, and/or what configuration I may be missing to get these logs into CloudWatch or something similar. Where can I expect to find these logs to further debug my lambda function?
So, there are no technical differences between working with Docker images in Lambda compared to code as a zip or in S3. As for the logs, according to the AWS documentation (this description is directly from the docs):
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
So, the most basic code would have some sort of logging within your Lambda. My suggestions to troubleshoot in this case:
1 - Go to your Lambda function and try to access the CloudWatch logs directly from the console. Make sure to confirm the region in which your function was deployed.
2 - If the logs exist (the log group for the Lambda function exists), check whether there are any raised exceptions from your code (a boto3 sketch of this check follows the list).
3 - If there are any errors indicating that the CloudWatch log group doesn't exist, or that the function's log group doesn't exist, then check your Lambda's configuration directly in the console or, if you are using a framework like Serverless, in your project's configuration.
4 - Finally, if everything seems OK, this usually comes down to one simple thing: user permissions on your account, or the role permissions of your Lambda function (which is most often the case in these situations).
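If you would rather check steps 1-3 programmatically, here is a minimal boto3 sketch; the region and function name are placeholders:
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # use your function's region
group = "/aws/lambda/hello-world"                     # placeholder function name

# Steps 1 and 3 - does the log group exist at all?
groups = logs.describe_log_groups(logGroupNamePrefix=group)["logGroups"]
if not groups:
    print("Log group not found - check the function's region and execution role")
else:
    # Step 2 - print the most recent events, including any raised exceptions.
    for event in logs.filter_log_events(logGroupName=group, limit=20)["events"]:
        print(event["message"], end="")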
One thing that you should check is the basic execution role generated for your Lambda, which ensures that it can create new log groups.
An example policy would look like this (you can also attach the managed CloudWatch Logs policy manually; the effect is similar):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:us-east-1:XXXXXXXXXX:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:XXXXXXXXXX:log-group:/aws/lambda/<YOUR-LAMBDA-FUNCTION>:*"
      ]
    }
  ]
}
More on this issue can be found here:
https://aws.amazon.com/pt/premiumsupport/knowledge-center/lambda-cloudwatch-log-streams-error/
I say this because I have frequently used Docker for code dependencies with Lambda, based on the tutorial from when this feature was introduced:
https://aws.amazon.com/pt/blogs/aws/new-for-aws-lambda-container-image-support/
Hopefully this was helpful!
Feel free to leave additional comments.
For the special case where you are using the Serverless Framework, I had to use the following to get the logs into CloudWatch:
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: dict, context: dict) -> dict:
    logger.info(json.dumps(event))
    # ... build json_str for the response body ...
    return {'statusCode': 200, 'body': json_str}
In my case, the Lambda function runs inside an ECR Docker container.

What is "AWS_LOG_STREAM" in amazon-CloudWatch?

cloudwatch.CloudwatchHandler('AWS_KEY_ID','AWS_SECRET_KEY','AWS_REGION','AWS_LOG_GROUP','AWS_LOG_STREAM')
I am new to AWS CloudWatch and I am trying to use the lightweight cloudwatch handler in my Python project. I have all the values required for .CloudwatchHandler() except AWS_LOG_STREAM. I do not understand what AWS_LOG_STREAM is or where I can find that value in the AWS console. I googled it and found "A log stream is a sequence of log events that share the same source", but what does "same source" mean? And what value should I use for AWS_LOG_STREAM?
Any help is appreciated, and thank you in advance.
As Mohit said, the log stream is a subdivision of the log group, usually used to identify the original execution source (the time and ID of the container, Lambda, or process is common).
In the latest version you can skip naming the log stream, which will give it a timestamp-based log stream name:
handler = cloudwatch.CloudwatchHandler(log_group = 'my_log_group')
Disclaimer: I am a contributor to the cloudwatch package
AWS_LOG_STREAM is basically the log group's events divided based on execution time. By specifying a stream you get the logs for a specific time window rather than everything since inception.
For example, in the case of AWS Lambda, you can check its current log stream with:
LOG_GROUP=log-group
aws logs get-log-events --log-group-name $LOG_GROUP --log-stream-name "$(aws logs describe-log-streams --log-group-name $LOG_GROUP --max-items 1 --order-by LastEventTime --descending --query 'logStreams[].logStreamName' --output text | head -n 1)" --query 'events[].message' --output text
Otherwise, in Python, you can use boto3 to fetch the existing log streams and then call the cloudwatch handler with the respective stream name (a sketch follows the link):
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/logs.html#CloudWatchLogs.Client.describe_log_streams
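A minimal boto3 sketch of that approach; the log group name is a placeholder, and how you pass the stream name back to CloudwatchHandler depends on the constructor shown in the question:
import boto3

logs = boto3.client("logs")

# Find the most recently written stream in the group.
streams = logs.describe_log_streams(
    logGroupName="my_log_group",   # placeholder
    orderBy="LastEventTime",
    descending=True,
    limit=1,
)["logStreams"]

latest_stream = streams[0]["logStreamName"]
# Use latest_stream as the AWS_LOG_STREAM value when constructing CloudwatchHandler.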

Boto3 - Wait until AWS Database Migration Task is deleted

Requirement: Delete DMS Task, DMS Endpoints and Replication Instance.
Using: a Boto3 Python script in Lambda
My Approach:
1. Delete the Database Migration Task first, as the Endpoint and Replication Instance can't be deleted before it.
2. Delete Endpoints
3. Delete Replication Instance
Issue: When I run these 3 delete commands, I get the following error:
"errorMessage": "An error occurred (InvalidResourceStateFault) when calling the DeleteEndpoint operation:Endpoint arn:aws:dms:us-east-1:XXXXXXXXXXXXXX:endpoint:XXXXXXXXXXXXXXXXXXXXXX is part of one or more ReplicationTasks.
Here I know that the database migration task will take some time to delete, so until then the endpoint is still attached to the task and can't be deleted.
There is an AWS CLI command to check whether the task is deleted or not - replication-task-deleted.
I can run this in a shell and wait (sleep) until I get the final status, and then execute the delete-endpoint script.
There is no equivalent command in the Boto3 DMS docs.
Is there any other Boto3 call I can use to check the status and make my Python script sleep until then?
Please let me know if I can approach the issue in a different way.
You need to use waiters. In your case, Waiter.ReplicationTaskDeleted:
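A minimal sketch of using that waiter with the DMS client; the task ARN is a placeholder:
import boto3

dms = boto3.client("dms")
task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"  # placeholder ARN

# Kick off the deletion, then block until the task is actually gone.
dms.delete_replication_task(ReplicationTaskArn=task_arn)

waiter = dms.get_waiter("replication_task_deleted")
waiter.wait(Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}])

# Only now is it safe to delete the endpoints and the replication instance.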
