How can I stop AWS lambda from recursive invocations - python

I have a Lambda function that reads an Excel file, does some processing, and then stores the result in a different S3 bucket.
def lambda_handler(event, context):
    try:
        status = int(event['status'])
        if status:
            Reading_Bucket_Name = 'read-data-s3'
            Writing_Bucket_Name = 'write-excel-file-bucket'
            rowDataFile = 'Analyse.xlsx'
            HTMLfileName = 'index.html'
            url = loading_function(Reading_Bucket_Name=Reading_Bucket_Name, Writing_Bucket_Name=Writing_Bucket_Name, rowDataFile=rowDataFile, HTMLfileName=HTMLfileName)
            status = 0
            return {"statusCode": 200, "URL": url}
        else:
            return {"statusCode": 400, "Error": "The code could not be executed"}
    except Exception as e:
        print('#________ An error occurred while reading Status code int(event[status]) ________#')
        print(e)
        raise e
    return None
The code is only supposed to run once: it should return the URL and then the Lambda function should exit.
But the problem is: I get the first output, and then the Lambda function calls itself again! It then goes into the exception branch and runs many more times, because there is no event['status']. That key is only present when I invoke the function with:
{
    "status": "1"
}
How can I stop execution after getting the first output?
Update:
The problem is caused by this code, which uploads a new file to an S3 bucket:
s3_client = boto3.client('s3')
fig.write_html('/tmp/' + HTMLfileName, auto_play=False)
response = s3_client.upload_file('/tmp/' + HTMLfileName, Writing_Bucket_Name, HTMLfileName, ExtraArgs={'ACL':'public-read', 'ContentType':'text/html'})
return True

Given that the Lambda function appears to be running when a new object is created in the Amazon S3 bucket, it would appear that the bucket has been configured with an Event Notification that is triggering the AWS Lambda function.
To check this, go to the bucket in the S3 management console, open the Properties tab, and scroll down to Event notifications. Look for any configured events that trigger a Lambda function.
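If such a notification points back at this function, every object the function writes into a watched bucket re-triggers it. One common defence, as a minimal sketch (assuming the recursive invocations arrive as S3 event records), is to return early when the triggering object is one the function itself produced:
def lambda_handler(event, context):
    # Guard against S3-triggered re-invocation: ignore objects this
    # function writes itself (the generated HTML report).
    for record in event.get('Records', []):
        key = record['s3']['object']['key']
        if key.endswith('.html'):
            print('Ignoring self-generated object:', key)
            return {"statusCode": 200, "skipped": key}
    # ... normal processing as above ...
Alternatively, narrow the Event Notification itself with a key prefix or suffix filter so the output file never matches the trigger.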

Related

How to get rid of "Key" in S3 put event in Lambda test event?

I am trying to load my CSV file from S3 into DynamoDB through a Lambda function. In the first stage I was uploading my .csv file to S3 manually. When uploading the file by hand I know its name, so I put that file name as the key in the test event and it works fine.
I want to automate things, because my .csv files are generated in S3 automatically and I don't know what the name of the next file will be. Someone suggested that I create a trigger in S3 that invokes the Lambda on every file creation. The only issue I am dealing with is what to put in the test event in place of "key", where we are supposed to put the name of the file whose data we want to fetch from S3.
I don't have a file name yet. Here is the Lambda code:
import json
import boto3

s3_client = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
student_table = dynamodb.Table('AgentMetrics')

def lambda_handler(event, context):
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']
    file_name = event['Records'][0]['s3']['object']['key']
    file_object = s3_client.get_object(Bucket=source_bucket_name, Key=file_name)
    print("file_object :", file_object)
    file_content = file_object['Body'].read().decode("utf-8")
    print("file_content :", file_content)
    students = file_content.split("\n")
    print("students :", students)
    for student in students:
        data = student.split(",")
        try:
            student_table.put_item(
                Item={
                    "Agent": data[0],
                    "StartInterval": data[1],
                    "EndInterval": data[2],
                    "Agent idle time": data[3],
                    "Agent on contact time": data[4],
                    "Nonproductive time": data[5],
                    "Online time": data[6],
                    "Lunch Break time": data[7],
                    "Service level 120 seconds": data[8],
                    "After contact work time": data[9],
                    "Contacts handled": data[10],
                    "Contacts queued": data[11]
                })
        except Exception as e:
            print("File Completed")
The error I am facing is:
"errorMessage": "An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.",
"errorType": "NoSuchKey"
Kindly help me here; I am getting frustrated with this issue. I would really appreciate any help, thanks.
As suggested in your question, you have to add a trigger to the S3 bucket for whichever actions you need to track (POST, PUT, or DELETE).
More details are here:
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
Select a Lambda blueprint in either Python or Node.js, whichever you prefer.
Then select the S3 bucket and the action (PUT, POST, DELETE, or all).
Put your code above, which writes the entries to the database, into this Lambda.
The "Test Event" you are using is an example of a message that Amazon S3 will send to your Lambda function when a new object is created in the bucket.
When S3 triggers your AWS Lambda function, it will provide details of the object that trigger the event. Your program will then use the event supplied by S3. It will not use your Test Event.
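For reference, here is a trimmed-down sketch of the event S3 sends; only the fields the code above reads are shown, and the bucket and key values are placeholders that S3 fills in for the uploaded object:
{
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-source-bucket"},
                "object": {"key": "my-generated-file.csv"}
            }
        }
    ]
}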
One more thing...
It is possible that the Lambda function will be triggered with more than one object being passed via the event. Your function should be able to handle this happening. You can do this by adding a for loop:
import urllib.parse
...
def lambda_handler(event, context):
    for record in event['Records']:
        source_bucket_name = record['s3']['bucket']['name']
        file_name = urllib.parse.unquote_plus(record['s3']['object']['key'])
        ...
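One caveat about the loop over students in the question's code: splitting file_content on "\n" yields an empty final element when the file ends with a newline, and data[1] onwards will then raise an IndexError that the bare except silently swallows. A small guard avoids that:
for student in students:
    if not student.strip():
        continue  # skip blank lines, e.g. from a trailing newline
    data = student.split(",")
    # ... put_item as before ...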

How to error handle in lambda function python

I am new to Python. Is there any way I can raise an error in a lambda function? I have a PyQt5 button connected to a lambda function. I would like to catch the error in the lambda function and show it to the user. Is there any way I can bring the message (a string) out of the lambda function and show it to the user, or raise an error inside the lambda function? I have a try/except around the place where the lambda is applied. What I hope is that when the lambda function has an error, the error can be caught outside of the lambda function.
example:
def test1:
    *****
def test2(a):
    ****
try:
    x = lambda(test2(test1))
except Exception as e:
    print(e)  # <<< want it to go here
In your code you only assign a lambda function to a variable, without ever calling it. In addition, the syntax of the lambda is wrong. You could change your code to something like the following:
def test1(x):
    ...

def test2(a):
    ...

a_func = lambda x: test2(test1(x))

try:
    a_func(some_input)
except Exception as e:
    print(e)
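Note that any exception is raised only when a_func is actually called, not when the lambda is defined, which is why the try/except must wrap the call. The same try/except pattern carries over to AWS Lambda handlers; the handler below, for example, wraps RDS start/stop calls in try/except ClientError blocks and returns the error details instead of crashing: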
import os
import boto3
import json
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # Fetch the database identifier environment variable
    DBinstance = os.environ['DBInstanceName']
    # Create an RDS client
    rds_client = boto3.client('rds')
    # Describe the instance
    response = rds_client.describe_db_instances(DBInstanceIdentifier=DBinstance)
    # Check the status of the RDS instance
    status = response['DBInstances'][0]['DBInstanceStatus']
    # If the RDS instance is in the stopped state, it will be started
    if status == "stopped":
        try:
            # Start the RDS instance
            response = rds_client.start_db_instance(DBInstanceIdentifier=DBinstance)
            # Send logs to CloudWatch
            print('Success :: Starting RDS instance:', DBinstance)
            # Logged to the Lambda function execution results
            return {
                'statusCode': 200,
                'message': "Starting RDS instance",
                'body': json.dumps(response, default=str)
            }
        except ClientError as e:
            # Send logs to CloudWatch
            print(e)
            message = e
            # Logged to the Lambda function execution results
            return {
                'message': "Script execution completed. See CloudWatch logs for complete output, but instance starting failed",
                'body': json.dumps(message, default=str)
            }
    # If the RDS instance is in the running state, it will be stopped
    elif status == "available":
        try:
            # Stop the RDS instance
            response = rds_client.stop_db_instance(DBInstanceIdentifier=DBinstance)
            # Send logs to CloudWatch
            print('Success :: Stopping RDS instance:', DBinstance)
            # Logged to the Lambda function execution results
            return {
                'statusCode': 200,
                'message': "Stopping RDS instance",
                'body': json.dumps(response, default=str)
            }
        except ClientError as e:
            # Send logs to CloudWatch
            print(e)
            message = e
            # Logged to the Lambda function execution results
            return {
                'message': "Script execution completed. See CloudWatch logs for complete output, but instance stopping failed",
                'body': json.dumps(message, default=str)
            }
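Because the failure branches return a normal response rather than re-raising, Lambda treats the invocation as successful and does not retry it; the error details still appear in the CloudWatch logs and in the function's execution results.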

AWS Lambda validation function - how to debug or print variables?

I'm completely new to AWS Lex and AWS Lambda, so this is a beginner question.
I'm trying to create a bot that helps a user choose bikes. The first question is what type of bike the user needs, and the answers are men/women/unisex/kids.
I'm trying to write a Lambda validation function so that the bot tells the user when the choice entered is none of those mentioned above.
The error I receive is:
An error has occurred: Invalid Lambda Response: Received error response from Lambda (LambdaRequestId: 471e1df7-034c-46e7-9154-23de79cb16cf; Error: Unhandled)
And this is the code:
import json

def get_slots(intent_request):
    return intent_request['currentIntent']['slots']

def valid_bike_type(intent_request):
    bike_type = get_slots(intent_request)['BikeType']
    bike_types_lst = ['men', 'women', 'unisex', 'kids']
    if bike_type is not None and bike_type.lower() not in bike_types_lst:
        return build_validation_result(False, 'BikeType', 'We only have bikes for Men/Women/Unisex/Kids.')
    return build_validation_result(True, None, None)

def dispatch(intent_request):
    intent_name = intent_request['currentIntent']['name']
    if intent_name == 'WelcomeIntent':
        return valid_bike_type(intent_request)
    raise Exception('Intent with name ' + intent_name + ' not supported.')

def lambda_handler(event, context):
    return dispatch(event)
Thank you very much for any help!
You can just use print as you would usually do on your workstation, e.g.:
def lambda_handler(event, context):
    print(event)
    return dispatch(event)
The output of print, or of any other function that writes to stdout, will be in CloudWatch Logs, as explained in the AWS docs:
Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
For Lex, you can also check the documentation about Lambda's input and output format, or check the available blueprints.
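Note also that the posted code calls build_validation_result without defining it, and the resulting NameError alone would surface as the Unhandled error. A minimal sketch of that helper, modeled on the Lex V1 blueprints (the exact response shape here is an assumption), would be:
def build_validation_result(is_valid, violated_slot, message_content):
    # Response shape assumed from the Lex V1 blueprints
    if message_content is None:
        return {'isValid': is_valid, 'violatedSlot': violated_slot}
    return {
        'isValid': is_valid,
        'violatedSlot': violated_slot,
        'message': {'contentType': 'PlainText', 'content': message_content}
    }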
I found some information in the AWS docs about unhandled errors. I would first check the Lambda settings: Memory and Timeout (the default timeout is only 3 seconds).
You can edit these settings in the AWS console, in the Lambda configuration, under 'Basic settings'.

AWS Lambda: CodePipeline job never ends, even after returning success

I am working on executing AWS Lambda code from CodePipeline. I have given the Lambda role full access to EC2 and CodeDeploy. The commands generally work when I am not triggering them from CodePipeline. When triggered from CodePipeline, they just keep on running, even though success is sent. What am I doing wrong?
Code :
import boto3
import json

def lambda_handler(event, context):
    reservations = boto3.client('ec2').describe_instances()['Reservations']
    instances_list = []
    process_instance_list = []
    command = 'COMMAND TO EXECUTE ON SERVER'
    ssm = boto3.client('ssm')
    for res in reservations:
        instances = res['Instances']
        for inst in res['Instances']:
            for tag in inst['Tags']:
                # print("Tag value is {}".format(tag['Value']))
                if tag['Value'] == 'Ubuntu_Magento':
                    print("{} {} {}".format(tag['Value'], inst['InstanceId'], inst['LaunchTime']))
                    instances_list.append(inst)
    instances_list.sort(key=lambda x: x['LaunchTime'])
    instance_id = instances_list[0]['InstanceId']
    ssmresponse = ssm.send_command(InstanceIds=[instance_id], DocumentName='AWS-RunShellScript', Parameters={'commands': [command]})
    code_pipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']
    code_pipeline.put_job_success_result(jobId=job_id)
A Lambda function has a maximum lifespan of 15 minutes, after which it exits no matter what. I think it has something to do with the way you are triggering it.
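A Lambda invoked by CodePipeline also leaves the job "in progress" until the function reports a result, so any unhandled exception before put_job_success_result leaves the pipeline waiting until the action times out. A common pattern, sketched here minimally, is to report failure explicitly as well:
import boto3

code_pipeline = boto3.client('codepipeline')

def lambda_handler(event, context):
    job_id = event['CodePipeline.job']['id']
    try:
        # ... do the actual work here ...
        code_pipeline.put_job_success_result(jobId=job_id)
    except Exception as e:
        # Tell CodePipeline the job failed so it stops waiting
        code_pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(e)}
        )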

AWS Lambda/SNS Publish ignore invalid endpoints

I'm sending Apple push notifications via AWS SNS from Lambda with Boto3 and Python.
from __future__ import print_function
import boto3

def lambda_handler(event, context):
    client = boto3.client('sns')
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            rec = record['dynamodb']['NewImage']
            competitors = rec['competitors']['L']
            for competitor in competitors:
                if competitor['M']['confirmed']['BOOL'] == False:
                    endpoints = competitor['M']['endpoints']['L']
                    for endpoint in endpoints:
                        print(endpoint['S'])
                        response = client.publish(
                            # TopicArn='string',
                            TargetArn=endpoint['S'],
                            Message='test message'
                            # Subject='string',
                            # MessageStructure='string',
                        )
Everything works fine! But when an endpoint is invalid for some reason (at the moment this happens every time I run a development build on my device, since I then get a different endpoint, which will be either not found or deactivated), the Lambda function fails and gets called all over again. In this particular case, if for example the second endpoint fails, it will send the push over and over again to endpoint 1, to infinity.
Is it possible to ignore invalid endpoints and just keep going with the function?
Thank you
Edit:
Thanks to your help I was able to solve it with:
try:
    response = client.publish(
        # TopicArn='string',
        TargetArn=endpoint['S'],
        Message='test message'
        # Subject='string',
        # MessageStructure='string',
    )
except Exception as e:
    print(e)
    continue
On failure, AWS Lambda retries the function until the event expires from the stream.
In your case, since the exception on the 2nd endpoint is not handled, the retry mechanism causes the publish to the first endpoint to be re-executed.
If you handle the exception and ensure the function ends successfully even when there is a failure, the retries will not happen.
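Catching the broad Exception works, but it can be narrowed to the SNS error codes that indicate a dead endpoint, so that anything unexpected still fails the function. A sketch (the exact set of codes worth swallowing is an assumption):
from botocore.exceptions import ClientError

for endpoint in endpoints:
    try:
        response = client.publish(TargetArn=endpoint['S'], Message='test message')
    except ClientError as e:
        code = e.response['Error']['Code']
        if code in ('EndpointDisabled', 'InvalidParameter'):
            # Dead or stale device token: log it and move on
            print('Skipping endpoint', endpoint['S'], ':', code)
            continue
        raise  # anything else is a real problem; let Lambda see it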
