I want to update this Lambda function from Python 3.6 to Python 3.9, and I want to create a test event to make sure everything still works after the upgrade. I'm new to Lambda and this function was created before my time. Any help or a pointer in the right direction would be greatly appreciated.
Below is the function code:
I would recommend that you look at recent CloudWatch Logs for this Lambda function. The code is logging the received event (which can be one of 2 types: Create and Update). Try to find a log for each event type. Then fabricate some test events from those logs.
And don't rely on Lambda function test events because they are not persistent (as you have seen). Create your own 'real' tests and revision-control them like code.
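For example, once you have pulled a real Create event and a real Update event out of the logs, you could save them next to the code and wire them into a small pytest-style check. A minimal sketch, assuming the handler lives in handler.py and the captured events are saved under tests/events/ (both names are placeholders for your layout):

import json

from handler import lambda_handler  # placeholder module; use your function's real entry point

def load_event(path):
    with open(path) as f:
        return json.load(f)

def test_create_event():
    event = load_event("tests/events/create.json")  # copied from CloudWatch Logs
    result = lambda_handler(event, None)
    assert result is not None  # replace with assertions that match your function's real output

def test_update_event():
    event = load_event("tests/events/update.json")
    result = lambda_handler(event, None)
    assert result is not None

Running these under the new Python 3.9 runtime locally (or in CI) gives you a repeatable check that survives console changes.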
I've got a few small Python functions that post to Twitter running on AWS. I'm a novice when it comes to Lambda, knowing only enough to get the functions running.
The functions have environment variables set in Lambda with various bits of configuration, such as post frequency and the secret data for the twitter application. These are read into the python script directly.
It's all triggered by an EventBridge cron job that runs every hour.
I want to create a test event that will allow me to invoke the function manually, but I'd like to be able to change the post frequency variable when running it like this.
Is there a simple way to change environment variables when running a test event?
That is very much possible and there are multiple ways to do it. One is to use AWS CLI's aws lambda update-function-configuration: https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html
Alternatively, depending on the programming language you prefer, you can use an AWS SDK, which has a similar method; you can find an example using the JS SDK in this doc: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_lambda_code_examples.html
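Since your functions are Python, boto3 exposes the same operation. A rough sketch, where the function name and variable values are placeholders; note that Environment replaces the whole variable map, so include everything the function reads, not just the value you are changing:

import boto3

lambda_client = boto3.client("lambda")

# Overwrite the environment before a manual test run.
lambda_client.update_function_configuration(
    FunctionName="my-twitter-poster",      # placeholder name
    Environment={
        "Variables": {
            "POST_FREQUENCY": "1",          # the value you want for this test
            # ...plus the existing Twitter secrets/config the function already uses
        }
    },
)

Keep in mind the change persists after your test, so set the original values back (or script both steps) once you're done.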
We have an AWS Lambda function (Python) which gets automatically triggered on S3 put object.
That is, when someone puts an object into the S3 bucket, the Lambda is triggered.
How do we do performance testing of a Lambda that is event based?
I went through the Gatling tool and tried to write some Scala code that invokes the Lambda function, but we can't use that, because in our case we can't manually invoke the Lambda as it's event based. If we have to invoke it, we have to pass a payload, which is an S3 payload (not sure what that looks like in the case of S3 put object).
We want to do performance/efficiency/load testing of our event-based Lambda function.
The other approach I was thinking of was to write some code in a Gherkin/feature file and calculate the time between the S3 put and the CloudWatch logs, but that doesn't look like performance testing to me, and we can't generate reports from it either.
Any suggestions on this? How can we achieve this particular scenario?
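For what it's worth, you can manually invoke an event-based Lambda as long as you hand it a payload shaped like the event S3 would send; the Lambda console also has an s3-put test event template you can start from. A trimmed sketch of replaying such an event with boto3 (the bucket, key, and function name are placeholders, and real events carry more fields such as eventTime and awsRegion):

import json
import boto3

lambda_client = boto3.client("lambda")

# A trimmed S3 "ObjectCreated:Put" event; the handler receives it under event["Records"].
s3_put_event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/sample.csv", "size": 1024},
            },
        }
    ]
}

response = lambda_client.invoke(
    FunctionName="my-event-based-function",   # placeholder
    Payload=json.dumps(s3_put_event),
)
print(response["StatusCode"], response["Payload"].read())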
When you complete this tutorial https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html you download the AWS SAM CLI and run the commands in order to create a simple AWS hello-world application. Running the program triggers what AWS calls a Lambda function, and at the end of the tutorial you can open it in your browser at http://127.0.0.1:3000/hello; if you see a message there with curly braces and the words 'hello-world', it was successful.
Running the AWS SAM commands generates a lot of boilerplate code, which is a bit confusing. This can all be seen inside a code editor. One of the generated files is called event.json, which of course is a JSON object, but why is it there? What does it represent in relation to this program? I am trying to understand what this AWS SAM application is ultimately doing and what the generated files mean and represent.
Can someone simply break down what AWS SAM is doing and the meaning behind the boilerplate code it generates?
Thank you
event.json contains the input your Lambda function will get, in JSON format. Regardless of how a Lambda is triggered, its handler always has two fixed parameters: event and context. Context contains additional information about the trigger, like its source, while event contains any input parameters that your Lambda needs to run.
You can test this out by yourself by editing the event.json and giving your own values. If you open the lambda code file you will see this event object being used in the lambda_handler.
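For reference, the generated hello-world handler looks roughly like this (trimmed; the exact file contents vary a little between template versions):

import json

def lambda_handler(event, context):
    # event carries the request data (for API Gateway: path, query string, body);
    # context carries runtime details (function name, request id, remaining time).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }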
Other boilerplate is your template (template.yaml), where you can define the configuration of your Lambdas as well as any other services you might use, like layers, a database, or API Gateway.
You also get a requirements.txt file which contains names of any third party libraries that your function requires. These will be packaged along with the code.
Ninad's answer is spot on. I just want to add a practical application of how these JSON files are used. One way the event.json is used is when you are invoking your Lambdas using the command sam local invoke. When you are invoking the Lambda locally, you pass the event.json (or whatever you decide to call the file; you will likely have multiples) as a parameter. As Ninad mentioned, the event file has everything your Lambda needs to run in terms of input. When the Lambdas are hooked up to other services and running live, these inputs would be fed to your Lambda from that service.
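Conceptually, sam local invoke loads that event file and hands it to your handler (inside a Docker container that mimics the Lambda runtime). A stripped-down local equivalent, assuming the default hello-world layout of hello_world/app.py and events/event.json:

import json

from hello_world.app import lambda_handler  # path assumed from the default SAM layout

with open("events/event.json") as f:
    event = json.load(f)

# The simple hello-world handler ignores context, so None is enough here.
print(lambda_handler(event, None))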
I have a Step Function whose first step is to run some Python code in Glue, which generates some data and stores it in an S3 bucket; the next step is to look at this file and run a Lambda function.
I want to check in the Glue Python code whether the generated file is empty or not; if it's not empty, continue to the next step. If it's empty, skip the rest of the steps and have the Step Function succeed. My question is: are there any examples of how to achieve this? Or is it even possible to control Step Function flow like this? (All the AWS resources are managed by Terraform.) Thanks.
Yes, you could very well do this. Try an approach (flowchart and not actual stepfunction) as shown below:
The first Lambda function will run the Glue job for you. Make sure the state machine knows the output location, ideally by passing the required data to the Glue job via the invoking Lambda function.
The second lambda function will check if the file has data and return a status, say for example {'data-availability-status': True} or {'data-availability-status': False}
The third state is a Choice state that acts on 'data-availability-status' from the Check File state. If the status is True, it should go on to the next steps, and if the status is False, it should gracefully exit.
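A minimal sketch of what the Check File Lambda could look like, assuming the previous state passes the bucket and key in its input (those field names are placeholders):

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Look at the object's metadata only; no need to download it to know if it's empty.
    head = s3.head_object(Bucket=event["bucket"], Key=event["key"])
    return {"data-availability-status": head["ContentLength"] > 0}

The Choice state then branches on that field in the Lambda's output, routing to the next step when it is True and to a Succeed state when it is False.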
There is a function called record_exception() that you can use if you want to send an error to New Relic.
The question is: how do you record an error when you don't have an exception in hand? I just want to decide somewhere in the code to record an event to New Relic.
To record generic events that are not errors, you can create a custom metric and then use record_custom_metric, like this:
import newrelic.agent
newrelic.agent.record_custom_metric('Custom/MyValue', 42)
Please see the naming reference to make sure you are naming the metric correctly.