I need to script the entire Lambda function creation so the solution can be deployed to various environments through GitHub.
Currently my Lambda function (.py) is in a script, but the S3 trigger was added only through the AWS console.
How do I add the event trigger, either on the S3 bucket or on the Lambda function, through scripting? I am not allowed to use the AWS console, but I still want to use Lambda triggers.
There has to be a way, but I can't find a working solution. Any help is really appreciated.
Serverless Framework is what I use; it makes it simple to build complex services with AWS resources and events. Have a look at https://serverless.com/framework/docs/providers/aws/events/s3/
This is all you need:
functions:
  users:
    handler: mypythonfilename.handler
    events:
      - s3:
          bucket: mybucketname
          event: s3:ObjectCreated:*
It basically builds the CloudFormation template for you and deploys the Lambda with the serverless deploy command.
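If you prefer to script the trigger directly instead of using a framework, boto3 can attach the same notification. A minimal sketch, with a hypothetical function ARN and bucket name (S3 must first be granted permission to invoke the function):

import boto3

# Hypothetical names; substitute your own function ARN and bucket
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:myfunction"
BUCKET = "mybucketname"

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Allow S3 to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="s3-invoke-permission",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::" + BUCKET,
)

# Attach the ObjectCreated trigger to the bucket
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)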
How do I add a configure event if I am uploading the function as a zip?
Normally the configure event option is available as shown below.
You can invoke your Lambda with a configured test event from the AWS console regardless of whether the code is uploaded as an archive or edited inline.
See the dropdown before the Test button; once you have created your event, just press the button as usual.
Why would you need to configure test events when uploading a Lambda? Those are just meant for triggering the Lambda for testing. If you want to add events to the Lambda on deployment, do it like so:
functions:
  createUser: # Function name
    handler: handler.createUser # Reference to file handler.js & exported function 'createUser'
    events: # All events associated with this function
      - http:
          path: users/create
          method: post
I would take a look at Serverless; it makes it very simple to deploy resources to AWS.
The dependencies for my AWS Lambda function were larger than the allowable limits, so I uploaded them to an S3 bucket. I have seen how to use an S3 bucket as an event source for a Lambda function, but I need to use these packages in conjunction with a separate event. The S3 bucket only contains Python modules (numpy, nltk, etc.), not the event data used in the Lambda function.
How can I do this?
Event data will come in from whatever event source you configure. Refer to the docs here for the S3 event source.
As for the dependencies themselves, you will have to zip the whole codebase (code + dependencies) and use that as a deployment package. You can find detailed instructions in the docs; for reference, here are the ones for NodeJS and Python.
Protip: A better way to manage dependencies is to use Lambda Layers. You can create a layer with all your dependencies and then add it to the functions that make use of them. Read more about it here.
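As an illustration, here is a minimal boto3 sketch for publishing a layer from a dependencies zip already uploaded to S3 and attaching it to a function; the bucket, key, layer, and function names are hypothetical:

import boto3

lambda_client = boto3.client("lambda")

# Publish a layer from a dependencies zip previously uploaded to S3;
# for Python, the zip should contain packages under a top-level python/ directory
layer = lambda_client.publish_layer_version(
    LayerName="python-deps",  # hypothetical layer name
    Content={"S3Bucket": "my-deps-bucket", "S3Key": "layers/python-deps.zip"},
    CompatibleRuntimes=["python3.6"],
)

# Attach the new layer version to the function
lambda_client.update_function_configuration(
    FunctionName="my-function",  # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)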
If your dependencies are still above the 512 MB hard limit of AWS Lambda, you may consider using AWS Elastic File System with Lambda.
With this you can essentially attach network storage to your Lambda function. I have personally used it to load huge reference files which are over the limit of Lambda's file storage. For a walkthrough, you can refer to this article by AWS. To pick the conclusion from the article:
EFS for Lambda allows you to share data across function invocations, read large reference data files, and write function output to a persistent and shared store. After configuring EFS, you provide the Lambda function with an access point ARN, allowing you to read and write to this file system. Lambda securely connects the function instances to the EFS mount targets in the same Availability Zone and subnet.
You can read the announcement here.
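Once EFS is configured, the function reads from the local mount path like any other file. A minimal sketch, assuming the access point is mounted at /mnt/data and a hypothetical file name:

import os

# Assumed mount path, set in the function's file system configuration
MOUNT_PATH = "/mnt/data"

def handler(event, context):
    # Large reference files on EFS are read like ordinary local files
    path = os.path.join(MOUNT_PATH, "reference.dat")  # hypothetical file
    with open(path, "rb") as f:
        data = f.read()
    return {"bytes_read": len(data)}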
Edit 1: Added EFS for lambda info.
I created an endpoint in AWS SageMaker and it works well. I created a Lambda function (python3.6) that takes files from S3, invokes the endpoint, and then puts the output in a file in S3.
I wonder if I can create the endpoint at every event (a file uploaded to an S3 bucket) and then delete the endpoint afterwards.
Yes, you can. Use an S3 event notification for object-created events to call a Lambda function that creates the SageMaker endpoint.
This example shows how to make an object-created event trigger a Lambda function:
https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
You can use the Python SDK to create the SageMaker endpoint:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint
But creating the endpoint might be slow, so you may need to wait until it is in service before invoking it.
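A minimal sketch of such a handler, assuming a pre-existing endpoint config with a hypothetical name; note that endpoint creation often takes several minutes, which can collide with Lambda's 15-minute timeout:

import boto3

sm = boto3.client("sagemaker")

def handler(event, context):
    # Hypothetical names; the endpoint config must already exist
    sm.create_endpoint(
        EndpointName="my-endpoint",
        EndpointConfigName="my-endpoint-config",
    )
    # Block until the endpoint is InService (this can take several minutes)
    waiter = sm.get_waiter("endpoint_in_service")
    waiter.wait(EndpointName="my-endpoint")

    # ... invoke the endpoint via the sagemaker-runtime client here ...

    # Tear the endpoint down when done to stop paying for it
    sm.delete_endpoint(EndpointName="my-endpoint")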
I am writing a Lambda function on Amazon AWS Lambda. It accesses the URL of an EC2 instance on which I am running a web REST API. The Lambda function is triggered by Alexa and is coded in Python (python3.x).
Currently, I have hard-coded the URL of the EC2 instance in the Lambda function and successfully run the Alexa skill.
I want the Lambda function to automatically obtain the IP of the EC2 instance, which changes whenever I start the instance. This would ensure that I don't have to go into the code and hard-code the URL each time I start the EC2 instance.
I stumbled upon a similar question on SO, but it was unanswered. However, there was a reply which suggested updating IAM roles. I have already created IAM roles for other purposes before, but I am still not used to them.
Is this possible? Will it require managing of security groups of the EC2 instance?
Do I need to set some permissions/configurations/settings? How can the lambda code achieve this?
Additionally, I pip-installed the requests library on my system and tried uploading a '.zip' file with the structure:
REST.zip/
  requests library folder
  index.py
I am currently using the urllib library.
When I use zip files for my code upload (I currently edit code inline), it can't even access the index.py file to run the code.
You could do it using boto3, but I would advise against that architecture. A better approach would be to use a load balancer (even if you only have one instance), and then use the CNAME record of the load balancer in your application (this will not change for as long as the LB exists).
An even better way, if you have access to your own domain name, would be to create a CNAME record and point it to the address of the load balancer. Then you can happily use the DNS name in your Lambda function without fear that it would ever change.
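For completeness, the boto3 approach advised against above would look roughly like this; a minimal sketch, assuming the instance ID is known and the execution role allows ec2:DescribeInstances:

import boto3

ec2 = boto3.client("ec2")

def get_public_ip(instance_id):
    # Requires ec2:DescribeInstances on the Lambda's execution role
    resp = ec2.describe_instances(InstanceIds=[instance_id])
    instance = resp["Reservations"][0]["Instances"][0]
    return instance.get("PublicIpAddress")  # None if the instance is stopped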
I have a use case where I want to invoke my Lambda function whenever an object has been pushed to S3, and then push this notification to Slack.
I know this is vague, but how can I start doing this? How can I basically achieve it? I need to see the structure.
There are a lot of resources available for both integrations (S3 + Lambda and Lambda + Slack), so if you can put these together you can make it work.
You can use S3 Event Notifications to trigger a lambda function directly:
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
Here are some blueprints to integrate lambda with slack:
https://aws.amazon.com/blogs/aws/new-slack-integration-blueprints-for-aws-lambda/
Good luck!
You can use S3 Event Notifications to trigger the lambda function.
In the bucket's properties, create a new event notification for an event type of s3:ObjectCreated:Put and set the destination to a Lambda function.
Then, for the Lambda function, write code in Python or NodeJS (or whatever you like) to parse the received event and send it to the Slack webhook URL.
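A minimal sketch of such a function in Python, using urllib from the standard library; the webhook URL is a placeholder you would create in your Slack workspace:

import json
import urllib.request

# Placeholder; generate an incoming-webhook URL in your Slack workspace
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def handler(event, context):
    # S3 event notifications deliver one or more records per invocation
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        message = {"text": "New object uploaded: s3://{}/{}".format(bucket, key)}
        req = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps(message).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)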