I created an endpoint in AWS SageMaker and it works well. I also created a Lambda function (Python 3.6) that takes files from S3, invokes the endpoint, and then puts the output in a file in S3.
I wonder if I can create the endpoint on every event (a file uploaded to an S3 bucket) and then delete the endpoint afterwards.
Yes, you can. Use an S3 event notification for object-created events to invoke a Lambda function that creates the SageMaker endpoint.
This example shows how to make an object-created event trigger a Lambda function:
https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
You can use the Python SDK (boto3) to create the SageMaker endpoint:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint
But creating the endpoint can be slow, so you may need to wait for it to become available before invoking it.
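A rough sketch of what such a handler could look like, assuming an endpoint configuration already exists (the endpoint and configuration names below are placeholders), and keeping in mind that waiting for the endpoint to come up can run into Lambda's execution time limit:

import boto3

sagemaker = boto3.client("sagemaker")

# Placeholder names; point these at your own endpoint configuration.
ENDPOINT_NAME = "my-endpoint"
ENDPOINT_CONFIG_NAME = "my-endpoint-config"

def handler(event, context):
    # Create the endpoint from the existing endpoint configuration.
    sagemaker.create_endpoint(
        EndpointName=ENDPOINT_NAME,
        EndpointConfigName=ENDPOINT_CONFIG_NAME,
    )

    # Endpoint creation takes several minutes; block until it is InService.
    waiter = sagemaker.get_waiter("endpoint_in_service")
    waiter.wait(EndpointName=ENDPOINT_NAME)

    # ... invoke the endpoint and write the output back to S3 here ...

    # Tear the endpoint down again once the work is done.
    sagemaker.delete_endpoint(EndpointName=ENDPOINT_NAME)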
I am new to AWS Lambda. I have a requirement to upload a CSV file to S3 through Lambda.
This is a web-based application: from the UI the user uploads the CSV file and submits it. That request should be handled by a Lambda function (Python) that uploads the CSV content to an S3 bucket.
Can anyone help me with this?
If you want to use Lambda to upload to S3 you'll need to use API Gateway and map the endpoint to Lambda.
An easier way is to upload directly to S3, bypassing Lambda. You should use presigned URLs to do the upload.
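A minimal sketch of generating a presigned upload URL with boto3 (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Generate a time-limited URL the browser can PUT the CSV to directly.
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "my-upload-bucket", "Key": "uploads/data.csv"},
        ExpiresIn=300,  # seconds
    )
    return {"statusCode": 200, "body": url}

The UI then uploads the file with an HTTP PUT to that URL, so the file body never has to pass through Lambda at all.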
I need to code the entire Lambda function creation so the solution can be deployed to various environments using GitHub.
Currently my Lambda function (.py) is in a script, but the S3 trigger is added only through the AWS console.
How do I add the event trigger, either on the S3 bucket or on the Lambda function, through scripting? I am not allowed to use the AWS console, but I still want to use Lambda triggers.
There has to be a way but I can't find a working solution. Any help is really appreciated.
Serverless Framework is what I use; it makes it simple to build complex services with AWS resources and events. Have a look at https://serverless.com/framework/docs/providers/aws/events/s3/
This is all you need:
functions:
  users:
    handler: mypythonfilename.handler
    events:
      - s3:
          bucket: mybucketname
          event: s3:ObjectCreated:*
It basically builds the CloudFormation template for you and deploys the Lambda with the serverless deploy command.
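If the Serverless Framework is not an option and you would rather wire the trigger up with a plain boto3 script, a rough sketch might look like the following (the bucket name, function ARN and statement ID are placeholders):

import boto3

# Placeholder identifiers for illustration only.
BUCKET = "mybucketname"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:myfunction"

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Allow S3 to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::" + BUCKET,
)

# Point object-created events on the bucket at the function.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)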
I am looking for a way to set up a Lambda trigger where, if any files are uploaded into the S3 bucket, the trigger will push/copy the file(s) to an SFTP server.
You have to follow these steps:
Log in to the AWS console and open the Lambda service.
Create a new Lambda function (choose a suitable runtime).
Add a trigger (left pane). If you want to trigger your Lambda from an S3 bucket, click on S3.
Configure the trigger (choose your bucket and the trigger action).
After you complete all of these steps it is time to write the handler file; a sketch follows these steps.
Do not forget that every trigger has to be in the same region as the Lambda function.
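A minimal sketch of such a handler, assuming paramiko is packaged with the function (for example via a Lambda layer) and that the SFTP host, credentials and paths below are placeholders:

import boto3
import paramiko

s3 = boto3.client("s3")

def handler(event, context):
    # The S3 event carries the bucket and key of the newly uploaded object.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Download the object to Lambda's temporary storage.
    local_path = "/tmp/" + key.split("/")[-1]
    s3.download_file(bucket, key, local_path)

    # Push the file to the SFTP server (host and credentials are placeholders).
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="user", password="password")
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(local_path, "/upload/" + key.split("/")[-1])
    sftp.close()
    transport.close()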
I have an AWS Lambda function that writes S3 file metadata to DynamoDB for every object created in an S3 bucket; for this I have an event trigger on the S3 bucket. I'm planning to automate testing using Python. Can anyone help with how I can automate the testing of this Lambda function, using the unittest package, for the following:
Verify that the DynamoDB table exists.
Validate whether the bucket used for the event trigger exists in S3.
Verify that the file count in the S3 bucket matches the record count in the DynamoDB table.
This can be done using moto and unittest. What moto will do is add in a stateful mock for AWS - your code can continue calling boto like normal, but calls won't actually be made to AWS. Instead, moto will build up state in memory.
For example, you could:
Activate the mock for DynamoDB
Create a DynamoDB table
Add items to the table
Retrieve items from the table and check that they exist
If you're building functionality for both DynamoDB and S3, you'd leverage both the mock_s3 and mock_dynamodb2 methods from moto.
I wrote up a tutorial on how to do this (it uses pytest instead of unittest but that should be a minor difference). Check it out: joshuaballoch.github.io/testing-lambda-functions/
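A minimal sketch of such a test with unittest, assuming a moto version that provides the mock_s3 and mock_dynamodb2 decorators mentioned above (the region, bucket and table names are placeholders):

import unittest
import boto3
from moto import mock_s3, mock_dynamodb2

REGION = "us-east-1"          # placeholder region
BUCKET = "my-trigger-bucket"  # placeholder bucket name
TABLE = "file-metadata"       # placeholder table name

class LambdaResourceTest(unittest.TestCase):
    @mock_s3
    def test_bucket_exists(self):
        # The bucket is created in moto's in-memory state, not in real AWS.
        s3 = boto3.client("s3", region_name=REGION)
        s3.create_bucket(Bucket=BUCKET)
        names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
        self.assertIn(BUCKET, names)

    @mock_dynamodb2
    def test_table_exists(self):
        dynamodb = boto3.client("dynamodb", region_name=REGION)
        dynamodb.create_table(
            TableName=TABLE,
            KeySchema=[{"AttributeName": "key", "KeyType": "HASH"}],
            AttributeDefinitions=[{"AttributeName": "key", "AttributeType": "S"}],
            ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
        )
        self.assertIn(TABLE, dynamodb.list_tables()["TableNames"])

if __name__ == "__main__":
    unittest.main()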
I have an S3 bucket named 'files'. Every day a new file arrives there. Example:
/files/data-01-23-2017--11-33am.txt
/files/data-01-24-2017--10-28am.txt
How would I make a Lambda function and set a trigger to execute a shell script on EC2 when a new file arrives?
Example of new file is:
/files/data-01-25-2017--11-43am.txt
The command that I would want to execute on EC2 (with the newly arrived file name as a parameter) is:
python /home/ec2-user/jobs/run_job.py data-01-25-2017--11-43am.txt
Amazon S3 can be configured to trigger an AWS Lambda function when a new object is created. However, Lambda functions do not have access to your Amazon EC2 instances, so this is not an appropriate architecture to use.
Some alternative options (these are separate options, not multiple steps):
Instead of running a command on an Amazon EC2 instance, put your code in the Lambda function itself (no EC2 instance required); a sketch is shown after this list. (Best option!)
Configure Amazon S3 to push a message into an Amazon SQS queue. Have your code on the EC2 instance regularly poll the queue. When it receives a message, process the object in S3.
Configure Amazon S3 to send a message to an Amazon SNS topic. Subscribe an endpoint of your application (effectively an API) to the SNS topic, so that it receives a message when a new object has been created.
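For the first option, a minimal sketch of a handler that picks up the new object's key from the event and does the processing in Lambda itself (the processing logic is left as a placeholder for what run_job.py does today):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Each S3 event record carries the bucket and key of the new object.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]  # e.g. data-01-25-2017--11-43am.txt

    # Fetch the new file and process it here, replacing the job that
    # run_job.py used to run on the EC2 instance.
    obj = s3.get_object(Bucket=bucket, Key=key)
    data = obj["Body"].read()
    # ... processing logic goes here ...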