I have been searching for this for a few days and found some approaches like Serverless or LocalStack, but what I would really like is to be able to code everything using AWS API Gateway and Lambdas for a cloud-based version of my software (which is solved) and not manage my deployments.
Then...
A customer wants to host a copy of it inside their own private network, so... I want to use the very same Lambda code (which makes no use of other AWS 'magic' services like DynamoDB, only "regular" dependencies), injecting it into a container running an "API Gateway"-like piece of software (perhaps a Python/Flask app parsing the exported API Gateway config?).
I am willing to build this layer unless a better idea shows up. I would put my Lambdas in a folder, let's say "aws_lambda", and my container would know how to transform the HTTP payload into an AWS event payload, import the module, and call 'lambda_handler'... and hopefully that's it. With another container running MySQL and another running Nginx (emulating CloudFront for the static website), I would be done. The whole solution in a can.
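A minimal sketch of that translation layer might look like the following. This assumes the handlers follow the API Gateway proxy-integration contract (event with `httpMethod`, `path`, etc.; response dict with `statusCode` and `body`); the `http_to_lambda` helper and the sample handler are hypothetical names, not an existing library.

```python
import json

def http_to_lambda(handler, method, path, headers=None, body=None, query=None):
    """Build an API Gateway proxy-style event from a plain HTTP request,
    invoke the Lambda handler, and unpack the proxy-style response.
    Only the event fields most handlers actually read are filled in."""
    event = {
        "httpMethod": method,
        "path": path,
        "headers": headers or {},
        "queryStringParameters": query or None,
        "body": body,
        "isBase64Encoded": False,
    }
    # A real Lambda runtime passes a context object; many handlers ignore it.
    result = handler(event, None)
    return result["statusCode"], result.get("headers", {}), result.get("body", "")

# Example handler, as it would live in the "aws_lambda" folder:
def lambda_handler(event, context):
    return {"statusCode": 200, "body": json.dumps({"path": event["path"]})}

status, _, body = http_to_lambda(lambda_handler, "GET", "/hello")
```

A Flask (or any WSGI) front end would then only need to build the `event` dict from its own request object and map the returned triple back onto an HTTP response.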
Any suggestions? Am I crazy?
Does anyone know some existing software solution to solve this?
If you are willing to use AWS SAM, the AWS SAM CLI offers what you're looking for.
The AWS SAM CLI implements its own API Gateway equivalent and runs the AWS Lambda functions in Docker containers. While it's mainly intended for testing, I don't see any reason why you shouldn't be able to use it for your use case as well.
Besides the various serverless plugins and LocalStack, you can try the AWS SAM CLI to run a local API Gateway. The command is start-api: https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html. It probably would not scale; I have never tried it myself, and it is intended for testing.
Curiously, what you are considering doing (transforming a Lambda into a normal Flask server) is the opposite of Zappa, a serverless package that converts a normal Flask server into a Lambda function and uploads it to AWS. If you succeed in your original idea of converting a Flask request into a Lambda event and care to package your code, it could be called unzappa. While Zappa is a mature and large package, it would probably be easier to 'invert' some lightweight thing like awsgi: https://github.com/slank/awsgi/blob/master/awsgi/__init__.py
@Lovato, I have been using https://github.com/lambci/docker-lambda, a Docker image that mimics the Lambda environment. lambci seems to be maintaining good versions of the Lambda images for Node.js, Java, Python, .NET, and even Go. So you can technically reuse your entire Lambda code in a container running a Lambda-"like" environment. I call it Lambda-like mostly because AWS doesn't fully publish every piece of information about how Lambda works, but this is the nearest approximation I've seen. I use it for local development and testing of Lambdas, and I have tested a trial "offline" Lambda. Let me know if this works for you.
I prefer to use the Dockerfiles and create my own Docker images for my use.
Is it practically possible to simulate AWS environment locally using Moto and Python?
I want to write an AWS Glue job that will fetch records from my local database and upload them to an S3 bucket for a data quality check, and later trigger a Lambda function for a cron job run, using the Moto library's moto.mock_glue decorator. Any suggestion or document would be highly appreciated, as I don't see much of a clue on this. Thank you in advance.
AFAIK, moto is meant to patch boto modules for testing.
I have experience working with LocalStack, a Docker container you can run locally that acts as a live service emulator for most AWS services (some are only available to paying users).
https://docs.localstack.cloud/getting-started/
You can see here which services are supported by the free version.
https://docs.localstack.cloud/user-guide/aws/feature-coverage/
In order to use it, you need to change the endpoint URL to point to the local service running in Docker.
As it's a Docker container, you can incorporate it into remote tests as well, e.g., if you're using k8s or a similar orchestrator.
I currently have an AWS EC2 instance exposing a Flask API with blueprints running different things on different ports. I am wondering which is the best solution architecture-wise to convert the endpoints to Lambda. For instance, should I remove the blueprints? If so, how can I call the different functionalities on the different ports?
Here is a Python package that you can use to deploy your Flask application to AWS Lambda with minimal configuration:
https://github.com/Miserlou/Zappa
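A minimal `zappa_settings.json` for a Flask app might look like this (the module path `app.app` and the bucket name are placeholders for your own project):

```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "s3_bucket": "my-zappa-deployment-bucket"
    }
}
```

After that, `zappa deploy production` packages the app and wires up API Gateway for you.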
I need to run some Python code on the AWS platform periodically (probably once a day). The program's job is to connect to S3, download some files from a bucket, do some calculations, and upload the results back to S3. The program runs for about 1 hour, so I cannot use a Lambda function, as Lambda has a maximum execution time of 900 s (15 mins).
I am considering using EC2 for this task. I am planning to set the Python code up as a startup task and execute it as soon as the EC2 instance is powered on, then shut the instance down once the task is complete. The periodic restart of this EC2 instance would be handled by a Lambda function.
Though this is not the best approach, I want to know of any alternatives within the AWS platform (services other than EC2) that would be best for this job.
If you are looking for solutions other than Lambda and EC2 (which, depending on the scenario, can be a fit), you could use ECS (Fargate).
It's a great choice for microservices or small tasks. You build a Docker image with your code (Python, Node, etc.), tag it, and push the image to AWS ECR. Then you build a cluster for it and schedule the task with CloudWatch, or you can launch a task directly using the CLI or another AWS resource.
You don't have time limitations like Lambda.
You also don't have to set up an instance, because your dependencies are managed by the Dockerfile.
And, if needed, you can take advantage of the EBS volume attached to ECS (20-30 GB root) and increase it from there, with the possibility of using EFS for tasks as well.
I could point to other solutions, but they are way too complex for the task you are planning, and the goal is always to use the right service for the job.
Hopefully this could help!
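For illustration, the Dockerfile for such a task could be as small as this (the `task.py` entry point and `requirements.txt` are assumed names from your own project):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY task.py .
CMD ["python", "task.py"]
```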
Using EC2 or Fargate may be significant overkill. Creating a simple AWS Glue job triggered by a Lambda function (running once per day) to do this (pull from S3, open the selected files if required, do some calculations on their contents, then push the results back to S3) using Python and the AWS boto3 library (and other standard Python file-reading libraries if necessary) is most likely your easiest route.
See this SO question for an example and solution.
Good luck!
I have a currently working set of AWS lambda functions.
I want these functions to be hosted in a GitHub repo and to use GitHub Actions to automatically deploy to AWS Lambda when I push a new version.
Now I would like different behaviour for the master branch compared to the develop branch. In particular, pushing to develop should somehow deploy the zipped AWS Lambda functions to a 'no-prod' setup. I seem to understand this should be achieved through Lambda function versions, but I could not find a definitive answer in the docs or on the internet for such a setup.
Does anybody know how to achieve this?
So I'm currently working on building the deployer for our AWS Lambda functions.
Since AWS Lambda versions all share a configuration, this requires having multiple functions (foo_prod, foo_staging, foo_whatever) as the various versions of our code, instead of using aliases like I want to do.
So my question is:
1) Whether there's a sane way to re-deploy code (i.e., staging to prod) without downloading it to my desktop first and then re-uploading.
2) Whether I'm wrong about that shared-configuration bit, or whether it's possible to tell which alias the function is running under inside the actual Lambda, so that I can create multiple environment variables for each environment.
You can deploy Lambda functions in a lot of different ways that don't involve downloading and re-uploading code. If you use something like SAM (http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-use-app-spec.html), you can just point to an S3 bucket that holds your code and build functions from that. You can also hook CloudFormation up to a git repository like GitHub or AWS CodeCommit and have it automatically update your functions when you push commits to the repository. And there are other systems like Serverless (https://serverless.com) that abstract and automate deploys in repeatable and manageable ways.
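As a sketch, the SAM resource pointing at code in S3 looks roughly like this (bucket, key, handler, and logical names are placeholders):

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.11
      CodeUri: s3://my-deploy-bucket/function.zip
```

Re-deploying staging to prod then becomes a matter of pointing `CodeUri` at the already-uploaded artifact rather than re-uploading it.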
The Lambda's version is available in the context object. You should be able to tell which alias was called by looking at the ARN; ARNs have the alias as a suffix, such as:
arn:aws:lambda:aws-region:acct-id:function:helloworld:PROD
Info here: http://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html
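A small sketch of pulling the alias out of the invoked ARN (the helper name is made up; inside a handler you would pass it `context.invoked_function_arn`):

```python
def alias_from_arn(invoked_function_arn):
    """Return the alias (or version) qualifier if the function was invoked
    through one, else None. Based on the documented ARN layout:
    arn:aws:lambda:region:acct-id:function:name[:alias-or-version]"""
    parts = invoked_function_arn.split(":")
    # An unqualified function ARN has 7 colon-separated parts; a qualifier adds an 8th.
    return parts[7] if len(parts) == 8 else None
```

Your handler can then use the returned alias (e.g. "PROD" or "STAGING") to pick the right set of environment variables.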