Convert Flask-based API to AWS Lambda - python

I currently have an AWS EC2 instance exposing a Flask API with blueprints running different things on different ports. I am wondering which is the best solution architecture-wise to convert the endpoint to Lambda. For instance, should I remove the blueprints? If so, how can I call the different functionalities on the different ports?

Here is a Python package that you can use to deploy your Flask application to AWS Lambda with minimal configuration:
https://github.com/Miserlou/Zappa
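Zappa wraps the whole WSGI app (blueprints included) into a single Lambda function behind one API Gateway stage, so in general you do not need to remove the blueprints; the separate ports simply become URL paths on the same endpoint. A rough sketch of the workflow (the stage name, project name, bucket, and module path below are illustrative):
pip install zappa
zappa init        # generates zappa_settings.json interactively
zappa deploy production
And an illustrative zappa_settings.json:
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "project_name": "my-flask-api",
        "runtime": "python3.9",
        "s3_bucket": "zappa-deployments-example"
    }
}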

Related

Streamlit on AWS: serverless options?

My goal is to deploy a Streamlit application to an AWS Serverless architecture. Streamlit does not appear to function properly without a Docker container, so the architecture would need to support containers.
From various tutorials, EC2 is the most popular deployment option for Streamlit, which I have no interest in pursuing due to the server management aspect.
AWS Lambda would be my preferred deployment option if viable. I see that Lambda can support containers, but I'm curious what the pros & cons of Lambda vs Fargate are for containerized apps.
My question is: Is Lambda or Fargate better for a serverless deployment of a Streamlit web app?
AWS Lambda:
AWS Lambda can run containers, but those containers have to implement the Lambda runtime API. Lambda can't run any generic container.
Lambda has a maximum run time (for processing a single request) of 15 minutes. Behind API Gateway, that maximum is effectively cut down to the 29-second integration timeout.
Lambda isn't running 24/7. Your container would only be started up when a request comes in that the Lambda function needs to process.
Given the nature of how Lambda works, something has to sit in front of Lambda to receive the web requests and route them to the AWS Lambda API. Most commonly this would be AWS API Gateway. So you would have to setup an AWS API Gateway deployment that understands how to route all of your apps API requests to your Lambda function(s). Alternatively you could put an Application Load Balancer in front of your Lambda function.
Fargate (or more appropriately titled "ECS services with Fargate deployments"):
Runs your container 24/7 like a more traditional web server.
Can run pretty much any container.
No time limit on the time to process a single request, although there is a maximum value of 4000 seconds (over 60 minutes) on the load balancer that you would typically use in this configuration.
So in general, it's much easier to take a third-party app or docker image and get it running in ECS/Fargate than it is to get it running in Lambda.
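To illustrate why a generic container doesn't just work on Lambda: a Lambda container image has to hand control to a handler per invocation via the runtime API, rather than start its own server and listen on a port the way Streamlit does. A minimal sketch of that handler contract, with illustrative names:
# handler.py - the entry point a Lambda container image exposes through the
# runtime API (for example by building on an AWS-provided Python base image).
# Contrast this with a Streamlit process that runs its own server loop 24/7.
import json

def handler(event, context):
    # One invocation == one request; there is no long-lived server process here.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "handled a single request"}),
    }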

Is FastAPI/Flask still required with AWS Lambda Function URLs? What might I be missing by not using a web framework for Python?

I am building a Python web application. My first thought was to use FastAPI (which I actually like a lot), but then I realized AWS Lambda now allows API endpoints directly. What advantages do Lambdas have over FastAPI, and what might I be missing by not using a web development framework?
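To make the trade-off concrete, here is a rough sketch of the two styles. Route names and event fields are illustrative, and Mangum is just one commonly used ASGI adapter, not something the question prescribes:
# Option 1: bare Lambda behind a Function URL - routing and validation are yours.
import json

def lambda_handler(event, context):
    path = event.get("rawPath", "")
    if path.startswith("/items/"):
        item_id = path.rsplit("/", 1)[-1]
        return {"statusCode": 200, "body": json.dumps({"item_id": item_id})}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

# Option 2: keep FastAPI and adapt it to Lambda with an ASGI adapter.
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int):
    # FastAPI gives you routing, validation, and docs for free
    return {"item_id": item_id}

handler = Mangum(app)  # one Lambda serves every route the framework defines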

How to list all deployed AWS resources with AWS CDK

I have an AWS CDK deployer application that deploys multiple resources in AWS. The application uses a config file that acts as an input and, based on it, deploys multiple AWS ECS tasks in a Fargate cluster and places them behind an Application Load Balancer.
Is there any way to get all the components/AWS services that are being deployed when I run cdk deploy --all? I'm trying to understand, without using a separate boto3 function, whether there is any way that CDK itself provides.
After synth, from the cdk.out CloudAssembly:
import aws_cdk.cx_api as cx_api

# Load the cloud assembly that `cdk synth` wrote to cdk.out
cloud_assembly = cx_api.CloudAssembly("cdk.out")
# Collect the CloudFormation "Resources" section of every synthesized stack
resources = [stack.template["Resources"] for stack in cloud_assembly.stacks]
After deploy, with the DescribeStackResources or ListStackResources APIs:
aws cloudformation describe-stack-resources --stack-name MyStack
Both return lists of CloudFormation Resource information, but with different content. The CloudAssembly resources are from the local template generated by cdk synth. The Resources returned from boto3 or the CLI are the deployed cloud resources with resolved names.
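The boto3 equivalent of the CLI call above, as a small sketch (the stack name is illustrative):
import boto3

cfn = boto3.client("cloudformation")
# Returns the deployed resources with their resolved physical IDs
response = cfn.describe_stack_resources(StackName="MyStack")
for res in response["StackResources"]:
    print(res["LogicalResourceId"], res["ResourceType"], res.get("PhysicalResourceId"))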

Serverless AWS Lambdas in Docker for On-Premises Deployment

I have been searching for this for a few days and found some approaches like Serverless or LocalStack, but what I would really like is to be able to code everything using AWS API Gateway and Lambdas for a cloud-based version of my software (which is solved) and not manage my deployments.
Then...
A customer wants to host a copy of it inside its own private network, so... I want to use the very same Lambda code (which makes no use of other AWS 'magic' services like DynamoDB, only "regular" dependencies), injecting it into a container running "an API Gateway"-like piece of software (perhaps a Python/Flask layer parsing the exported API Gateway config?).
I am willing to build this layer unless a better idea shows up. So I would be able to put my Lambdas in a folder, let's say "aws_lambda", and my container would know how to transform the HTTP payload into an AWS event payload, import the module, call 'lambda_handler'... and hopefully that is it. With another container running MySQL and another running Nginx (emulating CloudFront for the static website), I would be done. The whole solution in a can.
Any suggestions? Am I crazy?
Does anyone know some existing software solution to solve this?
If you are willing to use AWS SAM, the AWS SAM CLI offers what you're looking for.
The AWS SAM CLI implements its own API Gateway equivalent and runs the AWS Lambda functions in Docker containers. While it's mainly intended for testing, I don't see any reason, why you shouldn't be able to use it for your use-case as well.
Besides the various Serverless plugins and LocalStack, you can try the AWS SAM CLI to run a local API Gateway. The command is sam local start-api (https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html). It probably would not scale, I have never tried it myself, and it is intended for testing.
Curiously, what you are considering (transforming a Lambda into a normal Flask server) is the opposite of Zappa, the serverless package that converts a normal Flask server into a Lambda function and uploads it to AWS. If you succeed in your original idea of converting a Flask request into a Lambda event and care to package your code, it could be called 'unzappa'. While Zappa is a mature and large package, it would probably be easier to 'invert' some lightweight thing like awsgi: https://github.com/slank/awsgi/blob/master/awsgi/init.py
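A rough sketch of that 'inversion', assuming your handlers live in a package such as aws_lambda and follow the usual lambda_handler(event, context) signature; the module name, event fields, and routing below are illustrative:
# Hypothetical shim: turn an incoming Flask request into an
# API-Gateway-style event and hand it to an existing lambda_handler.
from flask import Flask, Response, request
from aws_lambda import my_function  # illustrative module containing lambda_handler

app = Flask(__name__)

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # Build a minimal event resembling the API Gateway proxy format
    event = {
        "path": "/" + path,
        "httpMethod": request.method,
        "headers": dict(request.headers),
        "queryStringParameters": request.args.to_dict(),
        "body": request.get_data(as_text=True),
    }
    result = my_function.lambda_handler(event, None)  # Lambda context not emulated
    return Response(
        result.get("body", ""),
        status=result.get("statusCode", 200),
        headers=result.get("headers", {}),
    )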
#Lovato, I have been using https://github.com/lambci/docker-lambda, which is a Docker image that mimics the Lambda environment. lambci seems to maintain good versions of the Lambda images for Node.js, Java, Python, .NET and even Go. So you can technically reuse your entire Lambda code in a Docker container running a Lambda-"like" environment. I call it Lambda-like mostly because AWS doesn't fully publish every piece of information about how Lambda works, but this is the nearest approximation I've seen. I use this for local development and testing of Lambda, and I have tested a trial "offline" Lambda. Let me know if this works for you.
I prefer to use the Dockerfiles and build my own Docker images for my use.

Running a function on EC2 through API Gateway

I want to link Amazon API Gateway to a function in my EC2 instance but am finding little online about how to do this.
Currently I have set up the API call as follows:
Can anyone shed any light on how I could connect the API call to my Python function called 'test.py' in the root folder of my EC2 instance?
I suppose you might be able to do this with the AWS Run Command service, but it is a weird way of doing things. The AWS Service Proxy proxies the AWS API, so telling it to proxy the AWS EC2 service exposes the AWS API to manage EC2 instances. Managing EC2 instances includes things like creating and deleting servers; it does not include initiating an SSH connection to the server, logging into the server, and then running a command on the server.
The standard way to run a script on a server via API Gateway is to expose that script via a web server on the EC2 server, and then have API Gateway hit the appropriate URL.
API Gateway cannot directly execute a Python function sitting on the file system of your EC2 instance. API Gateway can only interact with EC2 instances via http/https endpoints. If you must run your Python function on an EC2 instance, then you'll need to run a web server or application server on your EC2 instance and set it up to execute your Python function when it gets a request on a specific path. Then set up your API Gateway HTTP integration endpoint to use that path.
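As a minimal sketch of that setup, assuming test.py exposes a function you can import (the function name and route are illustrative), a small Flask app on the instance could look like this:
# Hypothetical wrapper: expose the function from test.py over HTTP so that
# API Gateway can reach it through an http/https integration.
from flask import Flask, jsonify
import test  # the existing test.py sitting next to this wrapper

app = Flask(__name__)

@app.route("/run-test", methods=["POST"])
def run_test():
    result = test.main()  # illustrative entry point inside test.py
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)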
If you just need to execute this Python function and don't necessarily need it to run on this EC2 instance, then you could set up a Lambda function containing your Python function and set up your API Gateway to call that Lambda function. Using the Lambda approach means that you don't need to manage the EC2 instance, and for low-volume use cases Lambda can be much more cost effective than running a dedicated EC2 instance.
You can do it by invoking the Systems Manager "Send Command" action from the API Gateway integration request. The EC2 instance has to be managed by SSM, and an instance role has to be associated with your EC2 instance.
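For reference, the Systems Manager call that such an integration ultimately triggers looks roughly like this in boto3 (the instance ID and script path are illustrative):
import boto3

ssm = boto3.client("ssm")
# Run a shell command on an SSM-managed instance; the instance needs the
# SSM agent running and an instance profile that allows Systems Manager.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # illustrative instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["python3 /root/test.py"]},
)
print(response["Command"]["CommandId"])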
