Streamlit on AWS: serverless options? [closed] - python

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed last month.
My goal is to deploy a Streamlit application to an AWS Serverless architecture. Streamlit does not appear to function properly without a Docker container, so the architecture would need to support containers.
From various tutorials, EC2 is the most popular deployment option for Streamlit, which I have no interest in pursuing due to the server management aspect.
AWS Lambda would be my preferred deployment option if viable. I see that Lambda can support containers, but I'm curious what the pros & cons of Lambda vs Fargate are for containerized apps.
My question is: Is Lambda or Fargate better for a serverless deployment of a Streamlit web app?

AWS Lambda:
AWS Lambda can run containers, but those containers have to implement the Lambda runtime API. Lambda can't run any generic container.
Lambda has a maximum run time (for processing a single request) of 15 minutes. Behind API Gateway, that maximum is effectively reduced to the gateway's 29-second integration timeout.
Lambda isn't running 24/7. Your container would only be started up when a request comes in that the Lambda function needs to process.
Given the nature of how Lambda works, something has to sit in front of Lambda to receive the web requests and route them to the AWS Lambda API. Most commonly this would be AWS API Gateway, so you would have to set up an API Gateway deployment that understands how to route all of your app's API requests to your Lambda function(s). Alternatively, you could put an Application Load Balancer in front of your Lambda function.
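To make that concrete, here is a minimal sketch of the entry point a Lambda container image could expose, assuming the AWS-provided Python base image (public.ecr.aws/lambda/python:3.12), which already implements the runtime API for you; the event fields shown follow the API Gateway HTTP API payload:

```python
import json

def lambda_handler(event, context):
    """Entry point invoked by the Lambda runtime for each request.

    Behind API Gateway, `event` carries the HTTP request; the dict
    returned here is translated back into an HTTP response.
    """
    path = event.get("rawPath", "/")  # request path from the HTTP API payload
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"handled {path}"}),
    }
```

A generic container (such as a stock Streamlit image) has no such handler, which is exactly why it cannot run on Lambda unmodified.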
Fargate (or more appropriately titled "ECS services with Fargate deployments"):
Runs your container 24/7 like a more traditional web server.
Can run pretty much any container.
No limit on the time to process a single request, although there is a maximum idle timeout of 4000 seconds (about 66 minutes) on the load balancer that you would typically use in this configuration.
So in general, it's much easier to take a third-party app or docker image and get it running in ECS/Fargate than it is to get it running in Lambda.

Related

Convert Flask-based API to AWS Lambda

I currently have an AWS EC2 instance exposing a Flask API with blueprints running different things on different ports. I am wondering which is the best solution, architecture-wise, for converting the endpoints to Lambda. For instance, should I remove the blueprints? If so, how can I call the different functionalities on the different ports?
Here is a Python package that you can use to deploy your Flask application to AWS Lambda with minimal configuration:
https://github.com/Miserlou/Zappa

Run python code on AWS service periodically

I need to run some Python code on the AWS platform periodically (probably once a day). The program's job is to connect to S3, download some files from a bucket, do some calculations, and upload the results back to S3. This program runs for about 1 hour, so I cannot make use of a Lambda function, as it has a maximum execution time of 900 s (15 min).
I am considering using EC2 for this task. I am planning to set up the Python code to run at startup and execute as soon as the EC2 instance is powered on. It will also shut down the instance once the task is complete. The periodic restart of this EC2 instance will be handled by a Lambda function.
Though this is not the best approach, I want to know of any alternatives within the AWS platform (services other than EC2) that would be best for this job.
If you are looking for solutions other than Lambda and EC2 (which, depending on the scenario, can fit), you could use ECS (Fargate).
It's a great choice for microservices or small tasks. You build a Docker image with your code (Python, Node, etc.), tag it, and push the image to AWS ECR. Then you build a cluster for it and schedule the task with CloudWatch Events, or you can run a task directly using either the CLI or another AWS resource.
You don't have time limitations like Lambda.
You also don't have to set up an instance, because your dependencies are managed by the Dockerfile.
And, if needed, you can take advantage of the EBS volume attached to ECS (20-30 GB root) and increase from there, with the possibility of using EFS for tasks as well.
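Once the image is in ECR and the cluster exists, launching a one-off task can also be done from code. A hedged sketch using boto3 (cluster, task definition, and subnet names are placeholders), with the client injectable for testing:

```python
def run_fargate_task(cluster, task_definition, subnets, ecs_client=None):
    """Launch a one-off Fargate task and return the ARNs of the started tasks."""
    if ecs_client is None:
        import boto3  # lazy import: module loads without the AWS SDK installed
        ecs_client = boto3.client("ecs")
    resp = ecs_client.run_task(
        cluster=cluster,                 # e.g. "daily-jobs" (placeholder)
        taskDefinition=task_definition,  # e.g. "s3-batch:1" (placeholder)
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "ENABLED",
            }
        },
    )
    return [t["taskArn"] for t in resp["tasks"]]
```

The same call is what a scheduled CloudWatch Events rule performs on your behalf when you configure a scheduled task.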
I could point to other solutions, but they are way too complex for the task you are planning, and the goal is always to use the right service for the job.
Hopefully this could help!
Using EC2 or Fargate may be significant overkill. Most likely your easiest route is a simple AWS Glue job, triggered by a Lambda function running once per day, that does this (pull from S3, open the selected files if required, do some calculations on the file contents, then push the results back to S3) using Python and the AWS boto3 library (and other standard Python file-reading libraries if necessary).
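The Lambda trigger for such a setup could look roughly like this, assuming a Glue job named "daily-s3-job" (a placeholder) has already been created; the client is injectable so the function can be tested offline:

```python
JOB_NAME = "daily-s3-job"  # placeholder Glue job name

def lambda_handler(event, context, glue_client=None):
    """Kick off the Glue job; scheduled to run once per day."""
    if glue_client is None:
        import boto3  # lazy import so the module loads without the AWS SDK
        glue_client = boto3.client("glue")
    run = glue_client.start_job_run(JobName=JOB_NAME)
    return {"glue_job_run_id": run["JobRunId"]}
```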
See this SO question for an example and solution.
Good luck!

What is a viable Azure host for long running Python API Endpoint?

I have a Python script that takes 20 minutes to run. I need to be able to trigger this script via my Azure .NET application.
I am looking for a possible cloud based host to help me do this. Preferably Azure, but open to other options.
I have tried the following Options:
Azure Functions
Assessment: Found too many limitations on code structure (e.g. you have to organize Python files in a certain way)
Azure Web App
Assessment: Works to create an endpoint but has timeout issues for long requests
Azure Virtual Machine (VM)
Assessment: I simulated a trigger by scheduling the script to run frequently on the VM. This is not a bad solution, but not ideal either
What other viable options exist?
You can also choose to use Azure Web Jobs to serve this purpose.
It has a setting to specify the idle time (WEBJOBS_IDLE_TIMEOUT):
The value must be in seconds, e.g. 3600, which means the idle time before it times out is 1 hour. Note that this option affects all scheduled WebJobs, whether under a Web App or an Azure Function.
Reference: https://jtabuloc.wordpress.com/2018/06/05/how-to-avoid-azure-webjob-idle-timeout-exception/

Flask: Difference between google cloud functions and google web deploy

I am a newbie who wants to deploy his Flask app using Google Cloud Functions. When searching online, people tell me to deploy it as a Flask app. I want to ask if there is any difference between the two:
a cloud instance (i.e. deploying the Flask app on a Google Cloud VM) vs. a serverless Cloud Function
As described by John and Kolban, a Cloud Function is a single-purpose endpoint. You want to perform one thing, deploy one function.
However, if you want many consistent endpoints, like a microservice, you will have to deploy several endpoints that allow you to perform CRUD operations on the same data object. You should prefer to deploy several endpoints (CRUD) with the capability to easily reuse class and object definitions and business logic. For this, a Flask web server is what I recommend (and what I prefer; I wrote an article on this).
Packaging it for Cloud Run is the best option for having a serverless platform with a pay-per-use pricing model (and automatic scaling, and ...).
There is an additional great thing: the Cloud Functions request object is based on the Flask request object. By the way (and it's something I also present in my article), it's easy to switch from one platform to the other. You only have to choose according to your requirements, your skills, ... I also wrote another article on this.
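For illustration, a minimal HTTP Cloud Function: the request parameter passed to the entry point is a flask.Request, so Flask idioms carry over directly (the function name and greeting here are illustrative):

```python
def hello_http(request):
    """HTTP Cloud Function entry point; `request` is a flask.Request."""
    # request.args behaves like Flask's query-string MultiDict.
    name = (request.args or {}).get("name", "world")
    return f"Hello, {name}!"
```

Deployed with an HTTP trigger, this single function is the whole service, which is exactly the "one thing, one function" model described above.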
If you deploy your Flask app as an application in a Compute Engine VM instance, you are basically configuring a computer and application to run your code. The notion of Cloud Functions relieves you from the chore and toil of having to create and manage the environment in which your program runs. A marketing mantra is "You bring the code, we bring the environment". When using Cloud Functions, all you need to do is code your application logic. The maintenance of the server, scaling up as load increases, making sure the server is available, and much more are taken care of for you. When you run your code in your own VM instance, it is your responsibility to manage the whole environment.
References:
HTTP Functions
Deploying a Python serverless function in minutes with GCP

Serverless AWS Lambdas in a Docker for OnPremises Deployment

I have been searching for this for a few days and found some approaches like Serverless or Localstack, but what I would really like is to be able to code everything using AWS API Gateway and Lambdas for a cloud-based version of my software (which is solved) and not manage my deployments.
Then...
A customer wants to host a copy of it inside their own private network, so... I want to use the very same Lambda code (which makes no use of other AWS 'magic' services like DynamoDB, only "regular" dependencies), injecting it into a container running "an API Gateway"-like piece of software (perhaps a Python/Flask app parsing the exported API Gateway config?).
I am willing to build this layer unless a better idea shows up. I would put my Lambdas in a folder, let's say "aws_lambda", and my container would know how to transform the HTTP payload into an AWS event payload, import the module, and call 'lambda_handler' ... and hopefully that is it. With another container running MySQL and another running Nginx (emulating CloudFront for the static website), I will be done. The whole solution in a can.
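Such a translation layer could be sketched roughly as follows, assuming Flask and an API Gateway REST-style (v1) event shape; all names here are illustrative, not an existing package:

```python
from flask import Flask, Response, request

def make_app(lambda_handler):
    """Wrap an unmodified lambda_handler in a Flask app."""
    app = Flask(__name__)

    @app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
    @app.route("/<path:path>", methods=["GET", "POST"])
    def dispatch(path):
        # Translate the incoming HTTP request into an API Gateway-style event.
        event = {
            "httpMethod": request.method,
            "path": "/" + path,
            "queryStringParameters": dict(request.args) or None,
            "headers": dict(request.headers),
            "body": request.get_data(as_text=True) or None,
        }
        result = lambda_handler(event, None)  # no real Lambda context here
        # Translate the Lambda-style response dict back into HTTP.
        return Response(
            result.get("body") or "",
            status=result.get("statusCode", 200),
            headers=result.get("headers") or {},
        )

    return app
```

This covers only the request/response mapping; binary bodies, stage variables, and authorizers would need extra work.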
Any suggestions? Am I crazy?
Does anyone know some existing software solution to solve this?
If you are willing to use AWS SAM, the AWS SAM CLI offers what you're looking for.
The AWS SAM CLI implements its own API Gateway equivalent and runs the AWS Lambda functions in Docker containers. While it's mainly intended for testing, I don't see any reason, why you shouldn't be able to use it for your use-case as well.
Besides the various serverless plugins and Localstack, you can try the AWS SAM CLI to run a local API Gateway. The command is 'sam local start-api': https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html . It probably would not scale (I never tried it myself), and it is intended for testing.
Curiously, what you are considering (transforming a Lambda into a normal Flask server) is the opposite of Zappa, a serverless package that converts a normal Flask server into a Lambda function and uploads it to AWS. If you succeed in your original idea of converting a Flask request into a Lambda event, and care to package your code, it could be called "unzappa". While Zappa is a mature and large package, it would probably be easier to 'invert' some lightweight thing like awsgi: https://github.com/slank/awsgi/blob/master/awsgi/__init__.py
@Lovato, I have been using https://github.com/lambci/docker-lambda, which is a Docker image that mimics the Lambda environment. lambci seems to maintain good versions of the Lambda images for Node.js, Java, Python, .NET and even Go. So you can technically reuse your entire Lambda code in a Docker container running a Lambda-"like" environment. I call it Lambda-like mostly because AWS doesn't fully publish every piece of information about how Lambda works, but this is the nearest approximation I've seen. I use this for local development and testing of Lambda, and I have tested a trial "offline" Lambda. Let me know if this works for you.
I prefer to use the Dockerfiles and build my own Docker images for my use.
