How is deploying Flask on AWS Elastic Beanstalk different from running the script? - python

What is the difference between deploying a Flask application on an ec2 instance (in other words running your script on any computer) and deploying a Flask application via AWS Elastic Beanstalk? The Flask deployment documentation says that:
While lightweight and easy to use, Flask’s built-in server is not
suitable for production as it doesn’t scale well and by default serves
only one request at a time. Some of the options available for properly
running Flask in production are documented here.
One of the deployment options they recommend is AWS Elastic Beanstalk. When I read through Amazon's explanation of how to deploy a Flask app, however, it seems like they are using the exact same server application as comes built-in to Flask, which for example is single threaded and so cannot handle simultaneous requests. I understand that Elastic Beanstalk allows you to deploy multiple copies, but it still seems to use the built-in Flask server application. What am I missing?

TL;DR Completely different - Elastic Beanstalk does use a sensible WSGI runner that's better than the Flask dev server!
When I read through Amazon's explanation of how to deploy a Flask app, however, it seems like they are using the exact same server application as comes built-in to Flask
Almost, but not quite.
You can confirm that this isn't the case by removing the run-with-built-in-server section yourself - i.e. the following from the example:
if __name__ == "__main__":
    # Setting debug to True enables debug output. This line should be
    # removed before deploying a production app.
    application.debug = True
    application.run()
You'll stop being able to run it yourself locally with python application.py but it'll still happily run on EB!
The EB Python platform uses its own WSGI server (Apache with mod_wsgi, last I looked) and some assumptions / config to find your WSGI callable:
From Configuring a Python project for Elastic Beanstalk:
By default, Elastic Beanstalk looks for a file called application.py to start your application. If this doesn't exist in the Python project that you've created, some adjustment of your application's environment is necessary.
If you check out the docs for the aws:elasticbeanstalk:container:python namespace you'll see you can configure it to look elsewhere for your WSGI application:
WSGIPath: The file that contains the WSGI application. This file must have an "application" callable. Default: application.py
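For illustration only, a minimal application.py that satisfies this convention could look like the sketch below (the route and message are placeholders; the important part is the module-level "application" callable and the absence of any application.run() block):
# application.py - minimal sketch; EB's WSGI server imports the module-level
# object named "application", so no application.run() call is needed here.
from flask import Flask

application = Flask(__name__)

@application.route("/")
def index():
    return "Hello from behind Elastic Beanstalk's WSGI server!"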

Elastic compute resources (AWS and others) generally allow for dynamic load balancing and start more compute resources as they are needed.
If you deploy on a single EC2 instance and that instance reaches capacity, your users will experience poor performance. If you deploy elastically, new resources are added dynamically to ensure smooth performance.

Related

Deploy Flask with Gunicorn but without nginx for internal API?

I have written a small Flask API and want to deploy it on an internal network (no internet access) for production use.
There will only be one client system using the API during normal operation.
I think the best option is to simply use Gunicorn to deploy it and call it a day. However, most articles and posts about "how to deploy a Flask app" use Gunicorn together with nginx as the frontend.
For internet-facing deployments this certainly makes sense, but is that also the case for my use case?
Should I still add an nginx frontend?

Should I use Gunicorn with a Python Flask application if I will put it in Docker and then use it in a cloud environment?

Should I use Gunicorn with a Python Flask application if I will put it in a Docker image and then use it in a cloud environment?
I have seen a lot of tutorials on how to build a Flask application and deploy it to a cloud service as a Docker image:
make the Flask application
make the last line in the Dockerfile:
CMD ["python", "my_app.py"]
push the image and let the cloud service (AWS, Azure, etc.) use its load balancer, with the amount of CPU and memory you want as the rule before spinning up another instance.
But then I got to this tutorial, which uses Gunicorn:
https://levelup.gitconnected.com/usage-of-gunicorn-for-deploying-python-web-applications-1e296618e1ab
where all the steps are the same, just the last line of the Dockerfile would be
CMD ["python", "-m", "gunicorn", "my_app:app"]
so it now uses the Gunicorn WSGI server,
and the image is pushed the same way to be used in the cloud web application.
I understand Gunicorn can add more workers and threads as arguments on the CLI; what would be the better approach, or is one better than the other?
Thanks guys.
Yes, you should always use a production server. As covered in the Flask docs, Deploy to Production: Run with a Production Server:
you should not use the built-in development server (flask run). The development server is provided by Werkzeug for convenience, but is not designed to be particularly efficient, stable, or secure.
Using gunicorn (or any other production WSGI server) is a necessary practice for deploying to production, for speed, stability, and security. These concerns persist even if your app is deployed behind a load balancer or proxy (like an AWS load balancer). Both are necessary: you don't want to expose your WSGI server directly to the open internet either; it should sit behind a proper load balancer / proxy.
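As a rough sketch of the workers/threads tuning mentioned in the question, those options can also live in a gunicorn.conf.py baked into the image instead of being passed on the CLI (the values below are just assumptions to tune for your workload):
# gunicorn.conf.py - hypothetical settings picked up by "python -m gunicorn my_app:app"
import multiprocessing

bind = "0.0.0.0:8000"                          # listen on all interfaces inside the container
workers = multiprocessing.cpu_count() * 2 + 1  # a common starting heuristic, not a hard rule
threads = 2                                    # a few threads per worker help with I/O-bound requests
accesslog = "-"                                # send access logs to stdout so the cloud
errorlog = "-"                                 # platform's log collector can pick them up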

Which production server to use for Python applications in Cloud Run?

I want to use GCP Cloud Run as the technology to run my Python Flask app, so I have to dockerize it. Most of the examples I've seen use either the built-in Flask server or the Gunicorn server as the ENTRYPOINT, which gives a warning on the console that it shouldn't be used in production.
My question is: with a platform like GCP Cloud Run, does it matter which server I use to run that code? What would be the performance impact of that choice?
You want gunicorn, and you'll need to configure it correctly.
Typically in these setups there will be an external HTTP server proxying requests to your server. So it matters rather less which webserver you're using on the backend, because it's not directly exposed.
That being said, the built-in Flask webserver isn't ideal, so gunicorn would probably be better. You will need to tweak Gunicorn's settings slightly to work correctly in a container: logging, the heartbeat setting, and parallelism.
See https://pythonspeed.com/articles/gunicorn-in-docker/ for details.
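As a hedged illustration of the kind of container-oriented adjustments that article describes (the exact values are assumptions, not a recommendation):
# gunicorn.conf.py - illustrative container-oriented adjustments
worker_tmp_dir = "/dev/shm"  # keep Gunicorn's heartbeat files in memory rather than on disk
accesslog = "-"              # log requests to stdout so Cloud Run's logging picks them up
errorlog = "-"               # log errors to stderr for the same reason
workers = 1                  # Cloud Run scales by adding instances, so keep workers low...
threads = 8                  # ...and rely on threads for concurrency within an instance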

How to deploy Flask Web app in Production server from PyCharm

I am new to Flask web development with Python. I developed a simple Flask app with PyCharm. Now I want to deploy it on my college server. I remember that when I developed PHP web apps I used to copy them to the www folder in /var and run them from there.
Can someone explain how I deploy a Python web app on a Linux production server?
First, you need to find out from its sysadmin what your server provides. There are tens of ways to do this, and all of them have different answers.
Given that you have used php on the system before, I'll assume you are using Apache httpd.
First, you would need to have a WSGI provider installed (e.g. mod_wsgi), and then wire your application into that WSGI provider.
See the mod_wsgi page and flask docs for more information on how to precisely do this.
https://code.google.com/p/modwsgi/
http://flask.pocoo.org/docs/0.10/deploying/
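For instance, a typical wiring is a small .wsgi file that an Apache WSGIScriptAlias directive points at, roughly along these lines (the paths and module names are placeholders for your own project):
# yourapp.wsgi - illustrative mod_wsgi entry point
import sys

# make the project importable for the Apache worker processes
sys.path.insert(0, "/var/www/yourapp")

# mod_wsgi expects a module-level object called "application"
from yourapp import app as application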
Another option is to have Python bring its own web server and optionally put a proxy in front of it to redirect / load balance.

Flask unresponsive while uploading file or waiting for other server

My Flask app stops responding when uploading files or when collecting data from another server via GET. I assume the problem is that Flask is only running on one thread.
How can I change this, so multiple users can use the site?
Flask's development webserver (invoked when you use app.run) is not a production web server.
Quoting the docs:
You can use the builtin server during development, but you should use a full deployment option for production applications. (Do not use the builtin development server in production.)
If you want to use Flask in a production environment take a look at the deployment options suggested by the documentation.
For testing purposes with small applications that are doing slightly complicated things, I deploy the code I'm developing behind CherryPy using this snippet. (The only disadvantage of this pattern is that you lose access to Werkzeug's debugger.)
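A common shape for that kind of CherryPy wrapper (not necessarily the exact snippet linked above; the module name and port are placeholders) is something like:
# serve.py - sketch of running a Flask app behind CherryPy's production WSGI server
import cherrypy
from myapp import app  # hypothetical module containing the Flask app

# mount the Flask WSGI app at the root of CherryPy's tree
cherrypy.tree.graft(app, "/")
cherrypy.config.update({
    "server.socket_host": "0.0.0.0",
    "server.socket_port": 8080,
    "engine.autoreload.on": False,
})

cherrypy.engine.start()
cherrypy.engine.block()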
