I've been working on my VPS lately with one goal: have nginx handle all traffic as a reverse proxy in front of containers that hold my projects, which are mostly Flask apps run with the Gunicorn WSGI server.
I've made multiple attempts with a Dockerfile, docker-compose, and so on.
The closest I got was by following this post.
And these are the steps I followed to install Docker.
Even then it did not work as the example said: I wasn't able to see the message displayed by Flask, only the nginx "installation successful" message.
Even if it had worked, people on the Python server suggested I use Gunicorn (which makes sense), and I wasn't able to swap it in. Most importantly, I want nginx OUTSIDE the containers, where it would just route traffic into the other containers without living in one itself, and I couldn't pull that off either.
I come from a Heroku background, so it's still a little complicated for me to pull this off. I've watched countless videos and explanations online, but I couldn't find a good tutorial for my specific case of "gunicorn, nginx, flask, docker".
This is a lot to ask, so I'm not really expecting a detailed explanation of how to do this (even though I would REALLY appreciate one), nor a step-by-step guide of course, but pointing me in the right direction would help. Is there something I'm doing that could be done more efficiently, or just better? If not, is there anything you suggest I look into? Articles, docs, links, anything, hit me with it!
I would show the files I ended up with, but they were written for nginx running inside a container, which isn't what I want my end result to be, and they didn't work anyway, so I'm hoping to start from scratch.
These are the steps I was advised to work through (I'm still on them, because I'm restarting and taking this slowly, unlike last time); a sketch covering them follows the list:
Figure out the command to run your app using gunicorn
Add that command to your Dockerfile, and make sure the image builds
Publish the container's ports and make sure you can make requests against it correctly
Use NGINX to forward those requests
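To make those steps concrete, here is a minimal sketch, assuming a Flask app exposed as app in a file called app.py (the file names, module path and port are hypothetical placeholders):

# Dockerfile: step 1's gunicorn command becomes the CMD of step 2
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
# requirements.txt would need to list flask and gunicorn
RUN pip install -r requirements.txt
COPY . .
# step 3: the container listens on 8000; "app:app" means module app.py, variable app
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]

You can then verify step 3 in isolation with docker build -t myapp . followed by docker run -p 8000:8000 myapp and a curl http://localhost:8000/ before involving nginx at all.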
I don't know if this will help you, but for Flask with Gunicorn inside a container, I suggest you follow this tutorial: https://mark.douthwaite.io/getting-production-ready-a-minimal-flask-app/
with its source code: https://github.com/markdouthwaite/minimal-flask-api
Personally, it helped me a lot.
Next, you may want to build the image:
docker build -t image_name .
from the directory where the Dockerfile is located, and then run the container with something like:
docker run -p 8080:8080 image_name
And then use Nginx to set up a proxy pass to localhost:8080/.
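A minimal server block for that could look like the following sketch, assuming nginx runs directly on the host and the container publishes port 8080 as above (the server_name is a placeholder):

server {
    listen 80;
    server_name example.com;

    location / {
        # forward everything to the container published on port 8080
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

On Debian-style layouts you would drop that into /etc/nginx/sites-available/, symlink it into sites-enabled, and reload nginx.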
I am running a dockerized Django app and I am looking for a way to run (a) directive(s) every time before I build a Docker container. More concretely, I would like to run docker-compose -f production.yml run --rm django python manage.py check --deploy each time before I either build or up the production.yml file, and stop the build process if any errors occur. Like a pre-hook.
I know I could achieve this with a bash script, yet I was wondering if there is a way of doing this inside the docker-compose file. I can't find anything about it in the Docker documentation (except events, but I don't understand whether they serve what I want to achieve), and I assume that this is not possible. Yet maybe it is in fact possible, or maybe there is a hacky workaround?
Thanks in advance for any tips.
Currently, this is not possible. There have been multiple requests to add such functionality, but the maintainers do not consider this a good idea.
See:
https://github.com/docker/compose/issues/468
https://github.com/docker/compose/issues/1341
https://github.com/docker/compose/issues/6736
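For reference, the bash-script workaround mentioned in the question could be as small as the following sketch (the wrapper's file name and the build/up sequence are illustrative; the check command is the asker's own):

#!/usr/bin/env bash
set -e  # abort on the first failing command

# pre-hook: run the deploy checks; a non-zero exit code stops everything below
docker-compose -f production.yml run --rm django python manage.py check --deploy

# only reached if the check passed
docker-compose -f production.yml build
docker-compose -f production.yml up -d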
I'm trying to develop a database manager in Django and want to develop and deploy it in Docker. As my IDE, I'd like to continue using PyCharm, but I'm having trouble understanding how it interacts with Docker.
I am new to Docker and its integration in PyCharm. My system runs Windows 10 and Docker for Windows.
I already tried using PyCharm's remote interpreter, but I have to activate the port forwarding manually (using Kitematic), since PyCharm somehow does not forward the exposed port automatically.
I also tried using a "Docker Deployment" run configuration. However, I can't get requests to localhost:8000 to get through to the Django server. All I get are empty response errors.
(Note: this issue was addressed in the accepted answer.)
It would really help me to have an explanation of how PyCharm's two options (remote interpreter and Docker deployment) really work, and ideally an up-to-date tutorial for setting up Django with them. Unfortunately I could only find outdated tutorials, and JetBrains' help pages are either outdated or do not explain it in enough detail.
Could someone help me out and guide me through this or point me to good resources?
Assuming you have the latest Docker (for Mac or for Windows) along with an updated version of PyCharm, you could achieve the port forwarding (binding) this way:
Create a new run configuration
Select your Docker server in the Deployment tab. If nothing shows, create a new one. Test that it actually works via View > Tool Windows > Docker and connecting to the Docker server. You should see the existing images and running containers.
In the Container tab, make sure to add the right Ports bindings.
An important note
Make sure that you are running your Django server on 0.0.0.0:8000 and not localhost:8000
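Inside the container, that usually means starting the development server with an explicit bind address, e.g.:

python manage.py runserver 0.0.0.0:8000

localhost inside the container is not reachable through Docker's port binding; 0.0.0.0 makes Django listen on all interfaces, including the one the published port maps to.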
I am a Python programmer, and server administration has always been a bit hard for me to get into. I always read tutorials and, in practice, just repeated the steps each time I set up a new project. I always used uWSGI, but realized that gunicorn is easier for simple projects like mine.
Recently, I successfully set up my first single gunicorn application with the help of this article: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
But what should I do if I want to launch another app with gunicorn? Should I just make another systemd service file, like myproject.service? I'm looking for a convenient 'one-click' setup, so I can easily transfer my project to another machine or add more gunicorn applications without much configuration. Or maybe I should use another process manager, like supervisor? What is the best solution for a newbie like me?
Sorry if my question's too dumb, but I'm really trying.
Thank you!
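For context, the myproject.service idea from the question is the usual pattern: each gunicorn app gets its own unit file, and adding an app means copying the file and changing the paths. A minimal sketch in the spirit of the linked tutorial, with every path and name a hypothetical placeholder:

# /etc/systemd/system/myproject.service
[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=www-data
WorkingDirectory=/home/user/myproject
ExecStart=/home/user/myproject/venv/bin/gunicorn --workers 3 --bind unix:/home/user/myproject/myproject.sock myproject.wsgi:application

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now myproject.service, and repeat the copy for each additional app.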
I just came across Docker a couple days ago and have been doing some research, but one thing is still a bit unclear to me.
In a video I watched by the creator of Docker, he likened the tool to a shipping container: once your stack is set up inside, you can guarantee it works as intended wherever it ships.
But I'm seeing a lot of container images which are just a single part of the stack, i.e. an nginx image or a uwsgi image.
Basically I want to run a web server using Python, Flask, nginx, and uWSGI. They're all part of the stack, so should they go in a single container, or should certain parts be in their own containers?
I'll have a MySQL server as well, and this seems more logical to run in its own container.
Apologies if this is a matter of opinion, but to me it feels like there is only one right way to go about this.
Docker containers are treated as deployment units: you package an application (or part of an application) and all its dependencies into a container that can be deployed independently.
Your application could be monolithic, with the entire application fitting into one container that just exposes HTTP endpoints for the browser to access. Or it could be composed of sub-components that can be independently deployed and managed (something like microservices) which together form the complete application; in that case, each independent sub-component would reside in a container of its own.
So the decision on how many containers, and how many processes within a container, depends on the composition of your application and the kind of scalability you want to achieve.
Docker containers are meant to run single processes, but of course you can work around that by running process-management tools like supervisord. I'm not very familiar with the Python stack you are talking about, but I can explain this in terms of a stack comprising Nginx + Node + Redis. I have elaborated on a sample Docker workflow with this stack in my blog post: http://anandmanisankar.com/posts/docker-container-nginx-node-redis-example/
In the example used in the blog post, Nginx, Node and Redis run in separate containers, for the following reasons:
I want to be able to scale my Node application depending on the load, so it makes sense to run it in a separate container that I can scale independently.
I run Redis on a separate container which acts as a shared data store for my node containers.
To load balance my node containers I run Nginx - again on a separate container - that can dynamically balance the load across the scaled out node containers. The load balancing configuration can also be dynamically updated based on the state/availability/health of the node containers. It would be ideal if I could implement a service discovery mechanism which dynamically generates the Nginx configuration based on the availability of the containers. So scaling up would just be a matter of adding additional containers, and fault tolerance (failure of some node containers) would be automatically handled as well.
You can find the code behind this Docker workflow in my GitHub repo: https://github.com/msanand/docker-workflow
You could try to draw an analogy from this to any other web architecture stack. Hope this helps!
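Translated to the asker's Python stack, the same separate-containers idea might look roughly like this in docker-compose (a sketch only, not the blog's actual setup; every image, file and service name is illustrative):

version: "3"
services:
  web:                         # Flask app behind its WSGI server, scalable on its own
    build: .
    expose:
      - "8000"
  nginx:                       # reverse proxy / load balancer in its own container
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf   # proxies to the web service
    ports:
      - "80:80"
    depends_on:
      - web
  db:                          # MySQL in its own container, as the asker suggested
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential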
I think you might like this: I made a public (and open-source) Docker image with all the bells and whistles that you can use to build a Python Flask web application.
It has uWSGI for running the application, Nginx to serve HTTP and Supervisord to control them, so you don't have to learn how to install and configure all those to build your Python Flask web app.
It seems like uWSGI with Nginx is one of the more robust (and with great performance) ways to deploy a Python web app. Here are the benchmarks: http://nichol.as/benchmark-of-python-web-servers.
There are even some template projects you can use to bootstrap your own. And you don't have to clone the full project or anything; you can just use it as a base image.
Docker Hub: https://hub.docker.com/r/tiangolo/uwsgi-nginx-flask/
GitHub: https://github.com/tiangolo/uwsgi-nginx-flask-docker
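Using it as a base image is typically just a couple of lines in your own Dockerfile, something like this sketch based on the project's README (the tag and the app layout may differ for your project):

FROM tiangolo/uwsgi-nginx-flask:python3.8
# the image expects the Flask app under /app, e.g. /app/main.py exposing a variable named app
COPY ./app /app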
And about the "one process per container" debate, some say that this is one of the key misconceptions when you look at it from a microservices point of view: https://valdhaus.co/writings/docker-misconceptions/
As others here have said, officially it is recommended to have one process per container, see
https://docs.docker.com/articles/dockerfile_best-practices/
However, I think there is much debate about how much process isolation you need. One example that is closer to the full stack is the phusion passenger image (phusion/baseimage and phusion/passenger-docker)
that bundles, amongst other things, nginx, ruby and passenger. Some people hate this, others think there is a place for such images. Opinions expressed about this particular image and a linked article discussing it can be found here: https://news.ycombinator.com/item?id=7258009. I think that you can generalize a lot of what is said there to your case and that the variety of arguments supports the variety of image types you have observed.
Personally, I think the full-stack vs single-process debate comes down to the requirements of what you are trying to achieve. If you worry about scalability, the single-process paradigm might be better for you. If you care about quickly bringing up a dev environment, it could be more straightforward to create or take a container that feels a bit more like a virtual machine.
I have a Django project with one app that is working locally, and I am trying to make it work on the server, but I imagine I am still missing something...
The steps I have followed are:
1) create a virtualenv
2) install Django and the libraries I need
3) copy my local project to the server, keeping the same directory structure
4) create the file passenger_wsgi.py (python passenger_wsgi.py did not return any error)
After this, do I need to do anything like python manage.py runserver? Or should I already be able to see the site through mydomain/myproject/myapp (when I do, I just get a 404 error)?
I have read the Django book and followed the tutorial, but this part is not well described anywhere...
Thanks in advance for any help!
Deployment is explained in the documentation.
You need to actually serve your application with some kind of HTTP server plus something that runs your Python code. Some of the possible combinations are:
nginx with uWSGI
Any web server as a reverse proxy in front of gunicorn (see the sketch after this list)
Apache with mod_wsgi
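For instance, the gunicorn option boils down to one command plus a proxy rule in the front-end server (a sketch; the project name is a placeholder):

gunicorn --bind 127.0.0.1:8000 myproject.wsgi:application

with the web server forwarding requests to 127.0.0.1:8000.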
Your hosting service may or may not give you the choice, or even the possibility, to do this.
There is a list of Django-friendly hosts in the Django wiki.
Well, firstly, you might want to go through the previously asked questions.
When you are deploying using Passenger, you do not need to run manage.py runserver, etc. The passenger_wsgi.py file should take care of it.
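For reference, a passenger_wsgi.py for Django is typically only a few lines, along the lines of this sketch (the settings module name is a placeholder):

import os
import sys

# make the project importable when Passenger loads this file
sys.path.insert(0, os.path.dirname(__file__))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()  # Passenger looks for a variable named "application"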
You might want to check this link out; the first answer also contains a link to Dreamhost's documentation, which details quite extensively how to achieve the same.
Visit Deploying Django app using passenger
From my personal experience, I found an Nginx and uWSGI setup much easier to handle, work with and debug through the logs, but this is subjective to your needs and the platform you have.