Deploy a Django project to AWS EB using Docker and nginx - python

Currently, I am trying to deploy a Django project to AWS Elastic Beanstalk, but I'm facing a lot of problems. I was able to dockerize the project and deploy it on AWS Elastic Beanstalk, but when I try to access the site I always see: 502 Bad Gateway. Locally, the project runs smoothly. I'm not very familiar with nginx and I have no idea how to solve this problem.
This is my project structure:
This is my Dockerfile:
# Creating image based on official python3 image
FROM python:3
MAINTAINER Jaron Bardenhagen
# Sets dumping log messages directly to stream instead of buffering
ENV PYTHONUNBUFFERED 1
# Creating and putting configurations
RUN mkdir /config
ADD config/app /config/
# Installing all python dependencies
RUN pip install -r /config/requirements.txt
# Open port 8000 to outside world
EXPOSE 8000
# When container starts, this script will be executed.
# Note that it is NOT executed during building
CMD ["sh", "/config/on-container-start.sh"]
# Creating and putting application inside container
# and setting it to working directory (meaning it is going to be default)
RUN mkdir /app
WORKDIR /app
ADD app /app/
This is my docker-compose file:
# File structure version
version: '3'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB_PORT: "5432"
      POSTGRES_DB_HOST: "*******"
      POSTGRES_PASSWORD: "*******"
      POSTGRES_USER: Jaron
      POSTGRES_DB: ebdb

  # Build from remote dockerfile
  # Connect local app folder with image folder, so changes will be pushed to image instantly
  # Open port 8000
  app:
    build:
      context: .
      dockerfile: config/app/Dockerfile
    hostname: app
    volumes:
      - ./app:/app
    expose:
      - "8000"
    depends_on:
      - db

  # Web server based on official nginx image
  # Connect external port 8000 (which you can access from the browser)
  # with internal port 8000 (which is linked to the app's port 8000 in the configs)
  # Connect local nginx configuration with image configuration
  nginx:
    image: nginx
    hostname: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - app
This is the Dockerrun.aws File:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "******/******:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}
On-container-start.sh file:
# Create migrations based on django models
python manage.py makemigrations
# Migrate created migrations to database
python manage.py migrate
# Start gunicorn server at port 8000 and keep an eye for app code changes
# If changes occur, kill worker and start a new one
gunicorn --reload project.wsgi:application -b 0.0.0.0:8000
And here is the file for the nginx settings (app.conf):
# define group app
upstream app {
# balancing by ip
ip_hash;
# define server app
server app:8000;
}
# portal
server {
# all other requests proxies to app
location / {
proxy_pass http://app/;
}
# only respond to port 8000
listen 8000;
# domain localhost
server_name localhost;
}
I really appreciate any kind of help!

First, check this question on Stack Overflow: link to question.
If that doesn't solve your problem, try the suggestion below.
Try replacing the Postgres database with RDS, and make sure your RDS instance and EB environment are in the same VPC. I had this problem when I tried to deploy a Django project on EB.
Removing the Postgres docker image and connecting to RDS instead solved my issue.
For some reason, EB stops Postgres images.
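If you attach an RDS instance through the Elastic Beanstalk console, EB injects the connection details as `RDS_*` environment properties. A minimal sketch (not from the original post) of reading them in settings.py:

```python
import os

def rds_database_settings(env=os.environ):
    """Build a Django DATABASES dict from the RDS_* variables that
    Elastic Beanstalk injects when an RDS instance is attached to the
    environment. Returns None when no RDS variables are present
    (e.g. when running locally)."""
    if "RDS_HOSTNAME" not in env:
        return None
    return {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": env["RDS_DB_NAME"],
            "USER": env["RDS_USERNAME"],
            "PASSWORD": env["RDS_PASSWORD"],
            "HOST": env["RDS_HOSTNAME"],
            "PORT": env["RDS_PORT"],
        }
    }
```

In settings.py you would then fall back to a local database when `rds_database_settings()` returns None.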

Related

Docker/ Django/ Postgres - could not translate host name "db" to address: Name or service not known

A few days ago I asked a question about a Postgres error.
I followed your suggestions and they helped a bit, but besides not solving my problem, some new problems arose.
I have a Django/Postgres app which works locally with no problems. When I try to build a Docker image, it builds, but when I try to start the container I get the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
I'll show you my Dockerfile:
# Origin image
FROM python:3.8
RUN apt-get update
# Define directory
RUN mkdir /project
WORKDIR /project
# Install requirements
RUN apt-get install -y vim
RUN python -m pip install --upgrade pip
COPY requirements.txt /project/
RUN pip install -r requirements.txt
COPY . /project/
# Expose some ports
EXPOSE 22 5432 8080 8009 8000
# default command
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
And here is my docker-compose file:
version: "3.3"

services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_NAME=plataforma
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=administrador
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=plataforma
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=administrador
    depends_on:
      - db
    env_file:
      - ./plataforma/.env
In settings.py I configure the database this way:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': env('POSTGRESQL_NAME'),
        'USER': env('POSTGRESQL_USER'),
        'PASSWORD': env('POSTGRESQL_PASS'),
        'HOST': env('POSTGRESQL_HOST'),
        'PORT': env('POSTGRESQL_PORT'),
    }
}
And this is my .env file:
POSTGRESQL_NAME=plataforma
POSTGRESQL_USER=admin
POSTGRESQL_PASS=administrador
POSTGRESQL_HOST=db
POSTGRESQL_PORT=5432
When I run my app locally I use localhost instead of db for POSTGRESQL_HOST.
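One way to handle that local-vs-compose difference (a sketch, not from the original post) is to default the host to localhost so the same settings.py works in both places; inside compose the POSTGRESQL_HOST variable from the .env file overrides it:

```python
import os

def postgres_settings(env=os.environ):
    """Build connection parameters for the database above, defaulting
    HOST to localhost when POSTGRESQL_HOST is not set (i.e. when
    running directly on the host machine instead of inside compose)."""
    return {
        "NAME": env.get("POSTGRESQL_NAME", "plataforma"),
        "USER": env.get("POSTGRESQL_USER", "admin"),
        "PASSWORD": env.get("POSTGRESQL_PASS", ""),
        "HOST": env.get("POSTGRESQL_HOST", "localhost"),
        "PORT": env.get("POSTGRESQL_PORT", "5432"),
    }
```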
Now, when I run $ sudo docker-compose run web python manage.py runserver, the image builds and the database container is running, but the app container is stopped. If I run $ docker start container-name, it doesn't start.
If I run $ docker run -d --restart always --name new-container-name image-name, a new container starts correctly, but if I get inside it and try to run Django migrations, I get the same error:
/usr/local/lib/python3.8/site-packages/django/core/management/commands/makemigrations.py:105: RuntimeWarning: Got an error checking a consistent migration history performed for database connection 'default': could not translate host name "db" to address: Name or service not known
Maybe I am using the docker-compose file in a wrong way. I've tried to install Postgres directly from the Dockerfile, but then I get the error from my last question:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
Also, I tried this solution and this solution, but they didn't work either.
So I am literally lost. I have also read this Docker quickstart, but I can't figure out what I'm doing wrong.
Can anyone help me?
Sorry if I'm misunderstanding some Stack Overflow rules; I'm still figuring out how it works.
Thank you!
EDIT: Thanks to @DavidMaze's help, we found out what the problem was!
David said:
The postgres image accepts an environment variable POSTGRES_DB to set the initial database name, not POSTGRES_NAME; does changing this (and deleting the ./data/db host directory) help?
So I changed the POSTGRES_NAME variable to POSTGRES_DB, erased the ./data folder (and purged the image and container lists), and ran docker-compose up.
The first time I ran that command, it seemed to try to set up the web container first and fail (with an OperationalError), but when I ran the same command again it worked without errors.
Thank you so much for the help!
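The behaviour on the first run (web fails, second run works) is the classic startup race: depends_on only waits for the db container to start, not for Postgres inside it to accept connections. A minimal sketch of a "wait for the database" helper you could call before migrating (hypothetical helper, not from the thread):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Return True once a TCP connection to host:port succeeds,
    False if the timeout elapses first. Useful as a guard before
    running migrations, e.g. wait_for_port("db", 5432)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Postgres is not accepting connections yet; retry shortly.
            time.sleep(interval)
    return False
```

An alternative with the same effect is a wait-for-it style entrypoint script, or a healthcheck on the db service.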

Deploying Django using Nginx Docker Container

Situation: I have a Django application that I want to deploy. The tools I use for this are nginx and Gunicorn, and everything runs inside a Docker container using Docker Desktop.
Problem: I'm able to view the Django app locally using the IP of my Docker VM, the IP of my machine, and the loopback IP. However, when I try to access it from my laptop (another machine connected to the same WiFi), I can't.
My machine: Windows 10. I have already enabled port 80 in the Windows Firewall inbound and outbound rules.
Steps taken: I've tried running python -m http.server 80 on my machine, and it works perfectly fine, so I'm sure the issue is somewhere in Docker Desktop's Hyper-V setup or maybe the nginx configuration.
My docker-compose file
version: '3'

services:
  dashboard:
    build: .
    volumes:
      - .:/opt/services/dashboard/src
      - static_volume:/opt/services/dashboard/src/static
    networks:  # <-- here
      - nginx_network
  nginx:
    image: nginx:1.13
    ports:
      - 0.0.0.0:80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/dashboard/src/static
    depends_on:
      - dashboard
    networks:  # <-- here
      - nginx_network

networks:  # <-- and here
  nginx_network:
    driver: bridge

volumes:
  static_volume:  # <-- declare the static volume
My Dockerfile
# start from an official image
FROM python:3.6
# arbitrary location choice: you can change the directory
RUN mkdir -p /opt/services/dashboard/src
WORKDIR /opt/services/dashboard/src
# install our two dependencies
RUN pip install gunicorn django requests jira python-dateutil
# copy our project code
COPY . /opt/services/dashboard/src
# expose the port 80
EXPOSE 80
# define the default command to run when starting the container
CMD ["gunicorn", "--bind", ":80", "dashboard.wsgi:application"]
My nginx config file
# first we declare our upstream server, which is our Gunicorn application
upstream dashboard_server {
    # docker will automatically resolve this to the correct address
    # because we use the same name as the service: "dashboard"
    server dashboard:80;
}
# now we declare our main server
server {
    listen 80;
    server_name localhost;

    location / {
        # everything is passed to Gunicorn
        proxy_pass http://dashboard_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /opt/services/dashboard/src/static/;
    }
}
Here is an image of my folder structure.
QUESTION: How do I at least make it viewable on my laptop, which is connected to the same WiFi as my desktop machine? I've tried accessing it using the IP of my machine.
Restarted the router switch and it worked perfectly.

ngrok in docker cannot connect to Django development server

I am working on a local Django web server at http://localhost:8000, which works fine.
Meanwhile, I need ngrok to do the port forwarding: ngrok http 8000, which works fine too.
Then I wanted to put ngrok, Postgres, Redis, MailDev, etc., all in Docker containers. Everything else works fine, except ngrok:
ngrok fails to connect to localhost:8000.
I understand why: ngrok is running in a separate "server", and localhost on that server does not have a web server running.
I am wondering how I can fix it.
I tried network_mode: "host" in my docker-compose file, but it does not work (macOS).
I tried to use host.docker.internal, but as a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing.
Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using Serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to port 8000 with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web docker service in my docker compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'

services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint script.

Docker Nginx does not listen to browser

I have this docker-compose.yml file:
version: '2'

services:
  nginx:
    image: nginx:latest
    container_name: nz01
    ports:
      - "8001:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: dz01
    depends_on:
      - db
    volumes:
      - ./src:/src
    expose:
      - "8000"
  db:
    image: postgres:latest
    container_name: pz01
    ports:
      - "5433:5432"
    volumes:
      - postgres_database:/var/lib/postgresql/data:Z

volumes:
  postgres_database:
    external: true
And this dockerfile:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
RUN pip install -r requirements.pip
CMD python manage.py collectstatic --no-input;python manage.py migrate; gunicorn computationalMarketing.wsgi -b 0.0.0.0:8000
The web and Postgres containers do not log any errors, just success messages, when I run docker-compose build and docker-compose up -d.
At this point the three containers are running, but when I go to the browser and navigate to localhost:8001, it does not work.
It shows the "connection has been reset" error message.
Despite that, the web server still does not log any errors, so I guess I have everything properly configured in my Django app. I really believe the problem is related to nginx, because when I check the nginx log (using Kitematic) it is still empty.
Why isn't nginx listening for connections?
Hint:
This error is happening in a new project. To check whether I had something wrong, I ran an old project, and it works perfectly. Then I copied the working project into my new folder, removed all existing containers, and tried to run this old project from the new folder, and there was the surprise: it does not work now, despite being an exact copy of the project that works in the other folder...
EDIT
In my repo I have a config/nginx folder with the helloworld.conf file:
upstream web {
    ip_hash;
    server web:8000;
}

server {
    listen 8001;
    server_name localhost;

    location /static/ {
        autoindex on;
        alias /src/static/;
    }

    location / {
        proxy_pass http://web/;
    }
}
Still the same error... I do not see any error in the logs.
Django container log
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
No migrations to apply.
[2018-11-05 13:00:09 +0000] [8] [INFO] Starting gunicorn 19.7.1
[2018-11-05 13:00:09 +0000] [8] [INFO] Listening at: http://0.0.0.0:8000 (8)
[2018-11-05 13:00:09 +0000] [8] [INFO] Using worker: sync
[2018-11-05 13:00:09 +0000] [11] [INFO] Booting worker with pid: 11
Your nginx configuration should look like this:
upstream web {
    ip_hash;
    server web:8000;
}

server {
    listen 8001;
    server_name localhost;

    location / {
        proxy_pass http://web/;
    }
}
Since this kind of problem is usually difficult to debug/reproduce, I have created a dummy example that just runs a Django app and serves it via nginx. You can try to adjust it to your needs. Please forgive me if I have missed something or done something that shouldn't be done, as I'm unfamiliar with the Django framework.
Dockerfile for Django container:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code && \
    pip install django
WORKDIR /code
ADD helloworld /code
docker-compose.yml:
version: '3'

services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./helloworld:/code
    expose:
      - "8000"
config/nginx/django.conf:
upstream web {
    ip_hash;
    server web:8000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://web/;
    }
}
The Django app is inside the helloworld folder.
For this example, traffic is simply passed through over a TCP port. The proper way would be to use Unix sockets instead of ports, but again I'm unfamiliar with Django.
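The unix-socket variant mentioned above could look roughly like this. This is a sketch under assumptions not in the original answer: gunicorn replaces runserver, the project module is named helloworld, and /sockets is a volume mounted into both containers:

```nginx
# gunicorn side (in docker-compose, replacing the runserver command):
#   command: gunicorn helloworld.wsgi:application --bind unix:/sockets/app.sock
#
# nginx side (config/nginx/django.conf):
upstream web {
    # proxy to the shared unix socket instead of a TCP port
    server unix:/sockets/app.sock;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://web/;
    }
}
```

With a socket there is no port to expose between the two containers; only the shared volume matters.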

How to install and start CouchDB server in a Docker image of a Web Application?

I made a Docker image of a web application built with Python, and my web application needs a CouchDB server to be running before the programme starts. Can anyone please tell me how I can install and run a CouchDB server from the Dockerfile of this web application? My Dockerfile is given below:
FROM python:2.7.15-alpine3.7
RUN mkdir /home/WebDocker
ADD ./Webpage1 /home/WebDocker/Webpage1
ADD ./requirements.txt /home/WebDocker/requirements.txt
WORKDIR /home/WebDocker
RUN pip install -r /home/WebDocker/requirements.txt
RUN apk update && \
    apk upgrade && \
    apk add bash vim sudo
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
Welcome to SO! I solved it by using Docker Compose to run a separate CouchDB container and a separate Python container. The relevant part of the configuration file docker-compose.yml looks like this:
# This helps to avoid routing conflicts within virtual machines:
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 192.168.112.0/24

# The CouchDB data is kept in a docker volume:
volumes:
  couchdb_data:

services:
  # The container couchServer uses the Dockerfile from the subdirectory CouchDB-DIR
  # and it has the hostname 'couchServer':
  couchServer:
    build:
      context: .
      dockerfile: CouchDB-DIR/Dockerfile
    ports:
      - "5984:5984"
    volumes:
      - type: volume
        source: couchdb_data
        target: /opt/couchdb/data
        read_only: False
      - type: volume
        source: ${DOCKER_VOLUMES_BASEPATH}/couchdb_log
        target: /var/log/couchdb
        read_only: False
    tty: true
    environment:
      - COUCHDB_PASSWORD=__secret__
      - COUCHDB_USER=admin
  python_app:
    build:
      context: .
      dockerfile: ./Python_DIR/Dockerfile
    ...
In the Docker subnet, CouchDB can be accessed at http://couchServer:5984 from the Python container. To ensure that the CouchDB data is not lost when restarting the container, it is kept in the separate Docker volume couchdb_data.
Use the environment variable DOCKER_VOLUMES_BASEPATH to determine in which directory CouchDB logs. It can be defined in a .env file.
The networks section is only necessary if you have routing problems.
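From inside the Python container, reaching CouchDB is then an ordinary HTTP call against the service hostname. A small sketch (assuming the service name couchServer and port 5984 from the docker-compose.yml above; the helper names are hypothetical):

```python
import json
import urllib.request

def couch_url(host="couchServer", port=5984, path=""):
    """Build a CouchDB endpoint URL on the compose network."""
    return "http://{}:{}/{}".format(host, port, path.lstrip("/"))

def get_json(url):
    """GET an endpoint and decode the JSON body, e.g. the server
    banner at / or the database list at /_all_dbs."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, get_json(couch_url()) returns the CouchDB welcome banner once the couchServer container is up.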
