Docker-compose Nginx multiple containers flask issue - python

I have a Flask app behind an Nginx reverse proxy, set up with docker-compose. I can get everything to work in a single container without problems, but I need to run the staging and production servers on the same machine, so I am trying to migrate my setup to multiple containers with a separate nginx-proxy container. The reverse proxy setup seems to be OK, but when I access the app through the proxy, Flask has some issue with the request. The docker-compose files and the server output are below.
NGINX-PROXY docker-compose.yml
version: "3.5"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy
networks:
  proxy:
Flask docker-compose.yml
version: '3.5'
services:
  # other services defined, not relevant for the issue
  data-api:
    environment:
      FLASK_ENV: development
      VIRTUAL_HOST: app.local
    build: ./dataAPI
    expose:
      - 5000
    ports:
      - 5000:5000
    volumes:
      - ./dataAPI:/dataAPI
    networks:
      - nginx_proxy
networks:
  nginx_proxy:
    external: true
I added a line in /etc/hosts for app.local.
I spin up nginx first, then the app. If I access the app directly at 0.0.0.0:5000/staging/data the request is served without problems, but if I go through the proxy at app.local/staging/data the Flask app throws a 404:
Flask log
data-api_1 | 172.20.0.1 - - [30/May/2019 14:13:29] "GET /staging/data/ HTTP/1.1" 200 -
data-api_1 | 172.20.0.2 - - [30/May/2019 14:13:31] "GET /staging/data/ HTTP/1.1" 404 -

It doesn't look like you put the containers on the same network. The nginx-proxy service is attached to a network named proxy, while the Flask container is attached to a network named nginx_proxy.
By the way, docker-compose is meant for composing applications that require multiple containers. Rather than using a separate docker-compose file for each container, this setup might be easier if you put both services in the same docker-compose file. Then you don't even need to set up a separate network, as Compose creates a default network for the services.
Another note: since you are using an nginx reverse proxy, you probably don't want to map the Flask port to the host machine.
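If it helps, here is a rough sketch of what a single combined docker-compose.yml could look like (service and image names are taken from the question; the rest of the nginx-proxy configuration is assumed and untested):
version: "3.5"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  data-api:
    build: ./dataAPI
    environment:
      FLASK_ENV: development
      VIRTUAL_HOST: app.local
    expose:
      - 5000
    volumes:
      - ./dataAPI:/dataAPI
# no networks section needed: Compose puts both services on the project's default network,
# so nginx-proxy can reach data-api by its service name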

Related

Django project on AWS not updating code after git pull

I am deploying a Django project on AWS. I am running Postgres, Redis, Nginx as well as my project on Docker there.
Everything is working fine, but when I change something on my local machine, push the changes to git and then pull them on the AWS instance, the code changes and the files are updated, but the changes do not show up on the website. Only the static files update automatically (I guess because of Nginx). Here is my docker-compose config:
version: '3.9'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=
    ports:
      - "5432:5432"
  web:
    image: image_name
    build: .
    restart: always
    command: gunicorn project.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - envs/.env.prod
    ports:
      - "8000:8000"
    volumes:
      - ./staticfiles/:/tmp/project/staticfiles
    depends_on:
      - postgres
      - redis
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./staticfiles:/home/app/web/staticfiles
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/logs:/var/log/nginx
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    depends_on:
      - web
Can you please tell me what to do?
I tried deleting everything from Docker and running compose up again, but nothing changed.
I looked all over here but I still don't understand... restarting the instance does not help either. I tried clearing the Redis cache, because I have template caching, and still nothing.
After updating the code on the EC2 instance, you need to build a new web docker image from that new code. If you are just restarting things then docker-compose is going to continue to pick up the last docker image you built.
You need to run the following sequence of commands (on the EC2 instance):
docker-compose build web
docker-compose up -d
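(If you prefer a single command, the two steps can also be combined; --build is a standard docker-compose up option:)
docker-compose up -d --build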
You are seeing the static files change immediately, without rebuilding the docker image, because you are mounting those files into the container via a Docker volume.
I found the issue... it was because I had template caching.
If I clear the cache and do what @MarkB suggested, everything updates.
I don't understand why this happens, since I had already tried flushing the whole Redis cache after changes, but I guess it solves my issue.
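For anyone hitting the same template-caching symptom, one way to drop everything Django has cached (a sketch only; it assumes the cached template fragments live in Django's configured default cache backend, Redis here) is from a Django shell inside the web container:
# e.g. docker-compose exec web python manage.py shell
from django.core.cache import cache

# empties the default cache backend, including cached template fragments
cache.clear()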

How to connect 2 docker components within the same docker-compose.yaml

I am new to the docker world and I have some issues regarding how to connect 2 docker services together.
I am using https://memgraph.com/ as my database, and when running it locally I start it like this:
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
I wrote my program, which connects to the database using mgclient, and when I run it locally everything works fine.
Now I am trying to put it inside a docker container and run it using docker-compose.yaml.
My docker-compose.yaml is:
version: "3.5"
services:
  memgraph:
    image: memgraph/memgraph-platform:2.1.0
    container_name: memgraph_container
    restart: unless-stopped
    ports:
      - "7687:7687"
      - "3000:3000"
  my_app:
    image: memgraph_docker
    container_name: something
    restart: unless-stopped
    command: python main.py
and when I try to run it with this command:
docker-compose up
I get an error about the connection to the server. Could anyone tell me what I am missing in the docker-compose.yaml?
How does your my_app connect to the database?
Are you using a connection string of the form localhost:7687 (or perhaps localhost:3000)? This works locally because you are publishing (--publish=7687:7687 --publish=3000:3000) the container's ports 7687 and 3000 to the host's ports (using the same port numbers).
NOTE You can remap ports when you docker run. For example, you could --publish=9999:7687 and then you would need to use port 9999 on your localhost to access the container's port 7687.
When you combine the 2 containers using Docker Compose, each service is reachable at a host name that matches the service name. In this case, your Memgraph database is called memgraph (matching the service name).
Using Docker Compose, localhost takes on a different meaning. From my_app, localhost is my_app. So, using localhost under Docker Compose, my_app would try connecting to itself, not the database.
Under Docker Compose, my_app (the name of your app) needs to refer to Memgraph by its service name (memgraph). The ports are unchanged: 7687 and 3000 (whichever is correct).
NOTE The ports statement in your Docker Compose config is possibly redundant unless you want to be able to access the database from your (local) host (which you may want for debugging). From a best-practice standpoint, once my_app is able to access the database correctly, you don't need to expose the database's ports to the host.
Update
It is good practice to externalize configuration from your app, so that you can configure it dynamically. An easy way to do this is to use environment variables.
For example:
main.py:
import os

import mgclient

conn = mgclient.connect(
    host=os.getenv("HOST"),
    port=int(os.getenv("PORT")),  # environment variables are strings; the port must be an int
)
Then, when you run under e.g. Docker, you need to set these values:
docker run ... --env=HOST="localhost" --env=PORT="7687" ...
And under Docker Compose, you can:
version: "3.5"
services:
  memgraph:
    image: memgraph/memgraph-platform:2.1.0
    container_name: memgraph_container
    restart: unless-stopped
  my_app:
    image: memgraph_docker
    container_name: something
    restart: unless-stopped
    command: python main.py
    environment:
      HOST: memgraph
      PORT: 7687

Can't connect python (Flask) with Mysql container using docker-compose [duplicate]

I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on a MySQL instance running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on or network aliases they can reach each other by host name. In your case the host name is the service name, but this can be overridden (see the docs).
Your script in the my_common_package container/service should then connect to host mysql on port 3306 (not localhost on port 3306).
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already has one.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to port 3306 on localhost.
Now you can connect to mysql using localhost:3306 outside of Docker. For example, you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
.. then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be fairly unpredictable, so the only viable way for container-to-container communication is using host names.
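For illustration, the connection in testsql.py would then look something like this (a sketch only, using the credentials from the compose file; the real test code may of course differ):
import pymysql

# connect to the database by its compose service name, not localhost
connection = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())

connection.close()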

sharing data over docker compose between two python servers

Sharing a buffer of doubles between two Python web servers (collector and calculator) over docker-compose.
I am trying to simply send a buffer or an array of integers from a Python server called collector to another one called calculator. The calculator server should perform a simple mathematical algorithm. This is all a trial. The collector and calculator Python scripts run in docker-compose in two containers and are designed to be connected to the same network.
collector python script
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    d = {"my_number": list(range(10))}
    return jsonify(d)
calculator python script
from flask import Flask
import requests

r = requests.get('https://collector:5000')

app = Flask(__name__)

@app.route('/')
def index():
    numbers_array = r.json()["my_numbers"]
    x = numbers_array[1] + numbers_array[2]
    return '{}'.format(x)
docker-compose.yml
services:
  collector:
    build: .
    env_file:
      - collector.env
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'
    networks:
      - my_network
  calculator:
    build: ./calculator
    depends_on:
      - collector
    env_file:
      - calculator.env
    ports:
      - '5001:5000'
    volumes:
      - './calculator:/app'
    networks:
      - my_network
networks:
  my_network:
    driver: bridge
The Dockerfile for both images is the same:
FROM python:2.7-slim
RUN mkdir /app
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
LABEL maintainer="Mahmoud KD"
VOLUME ["/app/public"]
CMD flask run --host=0.0.0.0 --port=5000
When I run docker-compose up --build, the first server, collector, is reachable from my host machine and works fine. The second server, calculator, fails to connect to collector via requests.get. I tried to ping collector from the calculator container while docker-compose was running the two containers, and the ping didn't work; it says "executable file not found in PATH: unknown". It seems that the connection between the two containers is not established, although inspecting my_network shows the two containers. Can anybody tell me what I am doing wrong? I am very grateful...
Use expose instead: one app on port 5000, the other on port 5001.
docker-compose:
app1:
  expose:
    - 5000
app2:
  expose:
    - 5001
Make sure you run the apps bound to 0.0.0.0 (e.g. flask run --host=0.0.0.0).
If you want to access app2 from the host machine, forward the ports:
app2:
  expose:
    - 5001
  ports:
    - 80:5001
Explanation:
expose only reveals ports inside the Docker world. So if you expose container A's port 8888, all other containers will be able to access that container at that port, but you will never reach it from the host machine.
Standard procedure is to forward only one port, namely 80, for security reasons, and leave the rest of the traffic unreachable from the outside world.
Also change the Dockerfile; you don't want hardcoded ports.
Edit:
Also get rid of this:
volumes:
  - '.:/app'
It may actually cause extra trouble.
Working example (it works, but the provided app contains errors):
docker-compose.yml
version: '3.5'
services:
  collector:
    container_name: collector
    build:
      context: collector/.
    ports:
      - '80:5555'
    expose:
      - '5555'
  calculator:
    container_name: calculator
    build:
      context: calculator/.
    depends_on:
      - collector
    expose:
      - 6666
    ports:
      - '81:6666'
    volumes:
      - './calculator:/app'
You can access both endpoints on ports 80 and 81. Communication between the two endpoints is hidden from us and happens on 5555 and 6666. If you stop publishing 81 (or 80), you can reach that endpoint only indirectly, through the other one acting as a 'proxy'.
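For completeness, here is a sketch of what the calculator handler could look like once the containers share a network: it fetches from the collector by its service name, over plain HTTP, at request time rather than at import time (it assumes the collector's JSON key is made consistent, here my_numbers):
from flask import Flask
import requests

app = Flask(__name__)

@app.route('/')
def index():
    # reach the collector by its compose service name on its internal port
    # (5000 in the question's setup, 5555 in the working example above)
    r = requests.get('http://collector:5000')
    numbers_array = r.json()["my_numbers"]
    return '{}'.format(numbers_array[1] + numbers_array[2])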

ngrok in docker cannot connect to Django development server

I am working on a local Django web server at http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding, ngrok http 8000, which works fine too.
Then I want to put ngrok, postgres, redis, maildev, etc. all in docker containers; all the others work fine, except ngrok.
ngrok fails to connect to localhost:8000.
I understand why, I suppose: ngrok is running on a separate 'server', and localhost on that server does not have a web server running.
I am wondering how I can fix it.
In my docker-compose file I tried network_mode: "host", but it is not working (macOS).
I tried to use host.docker.internal, but as I am a free plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing.
Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using Serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to all interfaces on port 8000 with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web docker service in my docker compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint.
