A Django app is running in one container, exposed to the host on port 8000, and a Flask app is running in another container, exposed to the host on port 8001. The Flask app's API endpoints need to communicate with the Django app's API endpoints.
Code
req = requests.get('http://192.168.43.66:8000/api/some')
Error
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.43.66', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc6f96942e0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
But if I change the request URL to some other API endpoint that is not running in a container, I get a response.
And every Django API endpoint works fine when I access it from an external source such as Postman or a browser.
Here is the content of the docker-compose files.
Django App docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
Flask app docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
      network: 'host'
    command: 'python3 main.py'
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_ROOT_HOST: '%'
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
Note: in the project the YAML is well formatted, so that's not the error.
When you publish a port from a container, the host can access it, but that doesn't mean other containers can access it too.
You either have to set the network mode to host (which won't work on Windows; this is only possible if you're running on Linux),
or you can run everything in a single docker-compose file and define your own network. Here is an example:
version: '3.4'
services:
  UI:
    container_name: django_app
    image: django_image
    networks:
      my_net:
        ipv4_address: 172.26.1.1
    ports:
      - "8080:8000"
  api:
    container_name: flask_app
    image: flask_image
    networks:
      my_net:
        ipv4_address: 172.26.1.2
networks:
  my_net:
    ipam:
      driver: default
      config:
        - subnet: 172.26.0.0/16
Now your Django app can access your Flask app at 172.26.1.2.
EDIT:
Now that you have added your docker-compose files too: you should not create the apps in two different docker-compose files (that's why you were getting the conflicting IP range error; you were building two networks with the same range).
Place everything in a single docker-compose file, give the services IP addresses, and you should be fine.
You could make your apps read the IP addresses of the services they rely on from environment variables and then pass these variables to your containers for more flexibility.
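For illustration, here is a minimal sketch of the original Flask-to-Django call using that pattern. DJANGO_API_HOST is a hypothetical variable name you would set under environment: in docker-compose.yml; the fallback address and internal port 8000 come from the example above, and /api/some is the endpoint from the question:

import os
import requests

# DJANGO_API_HOST is a hypothetical environment variable; the fallback
# is the static address assigned to the Django service above.
host = os.environ.get("DJANGO_API_HOST", "172.26.1.1")
req = requests.get(f"http://{host}:8000/api/some", timeout=5)
print(req.status_code, req.json())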
Related
I am trying to run Flask, Postgres, and Nginx services with the following docker-compose:
version: '3.6'
services:
  postgres:
    image: postgres:10.5
    container_name: postgres
    hostname: postgres
    user: postgres
    ports:
      - "5432:5432"
    networks:
      - db-tier
    environment:
      CUSTOM_CONFIG: /etc/postgres/postgresql.conf
    volumes:
      - ./postgres/sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - ./postgres/postgresql.conf:/etc/postgres/postgresql.conf
    command: postgres -c config_file=/etc/postgres/postgresql.conf
    restart: always
  app:
    image: python/app:0.0.1
    container_name: app
    hostname: app
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    networks:
      - app-tier
      - db-tier
    stdin_open: true
  nginx:
    image: nginx:1.22
    container_name: nginx-reverse-proxy-flask
    ports:
      - "8080:80"
    depends_on:
      - app
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - app-tier
networks:
  app-tier:
    driver: bridge
  db-tier:
    driver: bridge
This is what app.config["SQLALCHEMY_DATABASE_URI"] is set to: postgresql://denis:1234Five@postgres:5432/app
The error after docker-compose up is:
app | psycopg2.OperationalError: could not connect to server: Connection refused
app | Is the server running on host "postgres" (192.168.32.2) and accepting
app | TCP/IP connections on port 5432?
What could cause this type of error? I double-checked the container name of the postgres service, and it is running with the name postgres. Why doesn't the Flask app "see" it?
It could be an issue with no proper startup probe in your postgres container and docker-compose.yml.
For a reference on how to set one up, look at Docker - check if postgres is ready.
So after postgres starts as a container, Docker starts your app, but postgres inside the container is not ready yet, and you get your error.
This issue was resolved with pg_isready, adding the following lines to the postgres service:
healthcheck:
  test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"]
  interval: 10s
  timeout: 3s
  retries: 3
Taken as a solution from here: Safe ways to specify postgres parameters for healthchecks in docker compose
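As a complementary client-side guard (not part of the original answer, just a sketch), the app can also retry its own connection until Postgres is ready; the host, credentials, and database name below are taken from the compose file and URI above:

import time
import psycopg2

# Retry until Postgres accepts connections instead of failing once at startup.
for attempt in range(10):
    try:
        conn = psycopg2.connect(
            host="postgres", port=5432,
            user="denis", password="1234Five", dbname="app",
        )
        break
    except psycopg2.OperationalError:
        time.sleep(3)  # server not ready yet; wait and try again
else:
    raise RuntimeError("Postgres did not become ready in time")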
I am going to lose my mind because I've been stuck on this for two days and can't figure out why Python keeps using 127.0.0.1 instead of the host I have specified. My docker-compose snippet is:
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo_service:
    image: mongo
    #command: --default-authentication-plugin=mysql_native_password
    command: mongo
    restart: always
    ports:
      - '27017:27017'
  cinemas_api:
    container_name: cinemas_api
    hostname: cinemas_api
    build:
      context: ./cinemas_api
      dockerfile: Dockerfile
    links:
      - mongo_service
    ports:
      - 5000:5000
    expose:
      - '5000'
    depends_on:
      - mongo_service
  booking_api:
    container_name: booking_api
    hostname: booking_api
    build:
      context: ./booking_api
      dockerfile: Dockerfile
    ports:
      - 5050:5000
    depends_on:
      - mongo_service
networks:
  db_net:
    external: true
#docker-compose -f docker-compose.yaml up --build
Then in cinemas.py, I try to connect:
from pymongo import MongoClient

client = MongoClient('mongodb://mongo_service:27017/test')
However, I get the error:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 ::
caused by :: Connection refused :
My docker-compose file is as follows:
version: '3'
services:
  db:
    image: mongo:4.2
    container_name: mongo-db
    restart: always
    environment:
      MONGO_INITDB_DATABASE: VMcluster
    ports:
      - "16006:27017"
    volumes:
      - ./initdb.js:/docker-entrypoint-initdb.d/initdb.js
  web:
    build:
      context: .
      dockerfile: Dockerfile_Web
    command: python manage.py runserver 0.0.0.0:8000
    container_name: cluster-monitor-web
    volumes:
      - .:/vmCluster_service
    ports:
      - "9900:8000"
    depends_on:
      - db
  cronjobs:
    build:
      context: .
      dockerfile: Dockerfile_Cron
    command: ["cron", "-f"]
    container_name: cluster-monitor-cron
I want to implement a feature where the user can update the crontab from the Django web app. I'm done with the Django part, i.e. the Python backend code, but the Django web container is not able to access the crontab container.
How can I make the Django web container access the crontab container and update the crontab?
I'm using the python-crontab module in Django, which throws the error:
"[Errno 2] No such file or directory: '/usr/bin/crontab': '/usr/bin/crontab'"
Sharing a buffer of doubles between two Python webservers (collector and calculator) over docker-compose
I am trying to simply send a buffer or an array of integers from a Python server called collector to another one called calculator. The calculator server should perform a simple mathematical algorithm. This is all a trial. The collector and calculator Python scripts run in two docker-compose containers and are designed to be connected to the same network.
collector Python script
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    d = {"my_number": list(range(10))}
    return jsonify(d)
calculator Python script
import requests
from flask import Flask

r = requests.get('https://collector:5000')

app = Flask(__name__)

@app.route('/')
def index():
    numbers_array = r.json()["my_numbers"]
    x = numbers_array[1] + numbers_array[2]
    return '{}'.format(x)
docker-compose.yml
services:
  collector:
    build: .
    env_file:
      - collector.env
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'
    networks:
      - my_network
  calculator:
    build: ./calculator
    depends_on:
      - collector
    env_file:
      - calculator.env
    ports:
      - '5001:5000'
    volumes:
      - './calculator:/app'
    networks:
      - my_network
networks:
  my_network:
    driver: bridge
The Dockerfile for both images is the same:
FROM python:2.7-slim
RUN mkdir /app
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
LABEL maintainer="Mahmoud KD"
VOLUME ["/app/public"]
CMD flask run --host=0.0.0.0 --port=5000
When I run docker-compose up --build, the first server, collector, is reachable from my host machine and works fine. The second server, calculator, fails to connect to collector via requests.get. I tried to ping collector from the calculator container while docker-compose was running the two containers, and the ping didn't work; it says "executable file not found in $PATH: unknown". It seems the connection between the two containers is not established, although inspecting my_network shows both containers. Can anybody tell me what I am doing wrong? I am very grateful...
Use expose instead:
one app on port 5000,
the other on port 5001.
docker-compose:
app1:
  expose:
    - 5000
app2:
  expose:
    - 5001
Make sure you run the apps bound to 0.0.0.0.
If you want to access app2 from the host machine, forward ports:
app2:
  expose:
    - 5001
  ports:
    - 80:5001
Explanation:
expose only reveals ports inside the Docker world. So if you expose container A's port 8888, all other containers will be able to access that container at that port, but you will never reach it from the host machine.
The standard procedure is to forward only one port, i.e. 80, for security reasons; the rest of the traffic is unreachable from the outside world.
Also change the Dockerfile; you don't want hardcoded ports.
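For example, a minimal sketch of reading the port from an environment variable instead of hardcoding it (PORT is a hypothetical variable name you would set per service in the compose file):

import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'ok'

if __name__ == '__main__':
    # Bind to 0.0.0.0 so other containers can reach the server;
    # the port comes from the environment rather than being hardcoded.
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', '5000')))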
Edit:
Also get rid of this:
volumes:
  - '.:/app'
It may actually cause extra trouble.
Working example (it works, but the provided app contains errors):
docker-compose.yml
version: '3.5'
services:
  collector:
    container_name: collector
    build:
      context: collector/.
    ports:
      - '80:5555'
    expose:
      - '5555'
  calculator:
    container_name: calculator
    build:
      context: calculator/.
    depends_on:
      - collector
    expose:
      - 6666
    ports:
      - '81:6666'
    volumes:
      - './calculator:/app'
You can access both endpoints on ports 80 and 81. Communication between the two endpoints is hidden from us and happens on 5555 and 6666. If you close 81 (or 80), you can access the other endpoint only as a 'proxy'.
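For example, a minimal sketch of the calculator reaching the collector inside that network, addressing it by compose service name on the internal port from the file above:

import requests

# Containers on the same compose network resolve each other by service
# name; use the internal port (5555), not the published host port (80).
resp = requests.get('http://collector:5555/', timeout=5)
print(resp.json())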
I am building a Python app and I am trying to run it inside Docker. (My app works completely in a virtualenv with no Docker.)
So, my app consists of a config.py file and other files which make everything work, but they are unimportant in this case.
My config file looks like this:
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://gimtest:gimtest@localhost:5432/gimtest-users"
The app requires PostgreSQL to be running before it starts, so I tried using Docker Compose. My docker-compose.yml looks like this:
version: '2'
services:
  web:
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - postgres
    links:
      - postgres
      - elastic
    expose:
      - 5000
    command:
      "python run.py"
  postgres:
    image: postgres:9.5
    expose:
      - 5432
    environment:
      POSTGRES_USER: "gimtest"
      POSTGRES_PASSWORD: "gimtest"
      POSTGRES_DB: "gimtest-users"
  elastic:
    image: elasticsearch:2.3
Now the problem is that I can't connect to the PostgreSQL database. (At least that's what it looks like, because the Python app throws an error saying no PostgreSQL is running at the specified URL, set in config.py above.)
How can I fix the URL so that it will work?
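Following the pattern from the answers above (a sketch, not a confirmed fix): inside the web container, localhost refers to that container itself, so the URI presumably needs to point at the postgres service name from the compose file instead:

# localhost -> postgres (the compose service name), per the answers above
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://gimtest:gimtest@postgres:5432/gimtest-users"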