sharing a buffer of doubles between two python web servers (collector and calculator) over docker-compose
I am trying to simply send a buffer or an array of integers from a Python server called collector to another one called calculator. The calculator server should perform a simple mathematical algorithm. This is all a trial. The collector and calculator Python scripts are run via docker-compose in two containers and are designed to be connected to the same network.
collector python script
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    d = {"my_number": list(range(10))}
    return jsonify(d)
calculator python script
from flask import Flask
import requests

r = requests.get('https://collector:5000')

app = Flask(__name__)

@app.route('/')
def index():
    numbers_array = r.json()["my_numbers"]
    x = numbers_array[1] + numbers_array[2]
    return '{}'.format(x)
docker-compose.yml
services:
  collector:
    build: .
    env_file:
      - collector.env
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'
    networks:
      - my_network
  calculator:
    build: ./calculator
    depends_on:
      - collector
    env_file:
      - calculator.env
    ports:
      - '5001:5000'
    volumes:
      - './calculator:/app'
    networks:
      - my_network
networks:
  my_network:
    driver: bridge
The Dockerfile for both images is the same:
FROM python:2.7-slim
RUN mkdir /app
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
LABEL maintainer="Mahmoud KD"
VOLUME ["/app/public"]
CMD flask run --host=0.0.0.0 --port=5000
When I run docker-compose up --build, the first server, collector, is reachable from my host machine and works fine. The second server, calculator, fails to connect to collector via requests.get. I tried to ping collector from the calculator container while docker-compose was running the two containers, and the ping didn't work; it says "executable file not found in $PATH: unknown". It seems the connection between the two containers is not established, although inspecting my_network shows both containers. Can anybody tell me what I am doing wrong? I am very grateful...
Use expose instead
one app on port 5000
other on port 5001
docker-compose:
app1:
  expose:
    - 5000
app2:
  expose:
    - 5001
Make sure you run the apps bound to 0.0.0.0.
If you want to access app2 from the host machine, forward the ports:
app2:
  expose:
    - 5001
  ports:
    - 80:5001
Explanation:
expose only reveals ports inside the Docker world. So if you expose container A's port 8888, all other containers on the same network will be able to reach that container on that port, but you will never reach it from the host machine.
The standard procedure is to forward only one port, typically 80, for security reasons; the rest of the traffic stays unreachable from the outside world.
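For the calculator in the original question, that means calling the collector by its Compose service name over plain HTTP, and making the request inside the view rather than at import time (so the app still starts even if collector isn't ready yet). A minimal sketch, assuming the collector service is named collector and listens on container port 5000:

from flask import Flask
import requests

app = Flask(__name__)

@app.route('/')
def index():
    # the service name "collector" resolves via Docker's internal DNS;
    # use the container port (5000), not a host-mapped port
    r = requests.get('http://collector:5000/')
    numbers = r.json()["my_number"]
    return '{}'.format(numbers[1] + numbers[2])

if __name__ == '__main__':
    # bind to 0.0.0.0 so other containers can reach this app
    app.run(host='0.0.0.0', port=5000)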
Also change the Dockerfile: you don't want hardcoded ports.
Edit:
Also get rid of this
volumes:
  - '.:/app'
It may actually cause extra trouble.
Working example (it works, but the provided app contains errors):
docker-compose.yml
version: '3.5'
services:
  collector:
    container_name: collector
    build:
      context: collector/.
    ports:
      - '80:5555'
    expose:
      - '5555'
  calculator:
    container_name: calculator
    build:
      context: calculator/.
    depends_on:
      - collector
    expose:
      - 6666
    ports:
      - '81:6666'
    volumes:
      - './calculator:/app'
You can access both endpoints from the host on ports 80 and 81. Communication between the two containers is hidden from the host and happens on 5555 and 6666. If you close 81 (or 80), you can reach the other endpoint only through the remaining one, acting as a 'proxy'.
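To sanity-check both endpoints from the host machine (80 and 81 are the host-side ports from the compose file above; the exact response bodies depend on what each app returns):

import requests

# collector is published on host port 80, calculator on host port 81
print(requests.get("http://localhost:80/").text)
print(requests.get("http://localhost:81/").text)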
Related
I am new to the docker world and I have some issues regarding how to connect 2 docker services together.
I am using https://memgraph.com/ as my database, and when I run it locally I run it like this:
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
I wrote a program that connects to the database using mgclient, and when I run it locally everything works fine.
Now I am trying to put it inside a Docker container and run it using docker-compose.yaml.
My docker-compose.yaml is:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
ports:
- "7687:7687"
- "3000:3000"
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
and when I try to run it with this command:
docker-compose up
I am getting an error regarding the connection to the server. Could anyone tell me what I am missing regarding the docker-compose.yaml?
How does your my_app connect to the database?
Are you using a connection string of the form localhost:7687 (or perhaps localhost:3000)? This would work locally because you are publishing (--publish=7687:7687 --publish=3000:3000) the container's ports 7687 and 3000 to the host's ports (using the same numbers).
NOTE You can remap ports when you docker run. For example, you could --publish=9999:7687 and then you would need to use port 9999 on your localhost to access the container's port 7687.
When you combine the 2 containers using Docker Compose, each container is given a name that matches the service name. In this case, your Memgraph database is called memgraph (matching the service name).
Using Docker Compose, localhost takes on a different meaning. From my_app, localhost is my_app. So, using localhost under Docker Compose, my_app would try connecting to itself, not the database.
Under Docker Compose, for my_app (the name of your app), you need to refer to Memgraph by its service name (memgraph). The ports are unchanged: still 7687 and 3000 (whichever one is correct for your client).
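For example, assuming the mgclient driver mentioned in the question, the connection from inside my_app would look roughly like:

import mgclient

# "memgraph" is the Compose service name; 7687 is the Bolt port
conn = mgclient.connect(host="memgraph", port=7687)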
NOTE The ports statement in your Docker Compose config is possibly redundant unless you want to be able to access the database from your (local) host (which you may, for debugging). From a best practice standpoint, once my_app is able to access the database correctly, you don't need to expose the database's ports to the host.
Update
It is good practice to externalize configuration from your app so that you can configure it dynamically. An easy way to do this is to use environment variables.
For example:
main.py:
import os
from mgclient import connect  # assuming the mgclient driver from the question

conn = connect(
    host=os.getenv("HOST"),
    port=int(os.getenv("PORT")),  # the port must be an int
)
Then, when you run under e.g. Docker, you need to set these values:
docker run ... --env=HOST="localhost" --env=PORT="7687" ...
And under Docker Compose, you can:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
environment:
HOST: memgraph
PORT: 7687
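To verify the connection from inside my_app, a quick check could look like this (again assuming mgclient; the fallback values are illustrative):

import os
import mgclient

# HOST/PORT come from the environment block in docker-compose.yml above
conn = mgclient.connect(host=os.getenv("HOST", "memgraph"),
                        port=int(os.getenv("PORT", "7687")))

cursor = conn.cursor()
cursor.execute("RETURN 1;")  # trivial query just to prove the connection works
print(cursor.fetchall())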
I have created the following Docker containers to run zookeeper, kafka, ksql, and ksql-cli as well. When I am running the command docker-compose exec ksqldb-cli ksql http://ksqldb-server:8088 from the same machine where Docker is running, ksql-cli can access the ksql-server just fine.
However, I want to access the ksql-server from a different laptop on the same local network. How do I do that?
Here's the relevant docker-compose.yml file:
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    networks:
      - kafka_network
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    networks:
      - kafka_network
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 29093:29093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:29092,EXTERNAL_DIFFERENT_HOST://:29093,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092,EXTERNAL_DIFFERENT_HOST://192.168.178.218:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:latest
    container_name: ksqldb-server
    hostname: ksqldb-server
    networks:
      - kafka_network
    depends_on:
      - kafka
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "kafka:9092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:latest
    container_name: ksqldb-cli
    networks:
      - kafka_network
    depends_on:
      - kafka
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
networks:
  kafka_network:
    name: kafka_docker_sse
When I try accessing the ksql-server from a different laptop that is under the same local network, I get a connection error/connection refused. I tried accessing the ksqldb-server using the Python ksql-python package.
pip install ksql
from ksql import KSQLAPI
client = KSQLAPI('http://ksql-server:8088')
# OR
# client = KSQLAPI('http://0.0.0.0:8088')
# client = KSQLAPI('http://192.168.178.218:8088')
if __name__ == '__main__':
    print(client)
I also tried changing the KSQL_LISTENERS: "http://0.0.0.0:8088" under the ksqldb-server to KSQL_LISTENERS: "http://192.168.178.218:8088" but that doesn't work either.
Any hints would be really helpful, as I have been stuck on this for the last two days!
You'll need to keep KSQL_LISTENERS: "http://0.0.0.0:8088". This binds the server inside the container to all interfaces, so it accepts incoming traffic on port 8088.
Then, with ports in Compose, traffic to the host's port 8088 is forwarded to the container's port 8088.
So, for any external client, you need to connect to that host's LAN / external IP on port 8088. You may also need to explicitly allow inbound TCP traffic on that port in the server host's firewall.
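From the other laptop, that could look roughly like this with ksql-python, assuming 192.168.178.218 is the LAN IP of the machine running Docker (as used in the question's listener config):

from ksql import KSQLAPI

# connect to the Docker host's LAN IP and the published port 8088;
# container names like ksqldb-server only resolve inside the Compose network
client = KSQLAPI("http://192.168.178.218:8088")
print(client.ksql("show streams"))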
This question already has answers here:
Connect to Kafka running in Docker
I'm learning Kafka at the moment, and struggling to get my docker-compose configuration set up properly. What I'm trying to do is run a broker based on the wurstmeister/kafka image, and then another container that runs a simple Python script with kafka-python.
I've been following this tutorial mostly, but I suspect my handling of the ports is a bit of a mess. Here's my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "client-pusher:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
  app-python:
    build: .
    ports:
      - "9093:9093"
    expose:
      - "9093"
      - "9092"
    depends_on:
      - "kafka"
To tell the honest truth, I don't have a clue what I'm doing half the time when it comes to ports in Docker.
Using this Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED=1
# set the working directory in the container
WORKDIR /code
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY . .
# command to run on container start
CMD ["python","/code/consumer.py"]
I can make this script spit out some logs:
# consumer.py
import json
from datetime import date
from typing import Optional
import time
import logging

from kafka import KafkaConsumer
from pydantic import BaseModel


class Client(BaseModel):
    first_name: str
    email: str
    group_id: Optional[int] = None
    date: date


# consumer = KafkaConsumer(
#     'client-pusher',
#     bootstrap_servers=['kafka:9093'],
#     auto_offset_reset='earliest',
#     enable_auto_commit=True,
#     group_id='my-group-id',
#     value_deserializer=lambda x: json.loads(x.decode('utf-8'))
# )

count = 0

while True:
    # msg_pack = consumer.poll(timeout_ms=500)
    logging.warning(f"Hi there {count}")
    time.sleep(2)
    count += 1

    # for tp, messages in msg_pack.items():
    #     for message in messages:
    #         client = Client(**message.value)
    #         print(client)
but when the commented code is uncommented, the connection fails. The
bootstrap_servers=['kafka:9093'],
line results in
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
I feel like there's some magic combination of exposing or configuring the ports properly in the docker-compose file and using them properly in the python script, and/or configuring the service names properly. But I'm lost. Can anyone help?
TL;DR: remove all the expose entries and change app-python's ports to something that isn't already taken. In your code, instead of kafka:9093, use localhost:9092.
Two things:
i. For app-python, you're publishing your machine's port 9093 (localhost:9093) to the container's port 9093 (app-python:9093). Two containers can't publish the same host port, so I recommend keeping your Kafka container's port config a comfortable distance from your app's port (maybe 9092/9093 for Kafka + 8080 for your app).
ii. Docker Compose puts all the containers listed in the file on the same network. So there are two ways to go about it. If you want to run Kafka in Docker and your Python code in your IDE/terminal, hardcode localhost:9092 in your Python script, i.e. your code connects to Kafka through its external port mapping (OUTSIDE).
If you run it the way you're running it now (with both containers in the same Docker network), I suggest passing an environment variable (or a property you can pass in and reference in the code) to app-python with the bootstrap server <container name>:<INSIDE port>, i.e. kafka:9093.
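A sketch of how consumer.py could pick the bootstrap server from an environment variable (the variable name KAFKA_BOOTSTRAP_SERVERS is illustrative); under Compose you would set it to kafka:9093 in app-python's environment, and without it the script falls back to the OUTSIDE listener on localhost:

import os

from kafka import KafkaConsumer

# kafka:9093 (INSIDE listener) when running inside the Compose network,
# localhost:9092 (OUTSIDE listener) when running directly on the host
bootstrap = os.environ.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")

consumer = KafkaConsumer(
    "client-pusher",
    bootstrap_servers=[bootstrap],
    auto_offset_reset="earliest",
)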
Here's an example that I have with Java, where I could run the app (rest) inside or outside docker-compose. If outside, I referenced localhost:9092, but if inside, I referenced it like this:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    container_name: kafka_broker_1
    image: wurstmeister/kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOSTNAME: kafka
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  rest:
    image: rest:latest
    container_name: rest
    build:
      context: rest
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - SPRING_KAFKA_BOOTSTRAP-SERVERS=kafka:29092
      - SERVER_PORT=8080
    ports:
      - "8080:8080"
    depends_on:
      - kafka
AFAIK expose is only informative (see here). It's all about the ports you define with ports.
Try to connect to the port you defined in ports (for inside and outside), i.e. in your case
bootstrap_servers=['kafka:9092']
And remove all occurrences of connecting to the ports defined as expose, e.g. for KAFKA_LISTENERS.
In one container there is a Django app running, exposed to host port 8000, and in another container a Flask app is running, exposed to host port 8001. The Flask app's API endpoint needs to communicate with the Django app's API endpoint.
Code
req = requests.get('http://192.168.43.66:8000/api/some')
Error
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.43.66', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc6f96942e0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
But if I change the request URL to some other API endpoint that is not running in a container, I get a response.
And every Django API endpoint works fine when accessed from an external source like Postman or a browser.
Here is the docker-compose file content
Django App docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
flask app docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
      network: 'host'
    command: 'python3 main.py'
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_ROOT_HOST: '%'
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
Note: in the project the YAML is properly formatted, so that's not the error.
When you expose a port from a container, the host can access it, but that doesn't mean other containers can access it too.
You either have to set the network mode to host (which won't work on Windows; this is only possible if you're running them on Linux),
or you can run them in a single docker-compose and define your own network. Here is an example:
version: '3.4'
services:
  UI:
    container_name: django_app
    image: django_image
    networks:
      my_net:
        ipv4_address: 172.26.1.1
    ports:
      - "8080:8000"
  api:
    container_name: flask_app
    image: flask_image
    networks:
      my_net:
        ipv4_address: 172.26.1.2
networks:
  my_net:
    ipam:
      driver: default
      config:
        - subnet: 172.26.0.0/16
Now your Django app can access your Flask app at 172.26.1.2.
EDIT:
Now that you have added your docker-compose files too: you should not create the apps in two different docker-compose files (that's why you were getting the conflicting IP range error; you were building two networks with the same range).
Place everything in a single docker-compose file, give the services IP addresses, and you should be fine.
You could make your apps read the IP addresses of the services they depend on from environment variables, and then pass those env variables to your containers, for more flexibility.
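For the original Flask-to-Django call, a minimal sketch of that approach (the variable name DJANGO_API_URL is illustrative; 172.26.1.1:8000 is the Django container's address and container port from the example above):

import os

import requests

# falls back to the Django container's address in the example network
django_api = os.environ.get("DJANGO_API_URL", "http://172.26.1.1:8000")

req = requests.get(f"{django_api}/api/some")
print(req.status_code, req.text)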
I am running a Python application as a Docker container, and in my Python application I am using Python's logging module to log execution steps with logger.info, logger.debug and logger.error. The problem is that the log file is only persistent within the Docker container: if the container goes away, the log file is lost, and every time I want to view the log file I have to manually copy it from the container to the local system. What I want is for whatever is written to the container's log file to be persistent on the local system, either by writing to a local system log file or by auto-mounting the container's log file to the local system.
A few things about my host machine:
I run multiple docker containers on the machine.
My sample docker-core file is:
FROM server-base-v1
ADD . /app
WORKDIR /app
ENV PATH /app:$PATH
CMD ["python","-u","app.py"]
My sample docker-base file is:
FROM python:3
ADD ./setup /app/setup
WORKDIR /app
RUN pip install -r setup/requirements.txt
A sample of my docker-compose.yml file is:
version: "2"
networks:
  server-net:
services:
  mongo:
    container_name: mongodb
    image: mongodb
    hostname: mongodb
    networks:
      - server-net
    volumes:
      - /dockerdata/mongodb:/data/db
    ports:
      - "27017:27017"
      - "28017:28017"
  server-core-v1:
    container_name: server-core-v1
    image: server-core-v1:latest
    depends_on:
      - mongo
    networks:
      - server-net
    ports:
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
The yml sample above is just part of my actual yml file. I have multiple server-core-v1 containers (with different names) running in parallel, each with its own log file.
I would also appreciate any better strategies for doing logging in Python with Docker and making it persistent. I read a few articles that mentioned using sys.stderr.write() and sys.stdout.write(), but I'm not sure how to use that, especially with multiple containers running and logging.
Volumes are what you need.
You can create volumes to bind an internal container folder to a local system folder, so that you can store your logs in a logs folder and map it as a volume to any folder on your local system.
You can specify a volume in the docker-compose.yml file for each service you are creating. See the docs.
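On the Python side, that just means pointing the logger at a file inside the mounted folder. A minimal sketch, assuming /app/logs is the container path you bind-mount to a host directory:

import logging

# /app/logs is assumed to be bind-mounted to a host directory in docker-compose.yml,
# e.g.  - ./logs/server-core-v1:/app/logs
logging.basicConfig(
    filename="/app/logs/app.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger(__name__)
logger.info("execution step logged to a file that survives the container")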
Bind-mounts are what you need.
Bind mounts are accessible from your host file system; it is very similar to shared folders in a VM architecture.
You can simply achieve that by mounting the volume directly to a path on your PC.
In your case:
version: "2"
networks:
server-net:
services:
mongo:
container_name: mongodb
image: mongodb
hostname: mongodb
networks:
- server-net
volumes:
- /dockerdata/mongodb:/data/db
ports:
- "27017:27017"
- "28017:28017"
server-core-v1:
container_name: server-core-v1
image: server-core-v1:latest
depends_on:
- mongo
networks:
- server-net
ports:
- "8000:8000"
volumes:
- ./yours/example/host/path:/etc/localtime:ro
Just replace ./yours/example/host/path with the target directory on your host.
In this scenario, I believe the logger is on the server side.
If you are working on Windows, remember to bind inside the current user's directory!
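Regarding the sys.stdout/sys.stderr strategy mentioned in the question: a common pattern is to log to stdout (so docker logs and the logging driver capture it per container) and, if you still want files, add a file handler pointing into the bind-mounted folder. A rough sketch, again assuming /app/logs is the mounted path:

import logging
import sys

logger = logging.getLogger("server-core-v1")
logger.setLevel(logging.DEBUG)

# stdout handler: captured by `docker logs <container>` and any logging driver
stream_handler = logging.StreamHandler(sys.stdout)
# file handler: /app/logs is assumed to be bind-mounted to the host
file_handler = logging.FileHandler("/app/logs/server-core-v1.log")

fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
for handler in (stream_handler, file_handler):
    handler.setFormatter(fmt)
    logger.addHandler(handler)

logger.info("visible via docker logs and persisted on the host")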