I built a docker-compose setup with a simple Python 3.6 container exposing port 5000. This container runs a server-side Python script that waits for clients to connect. Here are the files:
Dockerfile:
FROM python:3.6-alpine
WORKDIR /app
CMD ["python","serveur.py"]
Docker-compose:
version: '2'
services:
  serveur:
    build:
      context: .
      dockerfile: Serveur
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - "./app:/app"
serveur.py:
#!/usr/bin/python3
import socket
import threading
print("debut du programme")
# use a name other than "socket" so the module isn't shadowed
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = "0.0.0.0"
port = 5000
sock.bind((host, port))
sock.listen(5)
for i in range(2):
    print("ready to connect")
    client, addr = sock.accept()  # accept() blocks until a client connects
    print("Client connected")
sock.close()
Here is the issue:
If I run the docker-compose, my client can't connect to the server; the code seems to block. Moreover, none of the prints show up in the Docker logs. If I take the socket.accept() out of the loop, one client can connect and I see all the prints in the logs. If I take the loop out of the code and just line up multiple socket.accept() calls, the code blocks again.
I know the issue is with my Docker settings because if I run this script out of Docker, the serveur.py works perfectly.
Thanks guys for your time.
It turns out that the output is buffered by Python until the program stops (Python buffers stdout when it isn't attached to a terminal), so the prints never reached the Docker logs because the program, well, never stops. The solution is to put this env variable in the docker-compose file:
version: '2'
services:
  serveur:
    build:
      context: .
      dockerfile: Serveur
    environment:
      - "PYTHONUNBUFFERED=1"
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - "./app:/app"
So now I can see the prints that confirm the connection.
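As an aside, two equivalent fixes that don't touch the compose file: run the interpreter unbuffered with "python -u serveur.py", or flush each print explicitly. A minimal sketch of the latter, applied to the accept loop above:
for i in range(2):
    # flush=True pushes the line to stdout immediately,
    # so it appears in the Docker logs before accept() blocks
    print("ready to connect", flush=True)
    client, addr = sock.accept()
    print("Client connected", flush=True)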
Related
I created a docker-compose file and ran a container with a static IP address (10.5.0.5). The container runs a Python script that uses socket to listen on the given IP address and port. This runs successfully without any errors. Now I want to connect to that IP from another Python script, using socket, outside of that Docker setup.
This Python script is in another location on my computer.
Can someone help me achieve that goal?
There is a Dockerfile inside the 'files' folder that runs the Python file.
When connecting to the 127.0.0.1:4000 address everything works fine. I want to be able to connect via the 10.5.0.5:4000 address instead.
docker-compose file:
version: '3'
services:
  node1:
    container_name: node1
    build: ./files # building the Dockerfile and running the python script
    volumes:
      - ./files:/usr/src/app
    ports:
      - 4000:4000
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
node1 python code
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.bind(('0.0.0.0', 4000))
    sock.listen(1)
    csock, addr = sock.accept()
    csock.sendall(b'hey')
If you want to connect to your Docker container from outside (at least from another computer), you need to connect using the IP of the computer running Docker. To find that address, use "ip a" or "ifconfig" on Linux, or "ipconfig" on Windows.
Also, to make your script able to listen for outside requests you can set:
sock.bind(('0.0.0.0', 4000))
This makes your script listen for requests on all interfaces.
To connect to node1 from another container on the same compose network, you can use "node1:4000".
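For the outside Python script, a minimal client sketch (HOST is an assumption: 127.0.0.1 when the script runs on the Docker host itself, or the host machine's IP when run from another computer):
import socket

HOST = "127.0.0.1"  # assumption: script runs on the Docker host; use the host's IP from elsewhere
PORT = 4000

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect((HOST, PORT))
    data = sock.recv(1024)
    print(data)  # expects b'hey' from node1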
I have a Mosquitto MQTT broker running in Docker, started from a docker-compose file. Now I am trying to connect to the broker. It was working locally, but when I try to connect from another Docker container it does not work, even though I have changed the host/broker address from localhost to the compose service name. How can I make it work?
Here What I have tried
Docker compose ( edited )
version: '3.5'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdatapg:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto
    networks:
      - postgres
    ports:
      - "1883:1883"
    volumes:
      - ./conf:/mosquitto/conf
      - ./data:/mosquitto/data
      - ./log:/mosquitto/log
  app:
    restart: always
    build: .
    depends_on:
      - db
    networks:
      - postgres
networks:
  postgres:
    driver: bridge
volumes:
  pgdatapg:
and part of my Python:
from paho.mqtt import client as mqtt_client  # assumed import; only part of the script was posted

broker = "mosquitto"
port = 1883
topic = "py/mqtt/test"
client_id = "py-mqtt-test"  # assumed; client_id was defined elsewhere in the original script

def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client
Here is the conf file
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883
## Authentication ##
allow_anonymous false
password_file /mosquitto/conf/mosquitto.conf
I am getting the following error
| ConnectionRefusedError: [Errno 111] Connection refused
When running with docker compose, the containers started as services are by default placed on a dedicated Docker bridge network named after the project (which defaults to the name of the directory the docker-compose.yml file is held in), e.g. a network called foo_default.
https://docs.docker.com/compose/networking/
Services are only accessible from other containers that are connected to the same network (and from the host via whatever ports are published).
So if you only have mosquitto in the docker-compose.yml, then no other containers will be able to connect to it. If you include the container that the Python code runs in as a service in the compose file, then it will be able to connect.
You can also change the networks containers connect to in the compose file.
https://docs.docker.com/compose/networking/#specify-custom-networks
EDIT:
You have forced the mosquitto service to use the host network (network_mode: host), so it's not on the same postgres network as the app. Containers can be on multiple networks, but mosquitto should not be bound to the host network for all this to work.
EDIT2:
You are also not setting a username/password in app even though you require authentication in mosquitto.conf, and you are pointing password_file at the config file itself, which just won't work. I suggest you remove the last line of the mosquitto.conf file and set allow_anonymous true.
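If you'd rather keep authentication enabled instead, a minimal paho-mqtt sketch (the client id and credentials are hypothetical; the user would need to be created with mosquitto_passwd and referenced via a proper password_file):
from paho.mqtt import client as mqtt_client

client = mqtt_client.Client("py-mqtt-test")           # hypothetical client id
client.username_pw_set("mqtt_user", "mqtt_password")  # hypothetical credentials from mosquitto_passwd
client.connect("mosquitto", 1883)                     # the compose service name resolves on the shared network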
P.S. I suspect the mosquitto container currently isn't actually starting, due to that last line of the config file.
This question already has answers here: Connect to Kafka running in Docker (5 answers). Closed 1 year ago.
I'm learning Kafka at the moment, and struggling to get my docker-compose configuration set up properly. What I'm trying to do is run a broker based on the wurstmeister/kafka image, and then another container that runs a simple Python script with kafka-python.
I've been following this tutorial mostly, but I suspect my handling of the ports is a bit of a mess. Here's my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "client-pusher:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
  app-python:
    build: .
    ports:
      - "9093:9093"
    expose:
      - "9093"
      - "9092"
    depends_on:
      - "kafka"
To tell the honest truth, I don't have a clue what I'm doing half the time when it comes to ports in Docker.
Using this Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
# set the working directory in the container
WORKDIR /code
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY . .
# command to run on container start
CMD ["python","/code/consumer.py"]
I can make this script spit out some logs:
# consumer.py
import json
from datetime import date
from typing import Optional
import time
import logging

from kafka import KafkaConsumer
from pydantic import BaseModel


class Client(BaseModel):
    first_name: str
    email: str
    group_id: Optional[int] = None
    date: date


# consumer = KafkaConsumer(
#     'client-pusher',
#     bootstrap_servers=['kafka:9093'],
#     auto_offset_reset='earliest',
#     enable_auto_commit=True,
#     group_id='my-group-id',
#     value_deserializer=lambda x: json.loads(x.decode('utf-8'))
# )

count = 0
while True:
    # msg_pack = consumer.poll(timeout_ms=500)
    logging.warning(f"Hi there {count}")
    time.sleep(2)
    count += 1

    # for tp, messages in msg_pack.items():
    #     for message in messages:
    #         client = Client(**message.value)
    #         print(client)
but when the commented code is uncommented, the connection fails. The
bootstrap_servers=['kafka:9093'],
line results in
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
I feel like there's some magic combination of exposing or configuring the ports properly in the docker-compose file and using them properly in the python script, and/or configuring the service names properly. But I'm lost. Can anyone help?
TL;DR: remove all the expose entries and change app-python's ports to something that isn't already taken. In your code, instead of kafka:9093, use localhost:9092.
Two things:
i. For app-python, you're publishing your machine's port 9093 (localhost:9093) to the container's port 9093 (app-python:9093). Two containers can't publish the same host port, so I recommend keeping your Kafka container's port config a comfortable distance from your app's port (maybe 9092/9093 for Kafka + 8080 for your app).
ii. Docker compose puts all the containers listed in the file on the same network, so there are two ways to go about it. If you want to run Kafka in Docker and your Python code in your IDE/terminal, hardcode localhost:9092 in your Python script, i.e. your code connects to Kafka through its external port mapping (OUTSIDE).
If you run it like you're running it now (with both containers in the same Docker network), I suggest passing an environment variable (or a property you can pass in and reference in the code) to app-python with the bootstrap server <container name>:<INSIDE port>, i.e. kafka:9093.
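A minimal sketch of that approach in the consumer, assuming a KAFKA_BOOTSTRAP_SERVERS environment variable set on the app-python service (the variable name is an assumption, not something from the original compose file):
import os

from kafka import KafkaConsumer

# kafka:9093 inside the compose network, localhost:9092 outside of it;
# KAFKA_BOOTSTRAP_SERVERS is a hypothetical variable set per environment.
bootstrap = os.environ.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")

consumer = KafkaConsumer(
    'client-pusher',
    bootstrap_servers=[bootstrap],
    auto_offset_reset='earliest',
)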
Here's an example that I have with Java, where I could run the app (rest) inside or outside docker-compose. If outside, I referenced localhost:9092, but if inside, I referenced it like this:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    container_name: kafka_broker_1
    image: wurstmeister/kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOSTNAME: kafka
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  rest:
    image: rest:latest
    container_name: rest
    build:
      context: rest
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - SPRING_KAFKA_BOOTSTRAP-SERVERS=kafka:29092
      - SERVER_PORT=8080
    ports:
      - "8080:8080"
    depends_on:
      - kafka
AFAIK expose is only informative (see here). It's all about the ports you define with ports.
Try to connect to the port you defined in ports (for inside and outside), i.e. in your case
bootstrap_servers=['kafka:9092']
And remove all occurrences of connecting to the ports defined under expose, e.g. in KAFKA_LISTENERS.
I am working on a local Django webserver at http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding, ngrok http 8000, which works fine too.
Then I want to put ngrok, postgres, redis, maildev, etc. all in Docker containers. All the others work fine, except ngrok.
ngrok fails to connect to localhost:8000.
I understand why, I suppose: ngrok is running on a separate 'server' and the localhost on that server does not have a web server running.
I am wondering how I can fix it.
I tried network_mode: "host" in my docker-compose file; it is not working (macOS).
I tried to use host.docker.internal, but as a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing.
Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to port 8000 with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web docker service in my docker compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their docker entrypoint
I'm trying to learn how to use Docker and am having some trouble. I'm using a docker-compose.yaml file for running a Python script that connects to a MySQL container, and I'm trying to use ddtrace to send traces to Datadog. I'm using the following image from this GitHub page from Datadog:
ddagent:
  image: datadog/docker-dd-agent
  environment:
    - DD_BIND_HOST=0.0.0.0
    - DD_API_KEY=invalid_key_but_this_is_fine
  ports:
    - "127.0.0.1:8126:8126"
And my docker-compose.yaml looks like
version: "3"
services:
ddtrace-test:
build: .
volumes:
- ".:/app"
links:
- ddagent
ddagent:
image: datadog/docker-dd-agent
environment:
- DD_BIND_HOST=0.0.0.0
- DD_API_KEY=<my key>
ports:
- "127.0.0.1:8126:8126"
So then I'm running the command docker-compose run --rm ddtrace-test python test.py, where test.py looks like
from ddtrace import tracer

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()
And when I run the command, I'm returned with
Starting service---reprocess_ddagent_1 ... done
foo
cannot send spans to localhost:8126: [Errno 99] Cannot assign requested address
I'm not sure what this error means. When I use my key and run it locally instead of in a Docker image, it works fine. What could be going wrong here?
Containers are about isolation, so inside a container "localhost" means that container itself; ddtrace-test therefore can't find ddagent at its own localhost. You have two ways to fix that:
Put network_mode: host on ddtrace-test so it binds to the host's network interface, skipping network isolation.
Change ddtrace-test to use the host "ddagent" instead of localhost, since in docker-compose services can be accessed by their service names.
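A minimal sketch of the second option, assuming a ddtrace version that exposes tracer.configure (newer releases read the agent address from the DD_AGENT_HOST environment variable instead):
from ddtrace import tracer

# Point the tracer at the ddagent compose service instead of localhost.
# tracer.configure(hostname=..., port=...) is tied to older ddtrace releases;
# newer ones pick the address up from DD_AGENT_HOST / DD_TRACE_AGENT_PORT.
tracer.configure(hostname='ddagent', port=8126)

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()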