I have a Mosquitto MQTT broker running in Docker, started from a docker-compose file. Now I am trying to connect to the broker; it was working locally. When I try to connect from another Docker container it does not work, even though I have changed the host/broker address from localhost to the compose service name. How can I make it work?
Here is what I have tried.
Docker compose file (edited):
version: '3.5'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdatapg:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto
    networks:
      - postgres
    ports:
      - "1883:1883"
    volumes:
      - ./conf:/mosquitto/conf
      - ./data:/mosquitto/data
      - ./log:/mosquitto/log
  app:
    restart: always
    build: .
    depends_on:
      - db
    networks:
      - postgres
networks:
  postgres:
    driver: bridge
volumes:
  pgdatapg:
and part of my Python code:
broker = "mosquitto"
port = 1883
topic = "py/mqtt/test"

def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client
Here is the conf file
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883
## Authentication ##
allow_anonymous false
password_file /mosquitto/conf/mosquitto.conf
I am getting the following error
| ConnectionRefusedError: [Errno 111] Connection refused
When running with docker compose, the containers started as services are by default placed on a dedicated Docker bridge network named after the project (which defaults to the name of the directory containing the docker-compose.yml file), e.g. a network called foo_default.
https://docs.docker.com/compose/networking/
Services are only accessible from other containers connected to the same network (and from the host via whatever ports are published).
So if you only have mosquitto in the docker-compose.yml, then no other containers will be able to connect to it. If you include the container the Python code runs in as a service in the compose file, then it will be able to connect.
You can also change the networks containers connect to in the compose file.
https://docs.docker.com/compose/networking/#specify-custom-networks
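For example, a minimal sketch of putting two services on the same user-defined network (the service and network names here are illustrative):

services:
  mosquitto:
    image: eclipse-mosquitto
    networks:
      - appnet
  app:
    build: .
    networks:
      - appnet
networks:
  appnet:
    driver: bridge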
EDIT:
You have forced the mosquitto service to use network_mode: host, so it is not on the same postgres network as the app. Containers can be on multiple networks, but mosquitto must not be bound to the host network for this to work.
EDIT2:
You are also not setting a username/password in app even though you require authentication in mosquitto.conf, and you are pointing password_file at the config file itself, which just won't work. I suggest you remove the last line of mosquitto.conf and set allow_anonymous true.
P.s. I suspect that the mosquitto container currently isn't actually starting due to the last line of the config file.
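Alternatively, if you want to keep authentication on, the client must supply credentials and the password file must be a separate file generated with mosquitto_passwd (not the config file itself). A minimal paho-mqtt sketch, with placeholder credentials:

from paho.mqtt import client as mqtt_client

broker = "mosquitto"  # the compose service name
port = 1883

client = mqtt_client.Client("py-mqtt-test")
# Placeholder credentials: they must match an entry created with
# mosquitto_passwd in the file referenced by password_file.
client.username_pw_set("mqtt_user", "mqtt_password")
client.connect(broker, port)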
Related
I have created the following Docker containers to run zookeeper, kafka, ksql, and ksql-cli. When I run the command docker-compose exec ksqldb-cli ksql http://ksqldb-server:8088 from the machine where Docker is running, ksql-cli can access the ksql-server just fine.
However, I want to access the ksql-server from outside that machine, using a different laptop on the same local network. How do I do that?
Here's the relevant docker-compose.yml file:
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    networks:
      - kafka_network
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    networks:
      - kafka_network
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 29093:29093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:29092,EXTERNAL_DIFFERENT_HOST://:29093,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092,EXTERNAL_DIFFERENT_HOST://192.168.178.218:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:latest
    container_name: ksqldb-server
    hostname: ksqldb-server
    networks:
      - kafka_network
    depends_on:
      - kafka
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "kafka:9092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:latest
    container_name: ksqldb-cli
    networks:
      - kafka_network
    depends_on:
      - kafka
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
networks:
  kafka_network:
    name: kafka_docker_sse
When I try accessing the ksql-server from a different laptop on the same local network, I get a connection error (connection refused). I tried accessing the ksqldb-server using the Python ksql-python package.
pip install ksql
from ksql import KSQLAPI

client = KSQLAPI('http://ksql-server:8088')
# OR
# client = KSQLAPI('http://0.0.0.0:8088')
# client = KSQLAPI('http://192.168.178.218:8088')

if __name__ == '__main__':
    print(client)
I also tried changing the KSQL_LISTENERS: "http://0.0.0.0:8088" under the ksqldb-server to KSQL_LISTENERS: "http://192.168.178.218:8088" but that doesn't work either.
Any hints would be really helpful as I am currently stuck here for the last two days!
You'll need to keep KSQL_LISTENERS: "http://0.0.0.0:8088". That binds the listener to all network interfaces inside the container, so it accepts all incoming traffic on port 8088.
Then, with ports in Compose, traffic to port 8088 on the host is forwarded to port 8088 in the container.
So, for any external client, you need to connect to that host's LAN / external IP on port 8088. You may also need to explicitly allow inbound TCP traffic on that port in the server host's firewall.
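For example, from the other laptop the client should target the Docker host's LAN IP (using 192.168.178.218 from the question; substitute your own host address). A sketch with the same ksql-python package:

from ksql import KSQLAPI

# Use the Docker host's LAN IP. Service names such as "ksqldb-server"
# only resolve inside the compose network, not on other machines.
client = KSQLAPI('http://192.168.178.218:8088')

if __name__ == '__main__':
    print(client)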
I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on a MySQL server running locally, but I want them to depend on a MySQL instance running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on, or network aliases they can also reach each other by hostname. In your case the hostname is the service name, but this can be overridden. (see: docs)
Your script in the my_common_package container/service should then connect to host mysql on port 3306, according to your setup (not localhost on port 3306).
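For example, a minimal connection sketch with pymysql, reusing the credentials from your compose file:

import pymysql

# "mysql" is the compose service name, which other containers on the
# same compose network can resolve as a hostname.
conn = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)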
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to localhost on port 3306.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
.. then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be fairly random, so the only reliable way for container-to-container communication is to use hostnames.
I have set up SSH port forwarding which listens on port 2000 on my local machine. So the service, a Neo4j server, is running at localhost:2000. Locally, I am able to connect to Neo4j using Neo4j Desktop by pointing it at bolt://localhost:2000 and can look at the data.
But I am not able to connect with the host bolt://localhost:2000 from inside a Docker container.
I looked at the answers here
From inside of a Docker container, how do I connect to the localhost of the machine?
I added extra hosts in docker-compose.yml
flask_service:
  build:
    context: ./project
    dockerfile: my_dockerfile
  container_name: flask_container
  stdin_open: true
  tty: true
  ports:
    - 5000:5000
  extra_hosts:
    - "myhost:175.1.344.136"
175.1.344.136 being my host IP
and I have used both bolt://175.1.344.136:2000 and bolt://myhost:2000 inside the container, but neither connects. Also, I want to know which is the right way: bolt://175.1.344.136:2000 or bolt://myhost:2000?
I get an error
2021-05-25T10:53:55+0000.060 [INFO] neo4j_proxy.__init__:74 (8:Thread-11) - NEO4J endpoint: bolt://myhost:2000
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 831, in _connect
s.connect(resolved_address)
ConnectionRefusedError: [Errno 111] Connection refused
I am using macOS. Please help me resolve this.
Thanks in advance!
You need to use the container service name neo4j and the default port 7687 when connecting from inside the same Docker host:
bolt://neo4j:7687
version: "3.8"
services:
neo4j:
image: neo4j:3.4.4-enterprise
ports:
- 7474:7474
- 7687:7687
my_service:
build:
context: .
environment:
DATABASE_URL: "bolt://neo4j:7687"
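From my_service, connecting would then look roughly like this with the official neo4j Python driver (the credentials here are placeholder assumptions):

from neo4j import GraphDatabase

# "neo4j" resolves to the neo4j service on the shared compose network.
driver = GraphDatabase.driver("bolt://neo4j:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run("RETURN 1 AS ok")
    print(result.single()["ok"])

driver.close()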
I would like to have a Python Flask application that runs with a PostgreSQL database (psycopg2). So I made this docker-compose file:
version: "3"
services:
web:
depends_on:
- database
container_name: web
build:
context: "."
dockerfile: "docker/Dockerfile.web"
ports:
- 5000:5000
volumes:
- database:/var/run/postgresql
database:
container_name: database
environment:
POSTGRES_PASSWORD: "password"
POSTGRES_USER: "user"
POSTGRES_DB: "products"
image: postgres
expose:
- 5432
volumes:
- database:/var/run/postgresql
volumes:
database:
In my app.py I try to connect to postgres like this:
conn = psycopg2.connect(database="products", user="user", password="password", host="database", port="5432")
When I run docker-compose up I get the following error:
"Is the server running on host "database" (172.21.0.2) and accepting TCP/IP connections on port 5432?"
I don't know where I have made a mistake here.
The container "database" exposes its port 5432.
Both containers are on the same network which is "web_app_default".
The socket file exists in the /var/run/postgresql directory on the "web" container.
Any ideas?
Thanks for the replies, and have a nice day.
I think what happened is that even though you set depends_on to database, that only guarantees the web container starts after the database container starts. The first time around, however, the database generally takes quite a while to initialize, so by the time your web server is up, the database is still not ready to accept connections.
Two ways to work around the problem here:
Easy way with no change in code: run docker-compose up -d (detached mode) and wait for the database to finish initializing. Then run docker-compose up -d again and your web container will now be able to connect to the database.
The second way is to add restart: always to the web container, so docker-compose keeps restarting it until it runs successfully (i.e. until the database is ready to accept connections):
version: "3"
services:
web:
depends_on:
- database
...
restart: always
...
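A third option is to make the app itself wait for the database: a small retry loop around the psycopg2 connection from the question (the retry count and delay here are arbitrary assumptions):

import time
import psycopg2

def connect_with_retry(retries=10, delay=2):
    # Keep retrying until postgres has finished initializing.
    for _ in range(retries):
        try:
            return psycopg2.connect(database="products", user="user",
                                    password="password", host="database",
                                    port="5432")
        except psycopg2.OperationalError:
            time.sleep(delay)
    raise RuntimeError("database never became ready")

conn = connect_with_retry()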
I built a docker-compose setup for a simple Python 3.6 container exposing port 5000. The container runs a Python server-side script that waits for clients to connect. Here are the files:
Dockerfile:
FROM python:3.6-alpine
WORKDIR /app
CMD ["python","serveur.py"]
Docker-compose:
version: '2'
services:
  serveur:
    build:
      context: .
      dockerfile: Serveur
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - "./app:/app"
serveur.py:
#!/usr/bin/python3
import socket
import threading

print("debut du programme")

socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = "0.0.0.0"
port = 5000
socket.bind((host, port))
socket.listen(5)

for i in range(2):
    print("ready to connect")
    a, b = socket.accept()
    print("Client connected")

socket.close()
Here is the issue:
If I run the docker compose, my client can't connect to the server; the code seems to block. Moreover, none of the prints show up in the Docker logs. If I take the socket.accept() out of the loop, one client can connect and I see all the prints in the logs. If I take the loop out of the code and just line up multiple socket.accept() calls, the code blocks as well.
I know the issue is with my Docker settings, because if I run this script outside of Docker, serveur.py works perfectly.
Thanks guys for your time.
It turns out that Python buffers stdout when it is not attached to a terminal, so the Docker logs are delayed until the Python program stops. I never saw the prints because the program, well, never stops. The solution is to put this environment variable in the docker-compose file:
version: '2'
services:
  serveur:
    build:
      context: .
      dockerfile: Serveur
    environment:
      - "PYTHONUNBUFFERED=1"
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - "./app:/app"
Now I can see the prints that confirm the connection.
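An equivalent fix, if you would rather not touch the compose file, is to flush stdout explicitly in the script (or start the interpreter with python -u):

# flush=True pushes the output to `docker logs` immediately,
# without needing PYTHONUNBUFFERED.
print("ready to connect", flush=True)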