I'm trying to learn how to use Docker and am having some trouble. I'm using a docker-compose.yaml file to run a Python script that connects to a MySQL container, and I'm trying to use ddtrace to send traces to Datadog. I'm using the following image from this GitHub page from Datadog:
ddagent:
  image: datadog/docker-dd-agent
  environment:
    - DD_BIND_HOST=0.0.0.0
    - DD_API_KEY=invalid_key_but_this_is_fine
  ports:
    - "127.0.0.1:8126:8126"
And my docker-compose.yaml looks like
version: "3"
services:
ddtrace-test:
build: .
volumes:
- ".:/app"
links:
- ddagent
ddagent:
image: datadog/docker-dd-agent
environment:
- DD_BIND_HOST=0.0.0.0
- DD_API_KEY=<my key>
ports:
- "127.0.0.1:8126:8126"
So then I'm running the command docker-compose run --rm ddtrace-test python test.py, where test.py looks like
from ddtrace import tracer

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()
And when I run the command, I'm returned with
Starting service---reprocess_ddagent_1 ... done
foo
cannot send spans to localhost:8126: [Errno 99] Cannot assign requested address
I'm not sure what this error means. When I use my key and run the script locally instead of inside a Docker container, it works fine. What could be going wrong here?
Containers are about isolation, so inside a container "localhost" means the container itself; ddtrace-test therefore cannot reach ddagent at localhost. You have two ways to fix that:
Put network_mode: host in ddtrace-test so it binds to the host's network interface, skipping network isolation.
Change ddtrace-test to use the "ddagent" host instead of localhost, since in docker-compose services can reach each other by their service names (see the sketch below).
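For the second option, a minimal sketch (assuming a ddtrace version that reads the standard DD_AGENT_HOST and DD_TRACE_AGENT_PORT environment variables) is to point the tracer at the agent service from the compose file:
services:
  ddtrace-test:
    build: .
    volumes:
      - ".:/app"
    environment:
      # "ddagent" resolves to the agent container on the Compose network
      - DD_AGENT_HOST=ddagent
      - DD_TRACE_AGENT_PORT=8126
    links:
      - ddagent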
I am new to the Docker world and I have some issues regarding how to connect two Docker services together.
I am using https://memgraph.com/ as my database and when I am running it locally I am running it like this
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
I wrote my program which is going to connect to the database using mgclient and when I am running it locally everything is working fine.
Now I am trying to put it inside a Docker container and run it using docker-compose.yaml.
My docker-compose.yaml is:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
ports:
- "7687:7687"
- "3000:3000"
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
and when I am trying to run it with this command:
docker-compose up
I am getting an error regarding the connection to the server. Could anyone tell me what I am missing regarding the docker-compose.yaml?
How does your my_app connect to the database?
Are you using a connection string of the form localhost:7687 (or perhaps localhost:3000)? This works locally because you are publishing (--publish=7687:7687 --publish=3000:3000) the container's ports 7687 and 3000 to the same ports on the host.
NOTE You can remap ports when you docker run. For example, you could --publish=9999:7687 and then you would need to use port 9999 on your localhost to access the container's port 7687.
When you combine the 2 containers using Docker Compose, each container is given a name that matches the service name. In this case, your Memgraph database is called memgraph (matching the service name).
Using Docker Compose, localhost takes on a different meaning. From my_app, localhost is my_app. So, using localhost under Docker Compose, my_app would try connecting to itself, not to the database.
Under Docker Compose, my_app (the name of your app's service) needs to refer to Memgraph by its service name (memgraph). The ports are unchanged, still 7687 and 3000 (whichever is correct).
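For example, since the question mentions mgclient, the connection from inside my_app could look roughly like this (a sketch, not your exact code):
import mgclient

# "memgraph" is the Compose service name and resolves to the database container
conn = mgclient.connect(host="memgraph", port=7687)
cursor = conn.cursor()
cursor.execute("RETURN 1;")
print(cursor.fetchone())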
NOTE The ports statement in your Docker Compose config is possibly redundant, unless you want to be able to access the database from your (local) host, which you may want for debugging. From a best-practice standpoint, once my_app is able to access the database correctly, you don't need to publish the database's ports to the host.
Update
It is good practice to externalize configuration from your app, so that you can configure it dynamically. An easy way to do this is to use environment variables.
For example:
main.py:
import os
import mgclient  # the question connects with mgclient

conn = mgclient.connect(
    host=os.getenv("HOST"),
    port=int(os.getenv("PORT")),  # environment variables are strings, so cast the port
)
Then, when you run under e.g. Docker, you need to set these values:
docker run ... --env=HOST="localhost" --env=PORT="7687" ...
And under Docker Compose, you can:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
environment:
HOST: memgraph
PORT: 7687
I'm learning Kafka at the moment, and struggling to get my docker-compose configuration set up properly. What I'm trying to do is run a broker based on the wurstmeister/kafka image, and then another container that runs a simple Python script with kafka-python.
I've been following this tutorial mostly, but I suspect my handling of the ports is a bit of a mess. Here's my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "client-pusher:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
  app-python:
    build: .
    ports:
      - "9093:9093"
    expose:
      - "9093"
      - "9092"
    depends_on:
      - "kafka"
To tell the honest truth, I don't have a clue what I'm doing half the time when it comes to ports in Docker.
Using this Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
# set the working directory in the container
WORKDIR /code
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY . .
# command to run on container start
CMD ["python","/code/consumer.py"]
I can make this script spit out some logs:
# consumer.py
import json
from datetime import date
from typing import Optional
import time
import logging

from kafka import KafkaConsumer
from pydantic import BaseModel


class Client(BaseModel):
    first_name: str
    email: str
    group_id: Optional[int] = None
    date: date


# consumer = KafkaConsumer(
#     'client-pusher',
#     bootstrap_servers=['kafka:9093'],
#     auto_offset_reset='earliest',
#     enable_auto_commit=True,
#     group_id='my-group-id',
#     value_deserializer=lambda x: json.loads(x.decode('utf-8'))
# )

count = 0

while True:
    # msg_pack = consumer.poll(timeout_ms=500)
    logging.warning(f"Hi there {count}")
    time.sleep(2)
    count += 1

    # for tp, messages in msg_pack.items():
    #     for message in messages:
    #         client = Client(**message.value)
    #         print(client)
but when the commented code is uncommented, the connection fails. The
bootstrap_servers=['kafka:9093'],
line results in
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
I feel like there's some magic combination of exposing or configuring the ports properly in the docker-compose file and using them properly in the python script, and/or configuring the service names properly. But I'm lost. Can anyone help?
TL;DR: remove all of the expose entries and change app-python's ports to something that isn't already referenced. In your code, instead of kafka:9093, use localhost:9092.
Two things:
i. For app-python, you're exposing your machine's port 9093 (localhost:9093) to the container's port 9093 (app-python:9093). Both containers can't publish the same machine port, so I recommend keeping your Kafka container's port config a comfortable distance from your app's port (maybe 9092/9093 for Kafka and 8080 for your app).
ii. Docker Compose puts all the containers listed in the file on the same network, so there are two ways to go about it. If you want to run Kafka in Docker and your Python code in your IDE/terminal, hardcode localhost:9092 in your Python script, i.e. your code connects to Kafka through its external port mapping (OUTSIDE).
If you run it like you're running it now, with both containers in the same Docker network, I suggest passing an environment variable (or a property you can pass in and reference in the code) to app-python with the bootstrap server <Container name>:<INSIDE PORT>, i.e. kafka:9093.
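In Python that could look roughly like this (a sketch; KAFKA_BOOTSTRAP_SERVERS is an assumed variable name, set to kafka:9093 for the in-Compose case and defaulting to localhost:9092 otherwise):
import os
from kafka import KafkaConsumer

# Read the broker address from the environment so the same code works
# inside Compose (kafka:9093) and on the host (localhost:9092)
bootstrap = os.getenv("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")
consumer = KafkaConsumer("client-pusher", bootstrap_servers=[bootstrap])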
Here's an example that I have with Java, where I could run the app (rest) inside or outside docker-compose. If outside, I referenced localhost:9092, but if inside, I referenced it like this:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    container_name: kafka_broker_1
    image: wurstmeister/kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOSTNAME: kafka
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  rest:
    image: rest:latest
    container_name: rest
    build:
      context: rest
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - SPRING_KAFKA_BOOTSTRAP-SERVERS=kafka:29092
      - SERVER_PORT=8080
    ports:
      - "8080:8080"
    depends_on:
      - kafka
AFAIK expose is only informative (see the Docker Compose documentation). It's all about the ports you define with ports.
Try to connect to the port you defined in ports (for inside and outside), i.e. in your case
bootstrap_servers=['kafka:9092']
And remove all references to ports that are only defined under expose, e.g. in KAFKA_LISTENERS.
I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on a MySQL instance running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on or network aliases they can also reach each other by host name. In your case the host name is the service name, but this can be overridden (see the docs).
Your script in the my_common_package container/service should therefore connect to host mysql on port 3306 according to your setup (not localhost:3306).
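Since the traceback shows pymysql, the connection in testsql.py would then look roughly like this (a sketch using the credentials from your compose file):
import pymysql

# "mysql" is the Compose service name, reachable from the my_common_package container
connection = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)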
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to localhost on port 3306.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
.. then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be fairly unpredictable, so the only reliable way for container-to-container communication is using hostnames.
I'm trying to get my dockerized python-script to get data from an also dockerized mariadb.
I know this should be possible with networks or links. However, since links are deprecated (according to the Docker documentation), I'd rather not use them.
docker-compose:
version: "3.7"
services:
[...]
mariadb:
build: ./db
container_name: maria_db
expose:
- 3306
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user
MYSQL_PASSWORD: user
restart: always
networks:
- logrun_to_mariadb
[...]
logrun_engine:
build: ./logrun_engine
container_name: logrun_engine
restart: always
networks:
- logrun_to_mariadb
networks:
logrun_to_mariadb:
external: false
name: logrun_to_mariadb
The logrun_engine container executes a python-script on startup:
import mysql.connector as mariadb


class DBConnector:
    def __init__(self, dbname):
        self.mariadb_connection = mariadb.connect(host='mariadb', port='3306', user='root', password='root', database=dbname)
        self.cursor = self.mariadb_connection.cursor()

    def get_Usecases(self):
        self.cursor.execute("SELECT * FROM Test")
        tests = []
        for test in self.cursor:
            print(test)


print("Logrun-Engine running...")
test = DBConnector('test_db')
test.get_Usecases()
Whenever I run docker-compose up -d, my logrun_engine logs are full of the error message:
_mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on 'mariadb' (111)
When I run the python script locally and connect to a local mariadb, it works with no problems, so the script should be correct.
Most answers I found concerning this error message say that people used localhost or 127.0.0.1 instead of the Docker container name, which I am already using.
I tried with bridged networks, host networks, links etc. but apparently I haven't found the correct thing yet.
Any idea how to connect these two containers?
OK, so I was just too impatient and didn't let MySQL start up properly before querying the database; thanks @DanielFarrel for pointing that out.
When I added a 10sec delay in the python script before querying the database, it magically worked...
A sleep may be one solution. However, it can be problematic if the database comes up slowly.
As an alternative you can use an agent that makes sure the db is up before connecting to it, similar to the solution here.
Run:
docker-compose up -d agent
After the agent is up you can be sure the db is up, and your app may run:
docker-compose up -d logrun_engine
The solution does use links; however, it can easily be modified to use Docker networks.
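Another common variant, sketched here in Python, is to retry the connection inside the script itself instead of sleeping for a fixed time (this reuses the mysql.connector setup from the question; the attempt count and delay are arbitrary):
import time
import mysql.connector as mariadb

def connect_with_retry(dbname, attempts=30, delay=2):
    # Keep trying until MariaDB is ready to accept connections
    for attempt in range(1, attempts + 1):
        try:
            return mariadb.connect(host='mariadb', port=3306, user='root',
                                   password='root', database=dbname)
        except mariadb.Error:
            print(f"Database not ready (attempt {attempt}), retrying in {delay}s...")
            time.sleep(delay)
    raise RuntimeError("Could not connect to the database")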
I have been working with Docker previously, using services to run a website made with Django.
Now I would like to know how I should create a Docker setup to just run Python scripts, without a web server or any website-related services.
An example of the kind of Docker setup I am used to working with is:
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: nz01
    ports:
      - "8001:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: dz01
    depends_on:
      - db
    volumes:
      - ./src:/src
    expose:
      - "8000"
  db:
    image: postgres:latest
    container_name: pz01
    ports:
      - "5433:5432"
    volumes:
      - postgres_database:/var/lib/postgresql/data:Z
volumes:
  postgres_database:
    external: true
How should be the docker-compose.yml file?
Simply remove everything from your Dockerfile that has nothing to do with your script and start with something simple, like
FROM python:3
ADD my_script.py /
CMD [ "python", "./my_script.py" ]
You do not need Docker compose for containerizing a single python script.
The example is taken from this simple tutorial about containerizing Python applications: https://runnable.com/docker/python/dockerize-your-python-application
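That said, if you still want to drive the script with docker-compose (as the question asks), a minimal file could look like this (a sketch; the service name script is arbitrary and the build context is assumed to contain the Dockerfile above):
version: '2'
services:
  script:
    build: .
    command: ["python", "./my_script.py"]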
You can easily overwrite the command specified in the Dockerfile (via CMD) when starting a container from the image. Just append the desired command to your docker run command, e.g:
docker run IMAGE /path/to/script.py
You can easily run Python interactively without even having to build a container:
docker run -it python
If you want to have access to some code you have written within the container, simply change that to:
docker run -it -v /path/to/code:/app python
Making a Dockerfile is unnecessary for this simple application.
Most Linux distributions come with Python preinstalled. Using Docker here adds significant complexity and I'd pretty strongly advise against Docker just to run a simple script. You can use a virtual environment to isolate a particular Python package's dependencies from the rest of the system.
(There is a pretty consistent stream of SO questions around getting filesystem permissions and user IDs right for scripts that principally want to interact with the host system. Also remember that running docker anything implies root-equivalent permissions. If you don't want Docker's filesystem and user namespace isolation, IMHO it's easier to just not use Docker where it doesn't make sense.)