Port not exposed but still reachable on internal docker network - python

(I'm having the inverse problem of exposing a port and it's not reachable.)
In my case I have 2 containers on the same network. One is an Alpine Python running a Python Flask app. The other is a barebones Ubuntu 18.04. The services are initialised basically like this:
docker-compose.yml:
version: '3'
services:
  pythonflask:
    build: someDockerfile # from python:3.6-alpine
    restart: unless-stopped
  ubuntucontainer:
    build: someOtherDockerfile # from ubuntu:18.04
    depends_on:
      - pythonflask
    restart: unless-stopped
The Python Flask app runs on port 5000.
Notice the lack of expose: - 5000 in the docker-compose.yml file.
The problem is that I'm able to get a correct response when cURLing http://pythonflask:5000 from inside ubuntucontainer.
Steps:
$ docker exec -it ubuntucontainer /bin/bash
...and then within the container...
root@ubuntucontainer:/# curl http://pythonflask:5000/
...correctly returns my response from the Flask app.
However from my machine running docker:
$ curl http://localhost:5000/
Doesn't return anything (as expected).
As I test different ports, each one is reachable from the other container as if it had been exposed automatically. What is doing this?

Connectivity between containers is achieved by placing the containers on the same docker network and communicating over the container ip and port (rather than the host published port). So what does expose do then?
Expose is documentation
Expose in docker is used by image creators to document the expected port that the application will listen on inside the container. With the exception of some tools and a flag in docker that uses this metadata documentation, it is not used to control access between containers or modify docker's networking. Applications may be reconfigured at runtime to listen to a different port, and you can connect to ports that have not been exposed.
For DNS lookups between containers, the network needs to be user created, not one of the default networks from docker (e.g. DNS is not enabled in the default bridge network named "bridge"). With DNS, you can lookup the container name, service name (from a compose file), and any network aliases created for that container on that network.
The other half of the equation is "publish" in docker. This creates a mapping from the host to the container to allow external access. It is implemented with a proxy process that runs on the host and forwards new connections. Because of the implementation, you can publish a port on the host even if the container is not listening on the port, though you would receive an error when trying to connect to that port in that scenario.
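A quick way to see both halves at once — a sketch; the image, names, and port here are arbitrary:
$ docker network create demo-net
$ docker run -d --name web --network demo-net python:3.6-alpine \
    python3 -m http.server 5000    # listens on 5000; nothing exposed or published
$ docker run --rm --network demo-net python:3.6-alpine \
    wget -qO- http://web:5000/     # works: same user-defined network, DNS by container name
$ curl http://localhost:5000/      # fails from the host: the port was never published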

The lack of expose: ... just means that there is no port exposed from the service group you defined in your docker-compose.yml
Within the images you use, there are still exposed ports which are reachable from within the network that is automatically created by docker-compose.
That is why you reach one container from within another. In addition, every container can be accessed via its service name from the docker-compose.yml on the internal network.
You should not be able to access Flask from your host (http://localhost:5000) unless you publish the port, as shown below.
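A sketch of how you would publish it, extending the compose file above:
pythonflask:
  build: someDockerfile # from python:3.6-alpine
  restart: unless-stopped
  ports:
    - "5000:5000" # host:container; now curl http://localhost:5000/ succeeds from the host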

Related

can not connect with docker postgress sql container with psycopg2

I am running a PostgreSQL database container with the following command:
docker run --name db -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -v pg
Of course I have changed the 'localhost' to 'db' since I am trying to connect with this container.
When I try to connect to the container database, I get the following error:
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
I can't use Docker Compose in this context (I know how to run it, though).
What else do I need to add to my docker command so that I can connect from Python?
Of course I have changed the 'localhost' to 'db' since I am trying to connect with this container.
No, you don't: your docker run command is publishing port 5432 to the host machine, as stated by the flag -p 5432:5432.
So if you are trying to connect to the database from your host machine, you should use localhost as the host.
I think you are confusing a single container with Docker networking, where multiple containers try to communicate with each other, as is the case with docker-compose.
With docker-compose, when you have multiple services running, they can communicate with each other using the container name as the host. Similarly, if you have a network between containers, they can reach each other using the container name as the host.
So if this were docker-compose, with the database running in one container and your app in another, then you would replace localhost with db.
Hope that clarifies things.
If your Python program is running on the Docker host, then you don't want to "of course" change localhost to db in your connection string, since (a) Docker doesn't change your host DNS settings (b) you're using -p to publish the service running on port 5432 to the host on port 5432.
You would only use the name db from another Docker container running in the same Docker network.
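For example, a minimal psycopg2 sketch from the host — credentials taken from the docker run command above; the database name is an assumption:
import psycopg2

conn = psycopg2.connect(
    host="localhost",   # not "db": that name only resolves inside a Docker network
    port=5432,          # the published host port from -p 5432:5432
    user="postgres",
    password="postgres",
    dbname="postgres",  # assumption: connecting to the default database
)
conn.close()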

Docker image not running on host 8050

I am trying to teach myself how to deploy a dash application on AWS.
I have created a folder 'DashboardImage' on my mac that contains a Dockerfile, README.md, requirements.txt and an app folder that contains my python dash app 'dashboard.py'.
My Dockerfile looks like this:
I go into the DashboardImage folder and run
docker build -t conjoint_dashboard .
It built successfully and if I run docker images I can see the details of the image.
When I try
docker run conjoint_dashboard
The terminal tells me Dash is running on http://0.0.0.0:8050/ but it is not connecting.
I can't understand why.
Update it according to your port, e.g. if your application listens on port 8050, then:
docker run -p 8050:8050 conjoint_dashboard
where -p = publish; the first value is the HOST port and the second is the CONTAINER port.
Also you can update your dockerfile:
FROM continuumio/miniconda3
...
EXPOSE 8050/tcp
...
The EXPOSE instruction doesn't actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
By default, EXPOSE assumes TCP. You can also specify UDP, for example EXPOSE 8050/udp.
You need to publish the port; see: https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in Docker.
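For completeness, the app inside the container must also bind to all interfaces for the published port to reach it; Dash's log line (http://0.0.0.0:8050/) suggests it already does. A minimal hypothetical dashboard.py entrypoint would look roughly like this:
import dash

app = dash.Dash(__name__)

if __name__ == "__main__":
    # bind to 0.0.0.0 (not 127.0.0.1) so traffic forwarded by -p 8050:8050
    # from the host can reach the server inside the container
    app.run_server(host="0.0.0.0", port=8050)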

"kafka.errors.NoBrokersAvailable: NoBrokersAvailable" problem [duplicate]

I setup a single node Kafka Docker container on my local machine like it is described in the Confluent documentation (steps 2-3).
In addition, I also exposed Zookeeper's port 2181 and Kafka's port 9092 so that I'd be able to connect to them from a client running on the local machine:
$ docker run -d \
-p 2181:2181 \
--net=confluent \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:4.1.0
$ docker run -d \
--net=confluent \
--name=kafka \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.
Here is my Java code:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();
The exception:
java.io.IOException: Can't resolve address: kafka:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
... 7 common frames omitted
Question: How to connect to Kafka running in Docker? My code is running from host machine, not Docker.
Note: I know that I could theoretically play around with DNS setup and /etc/hosts but it is a workaround - it shouldn't be like that.
There is also a similar question here; however, it is based on the ches/kafka image. I use a confluentinc-based image, which is not the same.
Disclaimer
tl;dr - A simple port forward from the container to the host will not work, and no hosts files should be modified. What exact IP/hostname + port do you want to connect to? Make sure that value is set as advertised.listeners on the broker. Make sure that address and the servers listed as part of bootstrap.servers are actually resolvable (ping an IP/hostname, use netcat to check ports...)
To verify the ports are mapped correctly on the host, ensure that docker ps shows the kafka container is mapped from 0.0.0.0:<host_port> -> <advertised_listener_port>/tcp. The ports must match if trying to run a client from outside the Docker network.
The below answer uses confluentinc docker images to address the question that was asked, not wurstmeister/kafka. More specifically, the latter images are not well-maintained despite being one of the most popular Kafka docker images.
The following sections try to aggregate all the details needed to use another image. For other, commonly used Kafka images, it's all the same Apache Kafka running in a container.
You're just dependent on how it is configured. And which variables make it so.
wurstmeister/kafka
Refer to their README section on listener configuration. Also read their Connectivity wiki.
bitnami/kafka
If you want a small container, try these. The images are much smaller than the Confluent ones and are much better maintained than wurstmeister. Refer to their README for listener configuration.
debezium/kafka
Docs on it are mentioned here.
Note: advertised host and port settings are deprecated. Advertised listeners covers both. Similar to the Confluent containers, Debezium can use KAFKA_ prefixed broker settings to update its properties.
Others
spotify/kafka is deprecated and outdated.
fast-data-dev or lensesio/box are great for an all-in-one solution, but are bloated if you only want Kafka.
Your own Dockerfile - Why? Is something incomplete with these others? Start with a pull request, not from scratch.
For supplemental reading, a fully-functional docker-compose, and network diagrams, see this blog by @rmoff
Answer
The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.
You could fix the problem of connecting to kafka:9092 by running your Kafka client code within its own container as that uses the Docker network bridge, but otherwise you'll need to add some more environment variables for exposing the container externally, while still having it work within the Docker network.
First add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that will map the listener protocol to a Kafka protocol
Key: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
Value: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
Then setup two advertised listeners on different ports. (kafka here refers to the docker container name; it might also be named broker, so double check your service + hostnames).
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
Notice the protocols here match the left-side values of the protocol mapping setting above
When running the container, add -p 29092:29092 for the host port mapping, and advertised PLAINTEXT_HOST listener.
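Putting those pieces together with the docker run command from the question — a sketch; container and network names as above:
$ docker run -d \
  --net=confluent \
  --name=kafka \
  -p 29092:29092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.1.0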
tl;dr
(with the above settings)
If something still doesn't work, KAFKA_LISTENERS can be set to include <PROTOCOL>://0.0.0.0:<PORT> where both options match the advertised setting and Docker-forwarded port
Client on same machine, not in a container
Advertising localhost and the associated port will let you connect outside of the container, as you'd expect.
In other words, when running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers and localhost:2181 for Zookeeper (requires Docker port forwarding)
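For example, with the kafka-python client, a minimal sketch from the host, assuming the PLAINTEXT_HOST listener above is advertised on localhost:29092:
from kafka import KafkaProducer

# bootstrap against the host-facing listener, not kafka:9092
producer = KafkaProducer(bootstrap_servers="localhost:29092")
producer.send("foo", b"Test 1")
producer.flush()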
Client on another machine
If trying to connect from an external server, you'll need to advertise the external hostname/ip (e.g. 192.168.x.y) of the host as well as/in place of localhost.
Simply advertising localhost with a port forward will not work because the Kafka protocol will still advertise the listeners you've configured.
This setup requires Docker port forwarding and router port forwarding (and firewall / security group changes) if not in the same local network, for example, your container is running in the cloud and you want to interact with it from your local machine.
Client (or another broker) in a container, on the same host
This is the least error-prone configuration; you can use DNS service names directly.
When running an app in the Docker network, use kafka:9092 (see advertised PLAINTEXT listener config above) for bootstrap servers and zookeeper:2181 for Zookeeper, just like any other Docker service communication (doesn't require any port forwarding)
If you use separate docker run commands, or Compose files, you need to define a shared network manually
See the example Compose file for the full Confluent stack or more minimal one for a single broker.
If using multiple brokers, then they need to use unique hostnames + advertised listeners. See example
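As a quick in-network check, you can run a client container on the same network and bootstrap against the internal listener — a sketch using the same image and network as above:
$ docker run --rm --net=confluent confluentinc/cp-kafka:4.1.0 \
    kafka-console-producer --broker-list kafka:9092 --topic foo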
Related question
Connect to Kafka on host from Docker (ksqlDB)
Appendix
For anyone interested in Kubernetes deployments:
Accessing Kafka
Operators (recommended): https://operatorhub.io/?keyword=Kafka
Helm Artifact Hub: https://artifacthub.io/packages/search?ts_query_web=kafka&sort=stars&page=1
When you first connect to a Kafka node, it will give you back all the Kafka nodes and the URLs to connect to. Then your application will try to connect to every broker directly.
The issue is always: what URL will Kafka give you? That is why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.
Now for your use-case, there are several small things to think about:
Let's say you set plaintext://kafka:9092
This is OK if you have an application in your docker-compose that uses Kafka. This application will get from Kafka a URL with kafka, which is resolvable through the docker network.
If you try to connect from your main system, or from another container which is not in the same docker network, this will fail, as the kafka name cannot be resolved.
==> To fix this, you need a specific DNS server, like a service-discovery one, but that is big trouble for small stuff. Or you manually map the kafka name to the container IP in each /etc/hosts.
If you set plaintext://localhost:9092
This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka).
This will fail if you test from an application in a container (same docker network or not), as localhost is the container itself, not the Kafka one.
==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between both containers (same IP).
Last option: set an IP in the name: plaintext://x.y.z.a:9092 (the Kafka advertised URL cannot be 0.0.0.0, as stated in the docs: https://kafka.apache.org/documentation/#brokerconfigs_advertised.listeners)
This will be OK for everybody... BUT how do you get the x.y.z.a address?
The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to pick a valid IP in the confluent subnet.
First, start Zookeeper:
docker container run --name zookeeper -p 2181:2181 zookeeper
Then start Kafka:
docker container run --name kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka
In the Kafka consumer and producer config:
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
I run my project with these settings. Good luck.
The simplest way to solve this is to add a custom hostname to your broker using the -h option:
docker run -d \
--net=confluent \
--name=kafka \
-h broker-1 \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
and edit your /etc/hosts
127.0.0.1 broker-1
and use:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
This allows me to access localhost:9092 from Kafka applications on my M1 Mac:
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
plus port forwarding:
ports:
  - "9092:9092"
Finally, again for my setup, I have to set the listeners key this way:
Key: KAFKA_LISTENERS
Value: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
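Putting that together, a minimal docker-compose sketch — the service names kafka and zookeeper and the protocol map are assumptions; internal clients use kafka:29092, host clients use localhost:9092:
services:
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1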

Accessing Docker Container on Centos Server

I've managed to deploy a Django app inside a docker container on my personal Mac using localhost with Apache. For this, I use docker-compose with the build and up commands. I'm trying to run the same Django app on a CentOS server using a docker image generated on my local machine. Apache is also running on the server on port 90.
docker run -it -d --hostname xxx.xxx.xxx -p 9090:9090 --name test idOfImage
How can I access this container with Apache using the hostname and port number in the URL? Any help would be greatly appreciated. Thanks.
From other containers, the best way to access this container is to attach both to the same network and use the container's --name as a DNS name plus the internal port (the second port from the -p option; the -p isn't strictly required for this case). From outside a container, or from other hosts, use the host's IP address or DNS name and the published port (the first port from the -p option).
The docker run --hostname option isn't especially useful; the only time you'd want to specify it is if you have some magic licensed software that only runs if it has a particular hostname.
Avoid localhost in a Docker context, except for the very specific case where you know you're running a process on the host system, outside a container, and you're trying to access a container's published port or some other service running on the host. Don't use "localhost" as a generic term; it has a very specific, context-dependent meaning (every process believes it's running "on localhost").
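For example — hypothetical names; test is the container from the docker run above, and your server's DNS name stands in for the host:
# from another container on a shared user-defined network:
$ docker network create appnet
$ docker network connect appnet test
$ curl http://test:9090/
# from the host itself, or from another machine, via the published port:
$ curl http://your-centos-server:9090/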

Connect my docker to external docker

I have a standalone Redis container, and now I want to connect to it from inside another docker container. But I can't seem to connect successfully. Below is my list of docker containers.
As you can see, my container flexapi_api_1 tries to connect to localredis, but I always get a connection timeout. When I run docker inspect localredis, I get the result shown below.
I'm not sure whether I need to use the IP 172.17.0.2 or 0.0.0.0 as the host IP for Redis. Is there a way to connect my container to another, external container?
You can connect from one container to another using the container name as long as the containers are connected to the same network.
Create a network and connect the container to it:
docker network create mynet
docker network connect mynet localredis
docker network connect mynet flexapi_api_1
Now flexapi_api_1 should be able to connect to Redis via the hostname localredis.
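For example, from inside flexapi_api_1, a minimal redis-py sketch — assumes Redis is listening on its default port 6379:
import redis

# the container name resolves via Docker's embedded DNS on the shared network
r = redis.Redis(host="localredis", port=6379)
r.set("hello", "world")
print(r.get("hello"))  # b'world'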
