Docker Container: UDP Communication with other hosts - python

I am writing a Python application that continuously sends UDP messages to a predefined network of other hosts with fixed IPs. I wrote the Python application and dockerized it. The application works fine inside Docker; no problems there.
Unfortunately, I am failing to send the UDP messages from my Docker container through the host so that they reach the other hosts in the network. The same goes for receiving messages. Right now I don't know how to set up my container so that it receives a UDP message from a host with a fixed IP address in the network.
I tried setting up my Docker network with --net host, and I sent all the UDP messages from my container to my host via localhost. This worked fine, too. What I am missing is the link that lets me send the messages on to the "outside world". I tried to draw a picture of my problem.
My question: How do I have to set up the network communication for my Docker container/host so that it can receive UDP messages from other hosts in the network?
Thanks

After experimenting a lot, I figured out that I just need to run the Docker container with the host network configuration. The UDP socket in my container is then bound to the IP address of my host and is therefore attached directly to the host's network. Everyone who is struggling with the same issue, just run:
docker run --network=host <YOURCONTAINER>
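For reference, here is a minimal sketch of what the sockets inside the container can look like with --network=host (the port 5005 and the peer address 192.168.1.20 are placeholders, not values from the question):

import socket

PEER_ADDR = ("192.168.1.20", 5005)   # placeholder: another host on the LAN
LISTEN_ADDR = ("0.0.0.0", 5005)      # bind on all interfaces of the host

# Sending: with --network=host, this datagram leaves the machine like one
# sent by any ordinary host process.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", PEER_ADDR)

# Receiving: datagrams addressed to the host's IP and port 5005 arrive here.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(LISTEN_ADDR)
data, addr = recv_sock.recvfrom(4096)
print(f"received {data!r} from {addr}")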

Build your own bridge
1. Create and configure the new bridge.
$ sudo ip link add name bridge0 type bridge
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up
Confirm the new bridge’s settings.
$ ip addr show bridge0
4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.1/24 scope global bridge0
valid_lft forever preferred_lft forever
2. Configure Docker to use the new bridge by setting the option in the daemon.json file, which is located in /etc/docker/ on Linux or C:\ProgramData\docker\config\ on Windows Server. On Docker for Mac or Docker for Windows, click the Docker icon, choose Preferences, and go to Daemon.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
"bridge": "bridge0"
}
Restart Docker for the changes to take effect.
3. Confirm that the new outgoing NAT masquerade is set up.
$ sudo iptables -t nat -L -n
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.5.0/24 0.0.0.0/0
4. Remove the now-unused docker0 bridge.
$ sudo ip link set dev docker0 down
$ sudo ip link del name docker0
$ sudo iptables -t nat -F POSTROUTING
5. Create a new container, and verify that it is in the new IP address range.
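If you want to script that check, here is a sketch using docker inspect from Python (the container name my-container is a placeholder; the Go template for the bridge IP is standard docker inspect syntax):

import subprocess

# Ask the Docker daemon for the container's IP on the default bridge.
result = subprocess.run(
    ["docker", "inspect", "-f", "{{.NetworkSettings.IPAddress}}", "my-container"],
    capture_output=True, text=True, check=True,
)
ip = result.stdout.strip()
# With bridge0 configured as above, the address should be in 192.168.5.0/24.
print("container IP:", ip)
assert ip.startswith("192.168.5."), f"unexpected address: {ip}"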

Related

Cannot access local django webserver

I cannot figure out why I can't access my local Django webserver from an outside device. I suspect it has to do with port forwarding, but none of my attempts to solve it seem to work.
I started by following the suggestions posted in How to access the local Django webserver from outside world, launching the server with python manage.py runserver 0.0.0.0:8000 and entering <my-ip-address>:<port> in my external device's browser. This did not work, so I tried to explicitly make sure port forwarding was enabled by adding the following lines to iptables:
iptables -A INPUT -p tcp --dport 8000 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 8000 -m conntrack --ctstate ESTABLISHED -j ACCEPT
netfilter-persistent save
Still, I'm not able to access the local webserver. No error messages show up; the browser just tells me that the address took too long to respond. I've tried the above with port 80 as well, using sudo when needed, to no avail. In addition, I've tried setting ALLOWED_HOSTS = ['*'] as suggested by multiple users.
I've tried to investigate whether the application is really running on the indicated port using lsof -i, which shows several python entries, but I'm not sure what else I'm supposed to look for to see whether things are running correctly. Finally, I've disabled the firewall on my external device, which didn't help either.
Can anyone point me to a direction to find out what's wrong?
EDIT: to clarify, I can access the server perfectly fine from the same device where the local server is running.
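One way to narrow this down is to test raw TCP reachability from the external device, independent of Django and the browser; a sketch (the server address is a placeholder):

import socket

SERVER = ("192.168.1.10", 8000)   # placeholder: the machine running runserver

try:
    # Success means the port is reachable and the problem is at the
    # HTTP/Django layer; a timeout means a firewall or routing issue is
    # dropping the connection before Django ever sees it.
    with socket.create_connection(SERVER, timeout=5):
        print("TCP connection succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")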

Docker image not running on host 8050

I am trying to teach myself how to deploy a dash application on AWS.
I have created a folder 'DashboardImage' on my Mac that contains a Dockerfile, README.md, requirements.txt and an app folder that contains my Python Dash app 'dashboard.py'.
My Dockerfile looks like this:
I go into the DashboardImage folder and run
docker build -t conjoint_dashboard .
It built successfully and if I run docker images I can see the details of the image.
When I try
docker run conjoint_dashboard
The terminal tells me Dash is running on http://0.0.0.0:8050/ but it is not connecting.
I can't understand why.
Update your docker run command according to your port; e.g. if your application exposes port 8050:
docker run -p 8050:8050 conjoint_dashboard
where -p means publish; the first port is the HOST port, and the second is the CONTAINER port.
You can also update your Dockerfile:
FROM continuumio/miniconda3
...
EXPOSE 8050/tcp
...
The EXPOSE instruction doesn't actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
By default, EXPOSE assumes TCP. You can also specify UDP (e.g. EXPOSE 8050/udp).
You need to expose the port, see: https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in Docker.
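Once the port is published, a quick sanity check from the host might look like this (assuming the Dash app from the question is listening on 8050; this sketch is not part of the original answer):

import urllib.request

# With `docker run -p 8050:8050 conjoint_dashboard`, the app should now
# answer on the host's own port 8050.
with urllib.request.urlopen("http://localhost:8050/", timeout=5) as resp:
    print(resp.status)   # 200 means the published port reaches the container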

"kafka.errors.NoBrokersAvailable: NoBrokersAvailable" problem [duplicate]

I set up a single-node Kafka Docker container on my local machine as described in the Confluent documentation (steps 2-3).
In addition, I exposed Zookeeper's port 2181 and Kafka's port 9092 so that I'd be able to connect to them from a client running on the local machine:
$ docker run -d \
-p 2181:2181 \
--net=confluent \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:4.1.0
$ docker run -d \
--net=confluent \
--name=kafka \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.
Here is my Java code:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();
The exception:
java.io.IOException: Can't resolve address: kafka:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
... 7 common frames omitted
Question: How to connect to Kafka running in Docker? My code is running from host machine, not Docker.
Note: I know that I could theoretically play around with DNS setup and /etc/hosts but it is a workaround - it shouldn't be like that.
There is also similar question here, however it is based on ches/kafka image. I use confluentinc based image which is not the same.
Disclaimer
tl;dr - A simple port forward from the container to the host will not work, and no hosts files should be modified. What exact IP/hostname + port do you want to connect to? Make sure that value is set as advertised.listeners on the broker. Make sure that address and the servers listed as part of bootstrap.servers are actually resolvable (ping an IP/hostname, use netcat to check ports...)
To verify the ports are mapped correctly on the host, ensure that docker ps shows the kafka container is mapped from 0.0.0.0:<host_port> -> <advertised_listener_port>/tcp. The ports must match if trying to run a client from outside the Docker network.
The below answer uses confluentinc Docker images to address the question that was asked, not wurstmeister/kafka. More specifically, the latter images are not well maintained despite being one of the most popular Kafka Docker images.
The following sections try to aggregate all the details needed to use another image. For other commonly used Kafka images, it's all the same Apache Kafka running in a container; you're just dependent on how it is configured, and on which variables make it so.
wurstmeister/kafka
Refer to their README section on listener configuration, and also read their Connectivity wiki.
bitnami/kafka
If you want a small container, try these. The images are much smaller than the Confluent ones and are much better maintained than wurstmeister. Refer to their README for listener configuration.
debezium/kafka
Docs on it are mentioned here.
Note: advertised host and port settings are deprecated. Advertised listeners covers both. Similar to the Confluent containers, Debezium can use KAFKA_ prefixed broker settings to update its properties.
Others
spotify/kafka is deprecated and outdated.
fast-data-dev or lensesio/box are great for an all-in-one solution, but are bloated if you only want Kafka.
Your own Dockerfile - why? Is something incomplete with these others? Start with a pull request rather than starting from scratch.
For supplemental reading, a fully-functional docker-compose, and network diagrams, see this blog by @rmoff
Answer
The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.
You could fix the problem of connecting to kafka:9092 by running your Kafka client code within its own container as that uses the Docker network bridge, but otherwise you'll need to add some more environment variables for exposing the container externally, while still having it work within the Docker network.
First add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that will map the listener protocol to a Kafka protocol
Key: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
Value: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
Then setup two advertised listeners on different ports. (kafka here refers to the docker container name; it might also be named broker, so double check your service + hostnames).
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
Notice the protocols here match the left-side values of the protocol mapping setting above
When running the container, add -p 29092:29092 for the host port mapping, and advertised PLAINTEXT_HOST listener.
tl;dr
(with the above settings)
If something still doesn't work, KAFKA_LISTENERS can be set to include <PROTOCOL>://0.0.0.0:<PORT> where both options match the advertised setting and Docker-forwarded port
Client on same machine, not in a container
Advertising localhost and the associated port will let you connect outside of the container, as you'd expect.
In other words, when running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers and localhost:2181 for Zookeeper (requires Docker port forwarding)
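As a sketch, here is the question's producer reworked against the host listener, using the kafka-python client rather than the question's Java client (the topic foo is from the question; kafka-python is my assumption here, any client works the same way):

from kafka import KafkaProducer

# Bootstrap against the advertised PLAINTEXT_HOST listener that Docker
# publishes to the host (-p 29092:29092).
producer = KafkaProducer(bootstrap_servers="localhost:29092")
producer.send("foo", key=b"1", value=b"Test 1")
producer.flush()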
Client on another machine
If trying to connect from an external server, you'll need to advertise the external hostname/ip (e.g. 192.168.x.y) of the host as well as/in place of localhost.
Simply advertising localhost with a port forward will not work, because the Kafka protocol will still advertise the listeners you've configured.
This setup requires Docker port forwarding and router port forwarding (and firewall / security group changes) if not in the same local network, for example, your container is running in the cloud and you want to interact with it from your local machine.
Client (or another broker) in a container, on the same host
This is the least error-prone configuration; you can use DNS service names directly.
When running an app in the Docker network, use kafka:9092 (see advertised PLAINTEXT listener config above) for bootstrap servers and zookeeper:2181 for Zookeeper, just like any other Docker service communication (doesn't require any port forwarding)
If you use separate docker run commands, or Compose files, you need to define a shared network manually
See the example Compose file for the full Confluent stack or more minimal one for a single broker.
If using multiple brokers, then they need to use unique hostnames + advertised listeners. See example
Related question
Connect to Kafka on host from Docker (ksqlDB)
Appendix
For anyone interested in Kubernetes deployments:
Accessing Kafka
Operators (recommended): https://operatorhub.io/?keyword=Kafka
Helm Artifact Hub: https://artifacthub.io/packages/search?ts_query_web=kafka&sort=stars&page=1
When you first connect to a Kafka node, it will give you back all the Kafka nodes and the URLs to connect to. Then your application will try to connect to every broker directly.
The issue is always what Kafka will give you as the URL. That's why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.
Now for your use case, there are multiple small things to think about:
Let's say you set plaintext://kafka:9092
This is OK if you have an application in your Docker Compose stack that uses Kafka. This application will get from Kafka the URL with kafka, which is resolvable through the Docker network.
If you try to connect from your main system, or from another container which is not in the same Docker network, this will fail, as the kafka name cannot be resolved.
==> To fix this, you need a dedicated DNS server, like a service-discovery one, but that is big trouble for small stuff. Or you manually map the kafka name to the container IP in each /etc/hosts.
If you set plaintext://localhost:9092
This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka).
This will fail if you test from an application in a container (same Docker network or not), since localhost is the container itself, not the Kafka one.
==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between both containers (same IP).
Last option: set an IP in the name: plaintext://x.y.z.a:9092 (the Kafka advertised URL cannot be 0.0.0.0, as stated in the docs: https://kafka.apache.org/documentation/#brokerconfigs_advertised.listeners)
This will be OK for everybody... BUT how can you get the x.y.z.a address?
The only way is to hardcode this IP when you launch the container: docker run ... --net confluent --ip 10.x.y.z ... Note that you need to adapt the IP to one valid IP in the confluent subnet.
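The resolution failure behind the question's UnresolvedAddressException can be reproduced without Kafka at all; a sketch:

import socket

# From the host, the container name "kafka" is not a resolvable DNS name,
# which is exactly what the client's stack trace reflects.
try:
    print(socket.gethostbyname("kafka"))
except socket.gaierror as exc:
    print(f"cannot resolve 'kafka': {exc}")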
First, start Zookeeper:
docker container run --name zookeeper -p 2181:2181 zookeeper
Then start Kafka:
docker container run --name kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka
And in the Kafka consumer and producer config:
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
I run my project with these settings. Good luck.
The simplest way to solve this is to add a custom hostname to your broker using the -h option:
docker run -d \
--net=confluent \
--name=kafka \
-h broker-1 \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://broker-1:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
and edit your /etc/hosts
127.0.0.1 broker-1
and use:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
This allows me to access localhost:9092 in Kafka applications on my M1 Mac
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
plus port forwarding:
ports:
  - "9092:9092"
Finally, again for my setup, I have to set the listeners key this way:
Key: KAFKA_LISTENERS
Value: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092

Accessing Docker Container on Centos Server

I've managed to deploy a Django app inside a docker container on my personal Mac using localhost with Apache. For this, I use docker-compose with the build and up commands. I'm trying to run the same Django app on a CentOS server using a docker image generated on my local machine. Apache is also running on the server on port 90.
docker run -it -d --hostname xxx.xxx.xxx -p 9090:9090 --name test idOfImage
How can I access this container with Apache using the hostname and port number in the URL? Any help would be greatly appreciated. Thanks.
From other containers the best way to access this container is to attach both to the same network and use the container's --name as a DNS name and the internal port (the second port from the -p option, which isn't strictly required for this case); from outside a container or from other hosts use the host's IP address or DNS name and the published port (the first port from the -p option).
The docker run --hostname option isn't especially useful; the only time you'd want to specify it is if you have some magic licensed software that only runs if it has a particular hostname.
Avoid localhost in a Docker context, except for the very specific case where you know you're running a process on the host system outside a container and you're trying to access a container's published port or some other service running on the host. Don't use "localhost" as a generic term, it has a very specific context-dependent meaning (every process believes it's running "on localhost").
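To make the two cases concrete, a sketch (the container name test and published port 9090 are from the question; the second address is a placeholder for the server's IP):

import urllib.request

# First URL: from another container on the same Docker network
# (container name + internal port). Second URL: from the host or
# another machine (server address + published port).
urls = ["http://test:9090/", "http://203.0.113.10:9090/"]

for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(url, "->", resp.status)
    except OSError as exc:
        print(url, "failed:", exc)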

web2py - allow external access - how?

I want to start a web2py server so that it can be accessed externally to the hosting server.
I've read this http://web2py.com/books/default/chapter/29/03
By default, web2py runs its web server on 127.0.0.1:8000 (port 8000 on
localhost), but you can run it on any available IP address and port.
You can query the IP address of your network interface by opening a
command line and typing ipconfig on Windows or ifconfig on OS X and
Linux. From now on we assume web2py is running on localhost
(127.0.0.1:8000). Use 0.0.0.0:80 to run web2py publicly on any of your
network interfaces.
but I can't find how to "Use 0.0.0.0:80". There doesn't seem to be a command-line argument that would do that.
Thanks
EDIT: I should say the server in question does not have a GUI - I'm aware there are some GUI-based admin facilities for web2py but that's out of the question here.
EDIT2: Just in case this is not clear (and on the off chance it makes any difference - which I doubt), I'm running the server like this:
sudo python web2py.py
not via wsgi/apache or the like.
python web2py.py --ip 0.0.0.0
just works fine but the log message will point you to an invalid address:
please visit:
http://0.0.0.0:8000
Alternatively, you can use the Ethernet interface IP, but then it will not also listen on localhost.
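Since http://0.0.0.0:8000 is not an address you can visit, you need the machine's actual interface IP to build the URL. A common discovery sketch (connecting a UDP socket selects the outbound interface without sending any packets; 8.8.8.8 is only used for route selection):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))   # no packet is sent for a UDP connect
print(f"visit http://{s.getsockname()[0]}:8000")
s.close()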
What may also help: you can select the public IP when the server GUI pops up asking for the admin password.
Do the following in a terminal:
1. Install ufw with apt.
2. Add port 8000 to the firewall:
ufw allow 8000/tcp
3. Navigate to where you downloaded web2py and cd web2py.
4. Use nano serverstartup.sh and add the line below:
python2.7 web2py.py -a 'server admin password' -c server.crt -k server.key -i <your device IP address> -p 8000
5. Change the server admin password to any password of your choice.
6. Make the script executable: chmod +x serverstartup.sh
7. Run ./serverstartup.sh in your terminal.
That is it. You can stop the server by pressing Ctrl+C on your keyboard.
