Cannot connect to a gRPC service running in local Docker container - python

I did read the answer to [this similar question][1] but it didn't help to resolve the issue for me.
My setup:
a remote gRPC service;
a .py client that runs directly on the host.
In that configuration everything works fine. However, if I start that remote gRPC service in a local Docker container (the .py client still runs locally), I see:
08:49:00.434005 7 server.cc:53] Server starting
08:49:00.435603 7 server.cc:59] Server listening on 0.0.0.0:9000
The command I use to run the gRPC service:
sudo docker run --rm -it -u dud --net=host --entrypoint=/usr/local/bin/application COOL_APP
Here's a snippet of code of my .py client:
HOST = 'localhost'
PORT = '9000'
with grpc.insecure_channel('{}:{}'.format(HOST, PORT)) as channel:
I receive the following error (AFAIK it means my .py client couldn't connect to the host:port of my Docker service):
Traceback (most recent call last):
File "client.py", line 31, in <module>
for record in reader:
File "/usr/local/lib/python2.7/site-packages/grpc/_channel.py", line 367, in next
return self._next()
File "/usr/local/lib/python2.7/site-packages/grpc/_channel.py", line 358, in _next
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Connect Failed"
debug_error_string = "{"created":"#1550567018.554830000","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2261,"referenced_errors":[{"created":"#1550567018.554828000","description":"Pick Cancelled","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":245,"referenced_errors":[{"created":"#1550567018.554798000","description":"Connect Failed","file":"src/core/ext/filters/client_channel/subchannel.cc","file_line":867,"grpc_status":14,"referenced_errors":[{"created":"#1550567018.554789000","description":"Failed to connect to remote host: OS Error","errno":61,"file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":207,"os_error":"Connection refused","syscall":"connect","target_address":"ipv6:[::1]:9000"}]}]}]}"
I've tried localhost:9000, 0.0.0.0:9000, and :9000 in my .py client, and none of them worked.
I'm not sure whether it makes sense, but when I run:
user-dev:~ user$ lsof -i tcp:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
myservice 53941 user 6u IPv4 0x40c50fcf1d04701d 0t0 TCP localhost:http-alt (LISTEN)
user-dev:~ user$ lsof -i tcp:9000
I.e., my terminal doesn't show anything for tcp:9000 (I run the command above to check whether something is actually listening on localhost:9000).
Update: when I run a hello-world [container][2] with -p 9000:9000 I receive a different error:
debug_error_string = "{"created":"#1550580085.678869000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1036,"grpc_message":"Socket closed","grpc_status":14}"
so I assume something is wrong with my gRPC service image / Docker flags.
[1]: https://stackoverflow.com/questions/43911793/cannot-connect-to-go-grpc-server-running-in-local-docker-container
[2]: https://github.com/crccheck/docker-hello-world

You need to tell Docker to expose the port externally.
Try adding -p 9000:9000 to your docker run command.
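Before digging into gRPC internals, it can help to check whether anything is reachable at all on the mapped port from the host. This is my own sketch of a plain TCP probe, not part of the question's code:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the port the container is expected to publish on the host.
print(port_open('localhost', 9000))
```

If this prints False while the container is running, the port is not published to the host, which matches what the lsof output above shows.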

The right command was:
docker run -p 9000:9000 -it -u dud --entrypoint=/usr/local/bin/application COOL_APP
Basically, I used -p 9000:9000 and removed --net=host.

Your host IP is not localhost; it is Docker's bridge IP address.
You can find Docker's IP with docker network inspect bridge | grep IPv4Address

Related

Unable to access locust on web-UI using docker

I have installed Locust inside Docker and mapped the Docker port as well, but when I run the locust command I get the error below. I am able to run Locust on the command line but not on the web UI; maybe I misunderstood which host or port to use when accessing it.
COMMAND:
locust -f locustfile.py
Error:
OSError: [Errno 97] Address family not supported by protocol
Command:
locust -f locustfile.py --web-host=localhost
Result:
[2019-12-18 11:24:47,101] ABZ-218/INFO/locust.main: Starting web
monitor at http://localhost:8089
[2019-12-18 11:24:47,102] ABZ-218/INFO/locust.main: Starting Locust 0.13.2
but I'm not able to access it in the browser.
I have mapped port 0.0.0.0:8089->80.
So which command should I use when starting Locust, and which address should I use when accessing it from the Chrome browser?
--web-host=localhost is not needed; by default Locust listens on all interfaces. Try removing it and see if that helps. Also make sure the published container port matches the port Locust's web UI listens on (8089 by default); your mapping forwards host port 8089 to container port 80.
You can find the IP address where your application under test is running and pass it as the --host argument, e.g. --host http://127.0.0.1:8000.

Trying to connect to a python socket inside a docker container from host

I have to implement the Berkeley Algorithm for my Distributed Systems class, and I chose to do it in Python with sockets. The master is supposed to run on the host and the slaves in Docker containers.
The closest I got to connecting from the host (as master) to a container (as slave) was exposing the ports with the -p 9000:9000 flag when running the container: the host connects successfully to the container, but neither side sends or receives anything. From that I concluded that the Python socket inside the container simply isn't receiving packets from the port. I have already tried the --net=host flag, but then the host simply can't find the container. One bit of progress was instantiating two Docker containers and pinging one from the other using the hostname provided in /etc/hosts, but this is not what I really want.
I have the whole code on GitHub if you need the source. The code is commented in English, but the documentation is in Portuguese.
Summarising: all I want is to open a socket with Python inside a Docker container and be able to reach it from the host machine. What kind of network configuration do I need to do that?
EDIT: More info
The following bash script instantiates three Docker containers, executes a command in each one to clone my repo, cd into its test folder, and run a script that starts a slave, and then starts the master on the host:
docker run -it -d -p 127.0.0.1:9000:9000/tcp --name slave1 python bash
docker run -it -d -p 127.0.0.1:9001:9001/tcp --name slave2 python bash
docker run -it -d -p 127.0.0.1:9002:9002/tcp --name slave3 python bash
docker exec -t -d slave1 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_1.sh'
sleep 1
docker exec -t -d slave2 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_2.sh'
sleep 1
docker exec -t -d slave3 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_3.sh'
sleep 1
bash test/master.sh
To start each instance I use another bash command
To instantiate the slave I use:
python ../main.py -s 127.0.0.1:9000 175 logs/slave_log_1.txt
The -s flag tells main.py that this is a slave, 127.0.0.1:9000 is the IP and port this slave will listen on (and the master will connect to), and the rest are just configuration (this example is for the first slave).
And to instantiate the master I use:
python ./main.py -m 127.0.0.1:8080 185 15 test/slaves.txt test/logs/master_log.txt
Just like the slave, -m tells main.py that this is a master, 127.0.0.1:8080 is the IP and port the master uses to connect to the slaves, and the rest are just configuration.
When you run a server-type process inside a Docker container, it needs to be configured to listen on the special "all interfaces" address 0.0.0.0. Each container has its own notion of localhost or 127.0.0.1, and if you set a process to listen or bind to 127.0.0.1, it can only be reached from its own localhost which is different from all other containers' localhost and the host's localhost.
In the server command you show, you'd run something like
python ../main.py -s 0.0.0.0:9000 175 logs/slave_log_1.txt
(Strongly consider writing a Dockerfile that describes how to build and start your image. Starting a bunch of empty containers, running git clone in each, and then manually launching processes is a lot of manual work that is lost as soon as you docker rm the container.)
I looked through your code and I see you creating the server socket, binding it to a port, and listening, but I could not find where you call the socket.accept() method.
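Putting the two answers together (bind to 0.0.0.0 and actually call accept()), a minimal sketch of a slave's server loop could look like the following. The names are mine, not from the question's repo:

```python
import socket

def serve_once(host='0.0.0.0', port=9000):
    """Accept a single connection and echo one message back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))      # 0.0.0.0 so Docker's port mapping can reach it
    srv.listen(1)
    conn, addr = srv.accept()   # without accept(), no data ever arrives
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()
    srv.close()
```

The master on the host would then connect to 127.0.0.1:9000, which Docker forwards into the container via -p 9000:9000.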

airflow webserver started but UI doesn't show in browser

I'm trying to use Airflow.
I want to run a local test of the DAGs I wrote. I'm on Windows, so I decided to install Ubuntu WSL following this brief tutorial https://coding-stream-of-consciousness.com/2018/11/06/apache-airflow-windows-10-install-ubuntu/.
Everything seems fine.
I started my db with airflow initdb.
Then I run airflow webserver -p 8080 and it seems to be running. When I go to http://0.0.0.0:8080/ I can't see any user interface. If I try to run airflow webserver again I get
Error: Already running on PID 6244 (or pid file '/home/marcofumagalli/airflow/airflow-webserver.pid' is stale)
so I suppose the webserver is running.
Is it something related to proxy?
Error: Already running on PID 6244 (or pid file '/home/marcofumagalli/airflow/airflow-webserver.pid' is stale)
This means that port 8080 is busy.
Try running the commands below:
sudo lsof -i tcp:8080 will show the processes listening, e.g.:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Python 945 amanraheja 6u IPv4 0xb7fcab59337d7455 0t0 TCP *:http-alt (LISTEN)
Python 1009 amanraheja 6u IPv4 0xb7fcab59337d7455 0t0 TCP *:http-alt (LISTEN)
Python 1052 amanraheja 6u IPv4 0xb7fcab59337d7455 0t0 TCP *:http-alt (LISTEN)
Python 1076 amanraheja 6u IPv4 0xb7fcab59337d7455 0t0 TCP *:http-alt (LISTEN)
Python 96034 amanraheja 6u IPv4 0xb7fcab59337d7455 0t0 TCP *:http-alt (LISTEN)
Kill the PIDs with kill -9 945 and so on,
delete the airflow-webserver.pid file, and start the server again; you will see that it runs fine.
mac docker memory issue
I hit this error while developing on my Mac, and looking at the logs I found an exited with code: 137, which is an OOM code (although perhaps not always).
In any case, I fixed it by increasing Docker's memory limit to 3GB, following these docs:
Hey, actually the same thing happened to me.
I'm sharing the steps I took:
kill the running PID using the kill PID command;
then confirm that you have initialized the db with the airflow db init command;
and create a user using the airflow users create command.
After doing this step by step,
use localhost:8080 in place of 0.0.0.0:8080.
It worked for me.
Running kill -9 PID resolved the issue for me.
If you're running Airflow in Docker, then killing the PID won't help, nor will restarting the service. What you need to do is find the Docker container of Airflow's webserver and remove it like this:
docker ps
CONTAINER ID IMAGE ... PORTS NAMES
25d9df23d557 apache/airflow:2.3.2 ... 8080/tcp airflow-webserver
docker container rm airflow-webserver
Note for macOS users who run Airflow via k8s:
in order to see the UI up and running in your browser at http://localhost:8080/, you also need to run port forwarding: kubectl port-forward POD_ID 8080:8080

How to run two sudo commands subsequently in python paramiko -SSH client linux?

I am trying to SSH to my local machine, 127.0.0.1, which works fine.
Next, I am trying to run two commands through the SSH client. However, I see that the second command fails. I can see that my tap device is created; however, the tap device is not brought up. Here is my code. I tried ifconfig and it works fine.
However, it is the sudo commands that are creating the problem.
self.serverName is 127.0.0.1
def configure_tap_iface(self):
    ssh = paramiko.SSHClient()
    print('SSH on to PC')
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(self.serverName, username='zebra', password='Zebra#2018')
    stdin, stdout, stderr = ssh.exec_command('ifconfig')
    # print(stdout.read())
    session = ssh.get_transport().open_session()
    session.get_pty()
    session.exec_command('sudo ip address add 192.168.0.1/24 dev cloud_tap && sudo ip link set cloud_tap up')
    session.close()
    time.sleep(3)
    ssh.close()
You can use sudo sh -c 'commands' to run multiple shell commands in a single sudo invocation.
session.exec_command("sudo sh -c 'ip address add 192.168.0.1/24 dev cloud_tap && ip link set cloud_tap up'")
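If the inner command list is built dynamically, quoting the sh -c argument by hand is error-prone; Python's shlex.quote can assemble the same string safely. A sketch (the two ip commands are the ones from the question):

```python
import shlex

commands = [
    "ip address add 192.168.0.1/24 dev cloud_tap",
    "ip link set cloud_tap up",
]
# Join with && and quote the whole thing as a single sh -c argument.
cmdline = "sudo sh -c {}".format(shlex.quote(" && ".join(commands)))
print(cmdline)
# → sudo sh -c 'ip address add 192.168.0.1/24 dev cloud_tap && ip link set cloud_tap up'
```

The resulting string can then be passed to session.exec_command as in the answer above.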

How to initiate SSH connection from within a Fabric run command

I have a remote server, say 1.2.3.4, which is running a Docker container that serves SSHD, mapped to port 49222 on the Docker host. To connect to it manually I would do:
workstation$ ssh 1.2.3.4 -t "ssh root@localhost -p 49222" and arrive at the Docker container's SSH command prompt root@f383b4f71eeb:~#
If I run a Fabric command which triggers run('ssh root@localhost -p 49222'), I am instead asked for the root password. However, it does not accept the root password, which I know to be correct, so I suspect the password prompt originates from the host and not the Docker container.
I defined the following task in my fabfile.py:
@task
def ssh():
    env.forward_agent = True
    run('ssh root@localhost -p 49222')
with settings(output_prefix=False, forward_agent=True):
    run('ssh root@localhost -p 49222')
And in the remote server's sshd_config I needed to set:
AllowAgentForwarding yes
In addition, output_prefix=False is useful to remove the [hostname] run: prefix that Fabric adds to the start of every line, which is fairly annoying for every line of a remote shell.
