Unable to access locust on web-UI using docker - python

I have installed Locust inside Docker and mapped the Docker port as well, but when I run the locust command I get the error below. I am able to run Locust on the command line but cannot reach the web UI; maybe I have misunderstood which host or port to use when accessing it.
COMMAND:
locust -f locustfile.py
Error:
OSError: [Errno 97] Address family not supported by protocol
Command:
locust -f locustfile.py --web-host=localhost
Result:
[2019-12-18 11:24:47,101] ABZ-218/INFO/locust.main: Starting web
monitor at http://localhost:8089
[2019-12-18 11:24:47,102] ABZ-218/INFO/locust.main: Starting Locust 0.13.2
but I am not able to access it in the browser.
I have mapped port 0.0.0.0:8089->80
So which command should I use to start Locust, and which address should I use to access it from the Chrome browser?

--web-host=localhost is not needed; by default Locust will listen on all interfaces. Try removing it and see if that helps.

You can find the IP address of where your application is running and pass that as an argument to --host, e.g. --host http://127.0.0.1:8000.
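For reference, a minimal locustfile.py for the 0.13-era API shown in the logs could look like the sketch below; the host URL is only a placeholder and can just as well be supplied with --host.

from locust import HttpLocust, TaskSet, task

class UserTasks(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserTasks
    host = "http://127.0.0.1:8000"  # assumed target; override with --host
    min_wait = 1000
    max_wait = 2000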

Related

Trying to connect to a python socket inside a docker container from host

I have to implement the Berkeley algorithm for my distributed systems class, and I chose to do it in Python with sockets. The master is supposed to run on the host and the slaves in Docker containers.
The closest I got to connecting from the host (as the master) to a container (as a slave) was exposing the ports with the -p 9000:9000 flag when running the container. The host connects successfully to the container but doesn't receive or send anything (and the same goes for the container), so I have come to the conclusion that the Python socket inside the process simply isn't receiving packets from the port. I have already tried the --net=host flag, but then the host simply can't find the container. One bit of progress was instantiating two Docker containers and pinging one from the other using the hostname provided in /etc/hosts, but that is not what I really want.
I have the whole code on GitHub if you need the source. The code is commented in English, but the documentation is in Portuguese.
Summarising: all I want to do is open a socket with Python inside a Docker container and be able to reach it from the host machine. What kind of network configuration do I need to be able to do that?
EDIT: More info
The following bash script instantiates three Docker containers, then runs a command in each of them to clone my repo, cd into it and into a test folder containing a bash script that starts a slave, and finally starts the master on the host:
docker run -it -d -p 127.0.0.1:9000:9000/tcp --name slave1 python bash
docker run -it -d -p 127.0.0.1:9001:9001/tcp --name slave2 python bash
docker run -it -d -p 127.0.0.1:9002:9002/tcp --name slave3 python bash
docker exec -t -d slave1 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_1.sh'
sleep 1
docker exec -t -d slave2 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_2.sh'
sleep 1
docker exec -t -d slave3 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_3.sh'
sleep 1
bash test/master.sh
To start each instance I use another bash command. To instantiate a slave I use:
python ../main.py -s 127.0.0.1:9000 175 logs/slave_log_1.txt
The -s flag tells main.py that this is a slave, 127.0.0.1:9000 is the IP and port this slave is going to listen on (and that the master is going to connect to), and the rest is just configuration (this example is for the first slave).
And to instantiate the master I use:
python ./main.py -m 127.0.0.1:8080 185 15 test/slaves.txt test/logs/master_log.txt
Just like with the slave, -m tells main.py that this is a master, 127.0.0.1:8080 is the IP and port the master is going to use when connecting to the slaves, and the rest is just configuration.
When you run a server-type process inside a Docker container, it needs to be configured to listen on the special "all interfaces" address 0.0.0.0. Each container has its own notion of localhost or 127.0.0.1, and if you set a process to listen or bind to 127.0.0.1, it can only be reached from its own localhost which is different from all other containers' localhost and the host's localhost.
In the server command you show, you'd run something like
python ../main.py -s 0.0.0.0:9000 175 logs/slave_log_1.txt
(Strongly consider building a Dockerfile that describes how to build and start your image. Starting a bunch of empty containers, running git clone in each, and then manually launching processes is a lot of manual work that is lost as soon as you docker rm the container.)
I looked through your code and I see you creating the server socket, binding it to a port, and listening, but I could not find where you call the socket.accept() method.
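Putting both points together, a minimal sketch of the slave's server side could look like this (port 9000 matches the mapping in the question):

import socket

# Bind to all interfaces inside the container so connections published
# with -p 9000:9000 actually reach the process; 127.0.0.1 would not.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen(5)

while True:
    conn, addr = server.accept()  # without accept() nothing is ever exchanged
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # echo back, just to prove the round trip works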

Running HTML file in localhost:8080

I want to run an HTML file in localhost:8080, I'm using the command:
python3 -m http.server
Problem is when I try to open localhost:8080 it downloads the HTML file instead of displaying it.
You want to open http://localhost:8000 instead.
When you use the command you mentioned, python3 -m http.server, it defaults to port 8000, as explained in its startup output:
$ python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
We don't know what different server you have running on port 8080, but apparently it doesn't put Content-type: text/html in its output headers.
Viewing web server HTTP headers is easy; e.g. with wget, use the -S switch.
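If you prefer to stay in Python, a small sketch that prints the response headers (the URL is an assumption; point it at your own host and port):

from urllib.request import urlopen

# Print status and headers to see whether Content-type: text/html is sent.
with urlopen("http://localhost:8080/") as response:
    print(response.status, response.reason)
    for name, value in response.getheaders():
        print("{}: {}".format(name, value))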
You need to add an option that'll put your website on port 8080 because the http.server command defaults to port 8000.
You can do this using:
python3 -m http.server 8080
Then when you go to 0.0.0.0:8080 it should show you your webpage instead of a download prompt.
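The same thing can be done from a short script instead of the command line; this sketch serves the current directory on port 8080 with the standard handler, which does send text/html for .html files:

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory on port 8080; SimpleHTTPRequestHandler picks
# the Content-type from the file extension, so .html files are displayed.
server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Serving HTTP on 0.0.0.0 port 8080 ...")
server.serve_forever()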
Also, you might have another instance of http.server running on port 8080.
You can find the PID of this task using:
ps -A | grep "python3"
Which will list the running python3 processes along with their PIDs. Then you could kill it using:
kill <PID-FOR-PYTHON3-INSTANCE>
In my case the process that was running on port 8080 had PID 6856, so:
kill 6856
Or, if you don't mind, just kill all Python3 tasks using:
killall python3
Which in my case would kill both Python3 tasks.
WARNING: be very, VERY careful before running the killall command, because this command will NOT save your work.
Good luck.

Cannot connect to a gRPC service running in local Docker container

I did read the answer to [this similar question][1] but it didn't help to resolve the issue for me.
My setup:
a remote gRPC service;
a .py client that runs directly on the host.
In that configuration everything works fine. However, if I start that same gRPC service in a local Docker container (the .py client still runs locally), it logs:
08:49:00.434005 7 server.cc:53] Server starting
08:49:00.435603 7 server.cc:59] Server listening on 0.0.0.0:9000
The command I use to run the gRPC service:
sudo docker run --rm -it -u dud --net=host --entrypoint=/usr/local/bin/application COOL_APP
Here's a snippet of code of my .py client:
import grpc

HOST = 'localhost'
PORT = '9000'
with grpc.insecure_channel('{}:{}'.format(HOST, PORT)) as channel:
    ...  # stub creation and RPC calls elided in the question
I receive the following error (AFAIK it means my .py client couldn't connect to the host:port of my Docker service):
Traceback (most recent call last):
File "client.py", line 31, in <module>
for record in reader:
File "/usr/local/lib/python2.7/site-packages/grpc/_channel.py", line 367, in next
return self._next()
File "/usr/local/lib/python2.7/site-packages/grpc/_channel.py", line 358, in _next
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Connect Failed"
debug_error_string = "{"created":"#1550567018.554830000","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2261,"referenced_errors":[{"created":"#1550567018.554828000","description":"Pick Cancelled","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":245,"referenced_errors":[{"created":"#1550567018.554798000","description":"Connect Failed","file":"src/core/ext/filters/client_channel/subchannel.cc","file_line":867,"grpc_status":14,"referenced_errors":[{"created":"#1550567018.554789000","description":"Failed to connect to remote host: OS Error","errno":61,"file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":207,"os_error":"Connection refused","syscall":"connect","target_address":"ipv6:[::1]:9000"}]}]}]}"
I've tried setting localhost:9000, 0.0.0.0:9000, and :9000 in my .py client, and none of them worked.
I'm not sure whether it makes sense, but when I run:
user-dev:~ user$ lsof -i tcp:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
myservice 53941 user 6u IPv4 0x40c50fcf1d04701d 0t0 TCP localhost:http-alt (LISTEN)
user-dev:~ user$ lsof -i tcp:9000
I.e., my terminal doesn't show anything for tcp:9000 (I run the command above to check whether something actually listens to localhost:9000).
Update: when I run a hello-world [container][2] with -p 9000:9000 I receive a different error:
debug_error_string = "{"created":"#1550580085.678869000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1036,"grpc_message":"Socket closed","grpc_status":14}"
so I assume something is wrong with my gRPC service image / Docker flags.
[1]: https://stackoverflow.com/questions/43911793/cannot-connect-to-go-grpc-server-running-in-local-docker-container
[2]: https://github.com/crccheck/docker-hello-world
You need to tell docker to expose the port externally.
Try adding -p 9000:9000 to your docker run command.
The right command was:
docker run -p 9000:9000 -it -u dud --entrypoint=/usr/local/bin/application COOL_APP
Basically used -p 9000:9000 and removed --net=host.
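If you want to confirm the channel actually comes up before issuing any calls, a quick sketch with grpc.channel_ready_future (assuming the -p 9000:9000 mapping, so localhost:9000 from the host):

import grpc

# Block until the channel to the published port is ready, or time out.
channel = grpc.insecure_channel("localhost:9000")
try:
    grpc.channel_ready_future(channel).result(timeout=10)
    print("channel is ready")
except grpc.FutureTimeoutError:
    print("could not connect to localhost:9000")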
Your host IP is not localhost; it is Docker's IP address. You can find the Docker IP with "docker network inspect bridge | grep IPv4Address".

Flask Running Issue: socket.error: [Errno 98] Address already in use

I get the error "socket.error: [Errno 98] Address already in use" if I run Flask from Sublime Text or PyCharm (my OS is Ubuntu 16.04). But if I run Flask from my Ubuntu terminal, it works. I understand that the port is being used by another service, so I tried to solve the issue with the help of Google/Stack Overflow.
# ps ax | grep 5000 // or # ps ax | grep name_of_service
# kill 3750 // or # killall name_of_service
But nothing changed. I only hit this problem when trying to run it from the Sublime or PyCharm IDE.
A simple way is to use fuser.
fuser <yourport>/tcp #this will fetch the process/service
Replace <yourport> with the port you want to use
#to kill the process using <yourport> add `-k` argument
fuser <yourport>/tcp -k
In your case
fuser 5000/tcp -k
Now you can run flask with that port.
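If you just want to get unblocked without killing anything, the Flask dev server also accepts an explicit port; a minimal sketch (5001 is an arbitrary free port, not anything your project requires):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # 5001 is an assumption; any free port above 1024 works without root.
    app.run(host="127.0.0.1", port=5001)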
PyCharm allows you to edit the run configuration: open the configuration and check the box (top-right corner) saying "singleton instance". That way, every time you restart the server, the previous process holding port 5000 is closed and the port is freed again.

How to run python bottle on port 80?

When trying to run python bottle on port 80 I get the following:
socket.error: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions
My goal is to run the web server on port 80 so the URLs will be nice and tidy, without any need to specify the port,
for example:
http://localhost/doSomething
instead of
http://localhost:8080/doSomething
Any ideas?
Thanks
Exactly as the error says: you need permission to run something on port 80; a normal user cannot do it. You can execute the Bottle web app as root (or maybe as www-data) and it should be fine as long as the port is free.
But taking security (and stability) into consideration, you should look at different ways of deployment, for example nginx together with gunicorn.
Gunicorn Docs
Nginx Docs
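For completeness, the port is just a parameter to Bottle's run(); a minimal sketch that works on 8080 as an unprivileged user, and on 80 only with sufficient privileges:

from bottle import route, run

@route("/doSomething")
def do_something():
    return "did something"

# Port 80 needs root (or an equivalent capability); 8080 does not.
run(host="0.0.0.0", port=80)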
Check your system's firewall settings.
Check whether another application is already using port 80, using one of the following commands (there is also a short Python check after them):
On unix: netstat -an | grep :80
On Windows: netstat -an | findstr :80
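The same check can be reproduced from Python with a quick bind test; binding needs the same privileges the web server would need, so the sketch below fails in the same way:

import socket

# Try to bind port 80: if another process holds it, or the user lacks
# permission, this raises the same kind of socket error the server hits.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    probe.bind(("0.0.0.0", 80))
    print("port 80 is free and bindable")
except OSError as exc:
    print("cannot bind port 80:", exc)
finally:
    probe.close()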
According to Windows Sockets Error Codes:
WSAEACCES 10013
Permission denied.
An attempt was made to access a socket in a way forbidden by its
access permissions. An example is using a broadcast address for sendto
without broadcast permission being set using setsockopt(SO_BROADCAST).
Another possible reason for the WSAEACCES error is that when the bind
function is called (on Windows NT 4.0 with SP4 and later), another
application, service, or kernel mode driver is bound to the same
address with exclusive access. Such exclusive access is a new feature
of Windows NT 4.0 with SP4 and later, and is implemented by using the
SO_EXCLUSIVEADDRUSE option.
Sometimes you don't want to install nginx; Python with gunicorn plus supervisor is a viable alternative, but you need a few tricks to make it work.
I assume you already know how to install supervisor; after that, install the requirements:
pip3 install virtualenv
mkdir /home/user/.envpython
virtualenv /home/user/.envpython
source /home/user/.envpython/bin/activate
cd /home/user/python-path/
pip3 install -r requirements
Create a supervisor config file like this:
nano /etc/supervisord.d/python-file.conf
and fill it in based on this example, adjusting the program section you need; remember that plain python3 has to run on ports > 1024:
;example: plain python3 under supervisor, running in the background
[program:python]
environment=HOME="/home/user",USER="user"
user=user
directory = /home/user/python-path
command = python3 /home/user/python-path/main.py
priority = 900
autostart = true
autorestart = true
stopsignal = TERM
;redirect_stderr = true
stdout_logfile = /home/user/.main-python-stdout.log
stderr_logfile = /home/user/.main-python-stderr.log
;example: gunicorn under supervisor on port 80
[program:gunicorn]
;environment=HOME="/home/user",USER="user"
;user=user
directory = /home/user/python-path
command = bash /home/user/.scripts/supervisor-initiate.sh
priority = 900
autostart = true
autorestart = true
stopsignal = TERM
;redirect_stderr = true
stdout_logfile = /home/user/.main-python-stdout.log
stderr_logfile = /home/user/.main-python-stderr.log
Then create the file
nano /home/user/.scripts/supervisor-initiate.sh
with the following content
source /home/user/.envpython/bin/activate
cd /home/user/python-path
gunicorn -w 1 -t 120 -b 0.0.0.0:80 main:app
I assume your Python file is called main.py and that it exposes a Flask or Django application object called "app".
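For reference, a minimal main.py matching the main:app target in that gunicorn command could look like this (the Flask app here is an assumption; any WSGI application object named app would do):

from flask import Flask

# Hypothetical minimal main.py; gunicorn loads the module "main" and
# looks up the WSGI callable named "app".
app = Flask(__name__)

@app.route("/")
def index():
    return "ok"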
Finally, just restart the supervisord process:
systemctl restart supervisord
and you have the app running under gunicorn on port 80. I'm posting this because it took me a very long time to find this solution.
Hoping it works for someone.
