Docker Port Mapping Doesn't render page in browser - python

I'm trying to containerize a Node.js app into a container image.
According to the Docker documentation, the correct way to map the host port to the container port is hostPort:containerPort.
Using the same, I've tried to run the command:
docker run -i -t -p 3007:8080
where 3007 is the port my Node.js app is listening on and 8080 is the host port (my laptop).
But I keep getting the error "localhost refused to connect" when I hit localhost:8080 in my browser.
It wasn't until I swapped these port numbers:
docker run -i -t -p 8080:3007
that the actual app (listening on port 3007) rendered in my browser.
I'm confused as to why this happens. Am I missing something?
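What happened: `-p` always reads hostPort:containerPort, so `-p 3007:8080` published laptop port 3007 and forwarded it to container port 8080, where nothing was listening. The working order, shown as an equivalent compose fragment (the image name `my-node-app` is illustrative):

```yaml
# docker-compose.yml equivalent of: docker run -i -t -p 8080:3007 my-node-app
services:
  app:
    image: my-node-app      # illustrative image name
    ports:
      - "8080:3007"         # hostPort:containerPort
```

With this mapping, http://localhost:8080 on the laptop reaches the app listening on port 3007 inside the container.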

Related

microservices: client service and server service (FastAPI) running as Docker containers

I need to build a small program with a microservice architecture:
server service (Python FastAPI framework)
I run it with this Dockerfile command:
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
client service: a simple Python CLI (textual) that reads a username from the CLI and connects to the server with GET/POST HTTP requests
username = input("Please insert your username: ")
log.info(f"{username}")
I run it with this Dockerfile command:
CMD ["python", "./main.py"]
I am not sure how to run my client with Docker so that main.py runs without the container exiting.
When I run the client and the server with venv from two different terminals, everything works as expected and they connect (because both of them are on my machine).
With Docker, I get an error related to the username I try to input:
EOFError: EOF when reading a line
Even if I delete the input, I still get an error (conn = connection.create_connection... Failed to establish a new connection), as if the client fails to connect to my server when it is in an isolated container.
To enable input from the terminal, you need to add the following to docker-compose:
tty: true # docker run -t
stdin_open: true # docker run -i
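Combining that with the networking issue: each compose service gets its own network namespace, so the client must reach the server by its service name, not localhost. A minimal docker-compose.yml sketch (the service names, build paths, and the SERVER_URL variable are all illustrative assumptions):

```yaml
services:
  server:
    build: ./server          # Dockerfile whose CMD runs uvicorn on 0.0.0.0:8000
    ports:
      - "8000:8000"          # only needed to reach the API from the host
  client:
    build: ./client          # Dockerfile whose CMD runs ./main.py
    depends_on:
      - server
    stdin_open: true         # docker run -i: keeps stdin open so input() works
    tty: true                # docker run -t: allocates a terminal
    environment:
      SERVER_URL: http://server:8000   # compose DNS resolves the service name
```

Start the interactive client with `docker compose run client` so your terminal is attached; inside the client, send requests to http://server:8000 rather than localhost.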

access jupyter running on Docker container installed on unix server

It's my first week working with Docker. I have just arrived at my new job and found a running container on a server, and on this container
I have installed Jupyter.
PS: I don't know which port my container is using; there is no EXPOSE in the code.
I'm unable to access the jupyter from my localhost.
I did:
# Access server
ssh data_team@word-server-prod
# on server
docker ps
f04ccccccc7 registry.gitlab.com/world/jupyter_project:9ad9XXXXXXXXXXXXXXXXXXXXXXXXXXX8 "/bin/bash -c '/usr/" 12 weeks ago Up 12 weeks jupyter_project
#
# access to container
docker container exec -it f04ccccccc7 bash
#inside container
jupyter notebook --ip 0.0.0.0 --no-browser --allow-root
**I got**
http://127.0.0.1:8889/?token=16125997dXXXXXXXXXXXXXXXXXXXXXXXXXX24cd5d99
**from the server I did**
curl http://127.0.0.1:8889/?token=16125997dXXXXXXXXXXXXXXXXXXXXXXXXXX24cd5d99
and I got: curl: (7) Failed to connect to 127.0.0.1 port 8889: Connection refused
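Since the container was started without any published ports (the `docker ps` output shows none), Jupyter's port 8889 exists only inside the container's network namespace, so curl on the server host is refused. One way out, sketched as a compose fragment assuming you are allowed to recreate the container (the image tag is a placeholder):

```yaml
services:
  jupyter_project:
    image: registry.gitlab.com/world/jupyter_project:<tag>   # placeholder tag
    command: jupyter notebook --ip 0.0.0.0 --port 8889 --no-browser --allow-root
    ports:
      - "8889:8889"   # publish to the server host so curl 127.0.0.1:8889 works
```

From your own machine you would then still need an SSH tunnel to the server (local port forwarding of 8889) to open the notebook in a browser.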

Why can't I access a running Docker container in the browser?

Here is my dockerfile:
FROM python:3.8
WORKDIR /locust
RUN pip3 install locust
COPY ./ /locust/
EXPOSE 8089
CMD ["locust", "-f", "locustfile.py"]
Here is the response:
Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
Starting Locust 1.2.3
But when I try to access it in the browser - it doesn't load. I feel like I might be missing something simple, but cannot find it.
This statement,
EXPOSE 8089
will only expose your port for inter-container communication, not to the host.
To allow the host to reach the container port, you need to bind the host and container ports in the docker run command as follows:
docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME
which in your case will be
docker run -p 8089:8089 IMAGE_NAME
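The same distinction expressed in compose terms, for reference (the service name is illustrative): `expose` corresponds to the Dockerfile's EXPOSE and is container-to-container only, while `ports` is the `-p` publish that the browser needs:

```yaml
services:
  locust:
    build: .
    expose:
      - "8089"          # like EXPOSE 8089: reachable by other containers only
    ports:
      - "8089:8089"     # published to the host: http://localhost:8089
```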

Gunicorn not showing up locally, not using any ports (Django) [duplicate]

I'm running gunicorn inside a Docker container. I know this works because sshing into it and curling localhost:8000/things in the Docker container gives me the response I want. However, I am not able to reach this on my host, despite Docker telling me the port has been mapped. What gives?
I ran
docker run -d -p 80:8000 myapp:version1.1 /bin/bash -c 'gunicorn things:app'
docker ps gives me
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
614df1f2708e myapp:version1.1 "/bin/bash -c 'gunico" 6 minutes ago Up 6 minutes 5000/tcp, 0.0.0.0:80->8000/tcp evil_stallman
On my host, curling localhost/things gives me
curl: (52) Empty reply from server
However, when I docker exec -t -i 614df1f2708e /bin/bash and then curl localhost:8000/things, I successfully get my correct response.
Why isn't docker mapping my port 8000 correctly?
When you publish a port, Docker will forward requests into the container, but the container needs to be listening for them. The container has an IP address from the Docker network, and your app needs to be listening on that address.
Check your gunicorn bind setting - if it's only listening on 127.0.0.1:8000 then it's not binding to the container's IP address, and won't get requests from outside. 0.0.0.0:8000 is safe as it will bind to all addresses.
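The bind-address point can be demonstrated without Docker. On Linux the whole 127.0.0.0/8 range is loopback, so 127.0.0.2 below stands in for the container-side address that Docker's port mapping forwards to; this is a sketch (ports are chosen by the OS):

```python
import socket
import threading

def listen_on(bind_addr):
    # Bind to an OS-chosen free port and listen; returns the port number.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    def accept_loop():
        # Keep accepting so repeated connection tests all succeed.
        while True:
            try:
                conn, _ = srv.accept()
                conn.close()
            except OSError:
                return
    threading.Thread(target=accept_loop, daemon=True).start()
    return port

def reachable(host, port):
    # True if a plain TCP connect to host:port succeeds within 1 second.
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

# Like gunicorn bound to 127.0.0.1: only that exact address answers.
p1 = listen_on("127.0.0.1")
print(reachable("127.0.0.1", p1))  # True
print(reachable("127.0.0.2", p1))  # False: connection refused

# Like gunicorn bound to 0.0.0.0: every local address answers.
p2 = listen_on("0.0.0.0")
print(reachable("127.0.0.2", p2))  # True
```

Docker's published port forwards to the container's eth0 address, which plays the role of 127.0.0.2 here; a server bound only to 127.0.0.1 never sees that traffic.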

Docker container with Python Web App - Connect to Postgres on Host machine (OSX)

I can connect to my local Postgres DB in my web app, but NOT if I am running the web app inside of a Docker container.
Web app running inside of a Docker container
Postgres running in the Host machine
I am not sure if it is related to the Postgres connection settings or to the Docker network settings.
Follow my settings and commands:
Host:
OSX 10.11.6
PostgreSQL 9.6
Docker container
Docker 1.13.1
Docker-machine 0.9.0
Docker container OS: python:3.6.0-alpine
Python 3.6 + psycopg2==2.7
postgresql.conf:
listen_addresses = '*'
pg_hba.conf
host all all 127.0.0.1/32 trust
host all all 0.0.0.0/0 trust
host all all 172.17.0.0/24 trust
host all all 192.168.99.0/24 trust
With Docker network in HOST mode
docker run -i --net=host -h 127.0.0.1 -e POSTGRES_URI=postgresql://127.0.0.1:5432/db my/image
Error:
could not connect to server: Connection refused
Is the server running
on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
With Docker network in BRIDGE mode
docker run -i --add-host=dockerhost:`docker-machine ip ${DOCKER_MACHINE}` -e POSTGRES_URI=postgresql://dockerhost:5432/db -p 8000:8000 -p 5432:5432 my/image
Error:
server closed the connection unexpectedly
This probably means the
server terminated abnormally before or while processing the request.
Any ideas?
There is a note about doing this in the docs:
I want to connect from a container to a service on the host
The Mac has a changing IP address (or none if you have no network access). Our current recommendation is to attach an unused IP to the lo0 interface on the Mac; for example: sudo ifconfig lo0 alias 10.200.10.1/24, and make sure that your service is listening on this address or 0.0.0.0 (ie not 127.0.0.1). Then containers can connect to this address.
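Applying that recipe, a sketch of the bridge-mode setup in compose form (10.200.10.1 is the lo0 alias from the quoted note; the image name and URI follow the question):

```yaml
# After running on the Mac host: sudo ifconfig lo0 alias 10.200.10.1/24
services:
  web:
    image: my/image
    ports:
      - "8000:8000"
    environment:
      POSTGRES_URI: postgresql://10.200.10.1:5432/db   # host Postgres via lo0 alias
```

You would likely also need a matching pg_hba.conf entry such as `host all all 10.200.10.0/24 trust`. Note that publishing 5432 with -p is unnecessary here: Postgres runs on the host, not in the container.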
