It's my first week working with Docker. I've just started a new job and found a running container on a server; I have installed Jupyter in this container.
PS: I don't know which port my container is running on; there is no EXPOSE in the code.
I'm unable to access Jupyter from my localhost.
I did:
# Access the server
ssh data_team@word-server-prod
# on server
docker ps
f04ccccccc7 registry.gitlab.com/world/jupyter_project:9ad9XXXXXXXXXXXXXXXXXXXXXXXXXXX8 "/bin/bash -c '/usr/" 12 weeks ago Up 12 weeks jupyter_project
#
# access the container
docker container exec -it f04ccccccc7 bash
# inside the container
jupyter notebook --ip 0.0.0.0 --no-browser --allow-root
**I got:**
http://127.0.0.1:8889/?token=16125997dXXXXXXXXXXXXXXXXXXXXXXXXXX24cd5d99
**from the server I did:**
curl http://127.0.0.1:8889/?token=16125997dXXXXXXXXXXXXXXXXXXXXXXXXXX24cd5d99
and I got: curl: (7) Failed to connect to 127.0.0.1 port 8889: Connection refused
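(A sketch of diagnostics that might help here, assuming the default bridge network; the container name comes from the docker ps output above, and the container IP 172.17.0.2 is only an example.)
# list the published port mappings; most likely empty, which is why curl on the host is refused
docker port jupyter_project
# find the container's IP on the Docker bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' jupyter_project
# Jupyter is bound to 0.0.0.0 inside the container, so it should answer on that IP from the server
curl http://172.17.0.2:8889/
# from the laptop, an SSH tunnel to that address makes it reachable as http://localhost:8889
ssh -L 8889:172.17.0.2:8889 data_team@word-server-prod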
Related
I'm trying to containerize a Node.js app into a container image.
The Docker docs say the correct way to map the host port to the container port is hostport:containerport.
Following that, I tried to run the command:
docker run -i -t -p 3007:8080
where 3007 is the port my Node.js app is listening on and 8080 is the host port (my laptop).
But I keep getting the error "localhost refused to connect" when I hit localhost:8080 in my browser.
It wasn't until I swapped these port numbers, like:
docker run -i -t -p 8080:3007
that the actual app (listening on port 3007) rendered in my browser.
I'm confused as to why this happens. Am I missing something?
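For reference, a minimal sketch of how the -p ordering reads, assuming an image named my-node-app (a placeholder) whose server listens on port 3007 inside the container:
# -p maps HOST_PORT:CONTAINER_PORT
# the app listens on 3007 *inside* the container, so 3007 belongs on the right-hand side
docker run -i -t -p 8080:3007 my-node-app
# http://localhost:8080 on the laptop is now forwarded to port 3007 in the container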
Here is my Dockerfile:
FROM python:3.8
WORKDIR /locust
RUN pip3 install locust
COPY ./ /locust/
EXPOSE 8089
CMD ["locust", "-f", "locustfile.py"]
Here is the response:
Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
Starting Locust 1.2.3
But when I try to access it in the browser, it doesn't load. I feel like I'm missing something simple, but I can't find it.
This statement,
EXPOSE 8089
will only expose your port for inter-container communication, not to the host.
To allow the host to reach the container port, you need to publish it in the docker run command as follows:
docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME
which in your case will be
docker run -p 8089:8089 IMAGE_NAME
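As a quick check, a sketch of building and running it with the port published (the image tag locust-image is just an example):
# build the image from the Dockerfile above and publish container port 8089 on host port 8089
docker build -t locust-image .
docker run -p 8089:8089 locust-image
# in another terminal, the mapping should now show up under PORTS
docker ps
# and the Locust web UI should answer on the host
curl http://localhost:8089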
I'm running gunicorn inside a Docker container. I know this works, because exec-ing into the container and curling localhost:8000/things gives me the response I want. However, I am not able to reach it on my host, despite Docker telling me the port has been mapped. What gives?
I ran
docker run -d -p 80:8000 myapp:version1.1 /bin/bash -c 'gunicorn things:app'
docker ps gives me
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
614df1f2708e myapp:version1.1 "/bin/bash -c 'gunico" 6 minutes ago Up 6 minutes 5000/tcp, 0.0.0.0:80->8000/tcp evil_stallman
On my host, curling localhost/things gives me
curl: (52) Empty reply from server
However, when I docker exec -t -i 614df1f2708e /bin/bash and then curl localhost:8000/things, I successfully get my correct response.
Why isn't docker mapping my port 8000 correctly?
When you publish a port, Docker will forward requests into the container, but the container needs to be listening for them. The container has an IP address from the Docker network, and your app needs to be listening on that address.
Check your gunicorn bind setting - if it's only listening on 127.0.0.1:8000 then it's not binding to the container's IP address, and won't get requests from outside. 0.0.0.0:8000 is safe as it will bind to all addresses.
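For example, a sketch of the original run command with the bind address made explicit (the image tag and things:app are taken from the question):
# bind gunicorn to all interfaces so the published 80->8000 mapping can reach it
docker run -d -p 80:8000 myapp:version1.1 /bin/bash -c 'gunicorn --bind 0.0.0.0:8000 things:app'
# from the host, this should now reach gunicorn inside the container
curl http://localhost/things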
I have some problems running Django on an ECS task.
I want to have a Django webapp running on an ECS task and accessible to the world.
Here are the symptoms:
When I run an ECS task using Django's python manage.py runserver 0.0.0.0:8000 as the entry point for my container, I get a connection refused response.
When I run the task with Gunicorn, using gunicorn --bind 0.0.0.0:8000 my-project.wsgi, I get an empty reply.
I don't see any logs in CloudWatch, and I can't find any server logs when I SSH into the ECS instance.
Here are some of my settings related to that kind of issue:
I have set my ECS instance security group inbound rule to All TCP | TCP | 0 - 65535 | 0.0.0.0/0 to be sure it's not a firewall problem. And I can assert that, because a Ruby on Rails server runs perfectly on the same ECS instance.
In my container task definition I set one port mapping to 80:8000 and another to 8000:8000.
In my settings.py, I have set ALLOWED_HOSTS = ["*"] and DEBUG = False.
Locally, my server runs perfectly with the same Docker image when doing docker run -it -p 8000:8000 my-image gunicorn --bind=0.0.0.0:8000 wsgi, or the same with manage.py runserver.
Here is my Dockerfile for a Gunicorn web server:
FROM python:3.6
WORKDIR /usr/src/my-django-project
COPY my-django-project .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn","--bind","0.0.0.0:8000","wsgi"]
# CMD ["python","manage.py", "runserver", "0.0.0.0:8000"]
Any help would be greatly appreciated!
To help you debug:
What is the status of the task when you are trying to access your webapp?
Figure out which instance the task is running on and run docker ps on that ECS instance to find the running container.
If you can see the container running on the instance, try accessing your webapp directly on the server with a command like curl http://localhost:8000 or wget.
If your container is not running, try docker ps -a to see which one has just stopped, and check its logs with docker logs -f.
With this approach you cut out all the AWS firewall settings, so you can see whether the container itself is configured correctly; I think it will make tracking down the issue easier (see the sketch after these steps).
Once you've confirmed the container is running fine and you can reach it via localhost, you can work on the security group inbound/outbound rules.
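A rough sketch of those steps on the ECS instance (the container ID is a placeholder):
# is the Django container running, and what are its port mappings?
docker ps
# hit the container from the instance itself, bypassing the load balancer and security groups entirely
curl http://localhost:8000
# if nothing is running, find the container that just exited and follow its logs
docker ps -a
docker logs -f <container_id>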
I can connect to my local Postgres DB in my web app, but NOT if I am running the web app inside of a Docker container.
Web app running inside of a Docker container
Postgres running in the Host machine
I am not sure if it is related to the Postgres connection settings or to the Docker network settings.
Follow my settings and commands:
Host:
OSX 10.11.6
PostgreSQL 9.6
Docker container
Docker 1.13.1
Docker-machine 0.9.0
Docker container OS: python:3.6.0-alpine
Python 3.6 + psycopg2==2.7
postgresql.conf:
listen_addresses = '*'
pg_hba.conf
host all all 127.0.0.1/32 trust
host all all 0.0.0.0/0 trust
host all all 172.17.0.0/24 trust
host all all 192.168.99.0/24 trust
With Docker network in HOST mode
docker run -i --net=host -h 127.0.0.1 -e POSTGRES_URI=postgresql://127.0.0.1:5432/db my/image
Error:
could not connect to server: Connection refused
Is the server running
on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
With Docker network in BRIDGE mode
docker run -i --add-host=dockerhost:`docker-machine ip ${DOCKER_MACHINE}` -e POSTGRES_URI=postgresql://dockerhost:5432/db -p 8000:8000 -p 5432:5432 my/image
Error:
server closed the connection unexpectedly
This probably means the
server terminated abnormally before or while processing the request.
Any ideas?
There is a note about doing this in the docs:
I want to connect from a container to a service on the host
The Mac has a changing IP address (or none if you have no network access). Our current recommendation is to attach an unused IP to the lo0 interface on the Mac; for example: sudo ifconfig lo0 alias 10.200.10.1/24, and make sure that your service is listening on this address or 0.0.0.0 (ie not 127.0.0.1). Then containers can connect to this address.
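A sketch of how that note maps onto this setup (10.200.10.1 is the example alias from the docs; the database name db and the image name come from the question):
# on the Mac host: attach a stable, unused address to lo0
sudo ifconfig lo0 alias 10.200.10.1/24
# postgresql.conf already has listen_addresses = '*', so Postgres accepts connections on this address,
# and the 0.0.0.0/0 line in pg_hba.conf already allows it
# then point the container at that address instead of 127.0.0.1 or dockerhost
docker run -i -e POSTGRES_URI=postgresql://10.200.10.1:5432/db -p 8000:8000 my/image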