running docker project on localhost but nothing is coming up - python

I have a Docker image that was built. I want to run it on localhost and see the project on my local machine. When I run the image, I see it in docker ps, but when I go to localhost, I get nothing. I am not sure if it is the port I am trying to run on, or permissions, or something like that...
Here is the command I am running:
λ docker run -p 5050:80 opentab
Here is the output after the image was built:
Successfully built 7e995fbdf2ea
Successfully tagged opentab:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Here is the Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 80
CMD ["python", "manage.py", "runserver"]
I had to remove the following from the requirements file because they were causing errors:
python
sqlite
tk
zx

Two things:
1) manage.py runserver binds to localhost only by default, and that localhost is INSIDE the container, not on your computer. As such, you want to bind the application to all interfaces, like this:
python manage.py runserver 0.0.0.0
2) manage.py runserver binds to port 8000 by default. So, two options. One, you can make manage.py runserver bind to port 80, like so:
python manage.py runserver 0.0.0.0:80
Or, you can change your Dockerfile, replacing EXPOSE 80 with EXPOSE 8000, and change your docker command to docker run -p 5050:8000 opentab
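Putting the second option together, the relevant lines of the Dockerfile would become something like this sketch:
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Rebuild the image, run docker run -p 5050:8000 opentab again, and the app should then be reachable at http://localhost:5050.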

Related

Docker container ports setup incorrectly

I have a simple application I want to dockerize. It is a simple API that works correctly when I run it on my machine and is accessible at http://127.0.0.1:8000/
This is the Dockerfile I created:
FROM python:3.6-slim-stretch
WORKDIR /code
COPY requirements.txt /code
RUN pip install -r requirements.txt --no-cache-dir
COPY . /code
CMD ["uvicorn", "main:app", "--reload"]
I then create the image using the command sudo docker build -t test .
And then run it with sudo docker run -d -p 8000:8000 test
The problem is that I can't access it at http://127.0.0.1:8000/ and I don't know what the problem is.
PS: when I check my container's ports I get 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
I want to know what is causing this problem and how to fix it.
By default, uvicorn listens on 127.0.0.1. But 127.0.0.1 inside the container is private to the container; it doesn't participate in port forwarding.
The solution is to pass uvicorn --host 0.0.0.0, e.g.:
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
For an explanation of why this is the case, with diagrams, see https://pythonspeed.com/articles/docker-connection-refused/
Try accessing http://0.0.0.0:8000
What do you mean, "I can't access it"? Do you get permission denied?
404? What error are you seeing?
Try accessing it from inside the container: $ docker exec -it <container_id> bash and
see if the program is running inside of it: $ curl http://0.0.0.0:8000.
Try curling the container's explicit IP.
Get the IP: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>
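For example, an end-to-end check from the host might look like this sketch (it assumes the container was started from the test image as above, and that the host can reach container IPs directly, which holds on Linux but not always on Docker Desktop):
# find the running container started from the "test" image
CID=$(docker ps -q --filter ancestor=test)
# get its IP on the default bridge network
IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $CID)
curl http://$IP:8000/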

Error 99 connecting to localhost:6379. Cannot assign requested address. when running app connected to redis

I'm building this Python app inside a Docker container, but cannot reach the Redis port from inside it.
Inside my Dockerfile I added the line:
EXPOSE 6379
but running the app raises the error redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I should mention that my app only pops items from the Redis queue.
Edit:
My Dockerfile looks like this:
FROM python:3.9
ARG DATABASE
ARG SSL_TOKEN_PATH
ENV DATABASE=$DATABASE
ENV SSL_TOKEN_PATH=$SSL_TOKEN_PATH
COPY config/db_config.yaml db_config.yaml
COPY app/requirements.txt requirements.txt
ENV DB_CONFIG=config/db_config.yaml
COPY app/pydb/requirements.txt pydb/requirements.txt
RUN pip3 install -r requirements.txt
RUN pip3 install -r pydb/requirements.txt
COPY app/ .
EXPOSE 6379
CMD ["python3", "-u", "process_file.py"]
The solution to this problem was to create a network in Docker using:
docker network create <name_of_network>
As the Redis host, your app must use host.docker.internal instead of localhost.
Finally, run the container inside the created network using:
docker run --network <name_of_network> -d <image_name>
That solved the issue, and I didn't have to create a new container just for Redis. Note that the EXPOSE 6379 line does nothing here: EXPOSE only documents ports the container itself listens on and has no effect on outbound connections to Redis.
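For illustration, a minimal consumer sketch using the redis-py client (the queue name my_queue is a placeholder; this assumes Redis runs on the host and is reachable via host.docker.internal as described above):
import redis

# "localhost" inside the container refers to the container itself,
# so connect through host.docker.internal to reach the host's Redis
r = redis.Redis(host="host.docker.internal", port=6379)

# pop one item from the queue, blocking for up to 5 seconds
item = r.blpop("my_queue", timeout=5)
print(item)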

Django app while dockerizing gives "Starting development server at http://0.0.0.0:8000/" but doesn't show up on browser

So I am a beginner with Docker and Django. What I have here is a Django app which I am trying to dockerize and run. My requirements.txt has only django and gunicorn as packages.
I am getting the output below in the terminal after building and running the Docker image:
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
August 26, 2021 - 06:57:22
Django version 3.2.6, using settings 'myproject.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
Below is my Dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
The commands I am using are:
docker build . -t myproj
docker run -p 8000:8000 myproj
I have tried adding ALLOWED_HOSTS = ['127.0.0.1'] in settings.py, but I am still getting "This site can't be reached. 127.0.0.1 refused to connect."
I am not able to see the "Congratulations" screen.
Please help me out with this.
P.S.: I am using a Windows machine.
Updates
I tried running the below line and got the following output:
docker exec 8e6c4e4a58db curl 127.0.0.1:8000
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"curl\": executable file not found in $PATH": unknown
Without your settings.py, this is hard to figure out. You say you have ALLOWED_HOSTS = ['127.0.0.1'] in there, and that should definitely not be necessary. It might actually be what's blocking your host, since requests from your host may carry a different Host header than the one you listed.
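If you do want to keep the setting while testing locally, a permissive sketch (for debugging only, never production) would be:
# settings.py - accept any Host header while debugging locally
ALLOWED_HOSTS = ['*']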
I've made the following Dockerfile that creates a starter project and runs it:
FROM python:latest
RUN python -m pip install Django
RUN django-admin startproject mysite
WORKDIR /mysite
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
If I build it and run it with
docker build -t mysite .
docker run -d -p 8000:8000 mysite
I can connect to http://localhost:8000/ on my machine and get the default page (I'm on Windows too).
I hope that helps you to locate your issue.
PS: Your curl command fails because curl isn't installed in your image, so its failure has nothing to do with your issue.

Auto-Deploy Docker-Python application with Heroku

I have a Docker application that I build and run with:
docker build -t swagger_server .
docker run -p 8080:8080 swagger_server
The Dockerfile looks like this:
FROM python:3-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 8080
ENTRYPOINT ["python3"]
CMD ["-m", "swagger_server"]
This in and of itself is fairly simple, but I'm struggling to deploy this Dockerfile to Heroku. I have connected Heroku to auto-deploy on every push, but haven't configured anything beyond that. It builds and runs the application successfully, but I think it only runs the Python application without exposing any ports.
Heroku has a documentation page on heroku.yml, but I don't understand how to specify ports or build tags in it.
To give some more context: I want to deploy a Python/Flask application that was auto-generated by swagger-codegen. I can access the API locally, whether I run it within a conda environment or with Docker.
Can somebody explain to me how this should work?
When you deploy with Docker on Heroku, a port you EXPOSE manually in the Dockerfile won't be respected; the port to expose is determined automatically by Heroku.
Your app must listen on $PORT (an environment variable set by Heroku).
Another thing to note: when you start the swagger server, you must allow traffic from all IPs, otherwise it will only be reachable on localhost (and notice that this localhost is the container itself).
import os
# bind to all interfaces, on the port Heroku assigns via $PORT
app.run(host="0.0.0.0", port=int(os.getenv("PORT", 8080)))
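As for heroku.yml itself, a minimal sketch (assuming your Dockerfile sits at the repository root) just declares which Dockerfile builds the web process; no port needs to be specified, since Heroku injects $PORT:
build:
  docker:
    web: Dockerfile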

django runserver hangs in docker-compose up but runs correctly in docker-compose run

Edit
Adding --ipv6 to the command, even though IPv6 isn't properly configured, seems to get past the point where the process hangs.
Problem
Calling docker-compose up executes runserver but hangs at some point after printing the current time.
Calling docker-compose run -p 8000:8000 web python manage.py runserver 0.0.0.0:8000 also starts the server, but does so successfully, and it can be reached at 192.168.99.100:8000.
Questions
How come I can run the server directly from docker-compose in my shell but not from the .yml file?
To me, the content of the .yml file and the docker-compose run line from the shell look strikingly similar.
The only difference I can think of would perhaps be permissions at some level required to properly start a Django server, but I don't know how to address that. Docker runs on a Windows 8.1 machine. The shared folder for my virtual machine is the default c:\Users.
Files
My folder contains a fresh Django project as well as these Docker files. I've tried different versions of Python and Django, but the result is the same. I've cleaned up my images and containers between attempts using:
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
Dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
requirements.txt
Django>=1.8,<2.0
System
My operating system is Windows 8.1.
I was hit by this issue myself, and it seems that you need to allocate a tty and stdin to your container in order to make runserver work:
python:
  image: my-image:latest
  stdin_open: true  # docker run -i
  tty: true         # docker run -t
  build:
    context: ..
    dockerfile: docker/Dockerfile
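Applied to the compose file from the question, a sketch with those two lines added might look like this:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    stdin_open: true  # docker run -i
    tty: true         # docker run -t
    volumes:
      - .:/code
    ports:
      - "8000:8000"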
I had the same issue and could not get it to do anything else. However, when I went to the IP of the Docker machine (docker-machine ip returned 192.168.99.100) and browsed to 192.168.99.100:8000, my Docker container started receiving the requests.
