Docker container ports set up incorrectly - Python

I have a simple application I want to dockerize: a small API that works correctly when I run it on my machine and is accessible at http://127.0.0.1:8000/.
This is the Dockerfile I created:
FROM python:3.6-slim-stretch
WORKDIR /code
COPY requirements.txt /code
RUN pip install -r requirements.txt --no-cache-dir
COPY . /code
CMD ["uvicorn", "main:app", "--reload"]
I then build the image with this command: sudo docker build -t test .
And then run it like this: sudo docker run -d -p 8000:8000 test
The problem is that I can't access it at http://127.0.0.1:8000/ and I don't know why.
PS: when I check my container ports I get 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
I want to know what is causing this problem and how to fix it.

By default uvicorn listens on 127.0.0.1. Inside the container, 127.0.0.1 is private; it doesn't participate in port forwarding.
The solution is to pass --host 0.0.0.0 to uvicorn, e.g.:
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
For an explanation of why this is the case, with diagrams, see https://pythonspeed.com/articles/docker-connection-refused/
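For reference, a minimal main.py that this CMD would serve could look like the following (a sketch assuming FastAPI, the usual pairing with uvicorn; your actual main:app is whatever your project defines):
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    # Once uvicorn binds to 0.0.0.0 inside the container and the host maps
    # -p 8000:8000, this is reachable at http://127.0.0.1:8000/ on the host.
    return {"hello": "world"}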

Try accessing http://0.0.0.0:8000
What do you mean by "I can't access it"? Do you get permission denied? A 404? What error are you seeing?
Try accessing it inside the container: $ docker exec -it <container> bash, and
see if the program is running inside it: $ curl http://127.0.0.1:8000
Try curling the container's explicit IP.
Get the IP: docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>

Related

Error 99 connecting to localhost:6379. Cannot assign requested address. when running app connected to redis

I'm building this Python app inside a Docker container, but I cannot access the Redis port from inside it.
Inside my Dockerfile I added the line:
EXPOSE 6379
but running the app raises the error redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I have to mention that my app only pops items from the Redis queue.
Edit:
My Dockerfile looks like this:
FROM python:3.9
ARG DATABASE
ARG SSL_TOKEN_PATH
ENV DATABASE=$DATABASE
ENV SSL_TOKEN_PATH=$SSL_TOKEN_PATH
COPY config/db_config.yaml db_config.yaml
COPY app/requirements.txt requirements.txt
ENV DB_CONFIG=config/db_config.yaml
COPY app/pydb/requirements.txt pydb/requirements.txt
RUN pip3 install -r requirements.txt
RUN pip3 install -r pydb/requirements.txt
COPY app/ .
EXPOSE 6379
CMD ["python3", "-u", "process_file.py"]
The solution to this problem was to create a network in Docker using:
docker network create <name_of_network>
As the Redis host, you must provide host.docker.internal instead of localhost.
Finally, run the container inside the created network using:
docker run --network <name_of_network> -d <image_name>
That solved the issue, and I didn't have to create a new container just for Redis.
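As a sketch of the client side, assuming redis-py and the queue-popping the question describes (the key name task_queue is made up):
import redis

# host.docker.internal resolves to the host machine from inside the
# container, so a Redis server on the host is reachable without localhost.
r = redis.Redis(host="host.docker.internal", port=6379, decode_responses=True)

# BLPOP blocks until an item is available on the (hypothetical) queue key.
key, item = r.blpop("task_queue")
print(item)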

Auto-Deploy Docker-Python application with Heroku

I have a Docker application that I build and run with:
docker build -t swagger_server .
docker run -p 8080:8080 swagger_server
The Dockerfile looks like this:
FROM python:3-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 8080
ENTRYPOINT ["python3"]
CMD ["-m", "swagger_server"]
This in and of itself is fairly simple, but I'm struggling with deploying this Dockerfile to Heroku. I have connected Heroku to auto-deploy on every push, but haven't configured anything beyond that. It builds and runs the application successfully, but I think it only runs the Python application without exposing any ports.
Heroku has a documentation page on heroku.yml, but I don't understand how to specify ports or build tags in it.
To give some more context: I want to deploy a Python/Flask application that was auto-generated by swagger-codegen. I can access the API locally, whether I run it within a conda environment or with Docker.
Can somebody explain to me how that should work?
When you deploy with Docker on Heroku, an EXPOSE instruction in the Dockerfile won't be respected; the port to expose is determined automatically by Heroku.
Your app must listen on $PORT (an environment variable set by Heroku).
Another thing to note: when you start the swagger server, you must allow traffic from all IPs, otherwise it will only be reachable on localhost (and note that this localhost is the container itself).
import os
# PORT is injected by Heroku; fall back to 8080 for local docker runs
app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
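With the python-flask code that swagger-codegen generates, the entry point is usually swagger_server/__main__.py; a minimal sketch of adapting it (the Connexion-based layout and the file names follow the generator's defaults and may differ in your project):
import os

import connexion

def main():
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.add_api('swagger.yaml', arguments={'title': 'Swagger Server'})
    # Bind to all interfaces and to the port Heroku injects via $PORT,
    # falling back to 8080 so `docker run -p 8080:8080` keeps working locally.
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))

if __name__ == '__main__':
    main()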

docker run results in starting container process caused "exec: \"./boot.sh\": permission denied

I'm following Flask Web Development [2nd ed.] by Miguel Grinberg. In Part III, Chapter 17, it explains how to deploy a project with Docker. I'm using Ubuntu 18.04 LTS on VMware.
I successfully build a container image by running docker build -t flasky:latest .
Running docker images, I verify that the image was successfully created.
I fail at running the container using:
docker run --name flasky -d -p 8000:5000 \
-e SECRET_KEY=<secret_key> \
-e MAIL_USERNAME=<my_email> \
-e MAIL_PASSWORD=<my_password> flasky:latest
As a result I get this error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"./boot.sh\": permission denied": unknown.
I tried modifying permissions with chmod, but to no avail. Then again, maybe I did it wrong.
Dockerfile:
FROM python:3.6-alpine
ENV FLASK_APP flasky.py
ENV FLASK_CONFIG docker
RUN adduser -D flasky
USER flasky
WORKDIR /home/flasky
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY flasky.py config.py boot.sh ./
# runtime configuration
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh:
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - flasky:app
I tried solutions from here and here. The problem persists. Any ideas how to solve it?
TL;DR: chmod a+x boot.sh or chmod o+x boot.sh
You are running as user flasky inside the container (USER flasky) and as a result executing the boot.sh script as that user. The problem here is that flasky does not have permission to execute the script.
Let's say you are running as user app_user under group app_group on your host machine and gave the script the execute right with chmod u+x boot.sh. That only allows the user app_user to execute the script.
If you execute the command chmod g+x boot.sh, it allows any user that belongs to the group app_group to execute it.
Since nothing guarantees that the flasky user in the container shares an ID with app_user on the host, you have to run chmod a+x boot.sh or chmod o+x boot.sh, which gives other users permission to execute boot.sh.
The reason for all this trouble is that on Linux, the user ID in a container maps directly to a user ID on the host machine, and COPY preserves the file's permission bits inside the image.
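Before building, you can verify the bits on the host with a small Python check (run it next to boot.sh):
import stat
from pathlib import Path

mode = Path("boot.sh").stat().st_mode
# COPY preserves these permission bits in the image, so if the "other"
# execute bit is missing here, the non-root flasky user cannot run the script.
print("owner can execute:", bool(mode & stat.S_IXUSR))
print("group can execute:", bool(mode & stat.S_IXGRP))
print("other can execute:", bool(mode & stat.S_IXOTH))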
chmod +x boot.sh should solve your problem. I could reproduce the issue when chmod +x was not done:
root@qa9phx:~/amitp/p3# docker run -it 62591cab9f07 bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./boot.sh\": permission denied": unknown.
Here's the sample Dockerfile that I used-
FROM python:3.6-alpine
COPY boot.sh ./
# runtime configuration
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
Here's the sample boot.sh
#!/bin/sh
echo "hello world"
Run the following command before building the Docker image:
chmod +x boot.sh
Then run docker build:
docker build -t flasky:latest .
Docker images listing:
root@qa9phx:~/amitp/p3# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
flasky latest 6d10284c0d9e 8 minutes ago 94.6MB
Docker run command:
root@qa9phx:~/amitp/p3# docker run -it 6d10284c0d9e bash
hello world

Docker flask restful app using host files

I am creating a RESTful API using Python, Flask, and Docker. I have already created the image and run the container.
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
When I run docker run -p 5000:5000 flaskrestful and go to localhost:5000, I get the expected response:
{'hello': 'world'}
After editing the method that returns the JSON above, nothing changes. I want the server in the Docker container to reload automatically after I change the project files on the host machine.
Is there any way to do that? I have tried volumes, but to edit anything inside I need root privileges, and I want to avoid that.
All I needed to do was run the container with a few extra flags:
docker run -it --name <container_name> --mount type=bind,source=<host_directory>,target=<container_directory> -p <host_port>:<container_port> <image_name>
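One caveat: the bind mount only makes your edits visible inside the container; the server itself still has to reload on changes. With the Flask development server that means enabling debug mode. A minimal sketch of the app.py this question implies (the route and payload are assumptions based on the response shown above):
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    return jsonify({"hello": "world"})

if __name__ == "__main__":
    # debug=True turns on the Werkzeug reloader, so edits to the
    # bind-mounted source are picked up without restarting the container.
    app.run(host="0.0.0.0", port=5000, debug=True)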

running docker project on localhost but nothing is coming up

I have a Docker image that was built. I want to run it on localhost and see the project on my local machine. When I run the image, I see it in docker ps, but when I go to localhost, I get nothing. I am not sure if it is the port I am mapping, permissions, or something like that.
Here is the command I am running:
λ docker run -p 5050:80 opentab
Here is the output from when the image was built:
Successfully built 7e995fbdf2ea
Successfully tagged opentab:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Here is the Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
EXPOSE 80
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver"]
I had to remove the following from the requirements file because they were causing errors:
python
sqlite
tk
zx
Two things:
1) manage.py runserver binds to localhost only by default. This localhost is INSIDE the container, not on your computer. As such, you want to bind the application to all interfaces, like this:
python manage.py runserver 0.0.0.0
2) manage.py runserver binds to port 8000 by default. So, two options. One, you can make manage.py runserver bind to port 80, like so:
python manage.py runserver 0.0.0.0:80
Or, you can change your Dockerfile, replacing EXPOSE 80 with EXPOSE 8000, and change your docker command to docker run -p 5050:8000 opentab (still binding to all interfaces with python manage.py runserver 0.0.0.0:8000).
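To check the result from the host, a quick Python probe can stand in for the browser (assuming the -p 5050:8000 mapping above):
import urllib.request

# Connection refused here usually means the app inside the container is
# still bound to 127.0.0.1 instead of 0.0.0.0.
with urllib.request.urlopen("http://127.0.0.1:5050/", timeout=5) as resp:
    print(resp.status, resp.read()[:100])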
