Auto-deploy a Docker Python application with Heroku

I have a Docker application that I build and run with:
docker build -t swagger_server .
docker run -p 8080:8080 swagger_server
The Dockerfile looks like this:
FROM python:3-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 8080
ENTRYPOINT ["python3"]
CMD ["-m", "swagger_server"]
This in and of itself is fairly simple, but I'm struggling to deploy this Dockerfile to Heroku. I have connected Heroku to auto-deploy on every push, but haven't configured anything beyond that. It builds and runs the application successfully, but I think it only runs the Python application without exposing any ports.
Heroku has a documentation page on their website; however, I don't understand how to specify ports or build tags in heroku.yml.
To give some more context: I want to deploy a Python/Flask application that was auto-generated by swagger-codegen. I can access the API locally, whether I run it within a conda environment or with Docker.
Can somebody explain to me how that should work?

When you deploy to Heroku with Docker, a port exposed manually with EXPOSE in the Dockerfile won't be respected; the port to be exposed is determined automatically by Heroku.
Your app must listen on $PORT (an environment variable set by Heroku).
Another thing to note: when you start the Swagger server, you must allow traffic from all IPs, otherwise it will only be reachable on localhost (and note that this localhost is the container itself):
import os

# Heroku sets PORT at runtime; fall back to 8080 for local runs
app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Related

Call an API running in a Docker container

I've created a Python API using Flask. If I run it on my local Windows desktop, it works perfectly using the code below:
I'm trying to put this API inside a Docker container and call it using the same script as above. Below is the Dockerfile:
FROM python:3
RUN pip install flask
RUN pip install flask_restful
RUN pip install sympy
WORKDIR /app
COPY . .
EXPOSE 8080
CMD ["python", "app/main.py"]
I'm running this container using this: docker run -p 8080:8080 searchitens
But I don't know exactly what I have to change in my test script to call this API. I'm getting this response:
Can anyone help me?
I've tried to EXPOSE port 8080 and also to modify the test script with base = 'http://127.0.0.1:8080/' and base = 'http://localhost:8080/'.
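As the answers below point out for similar setups, the usual culprit is Flask binding to 127.0.0.1 inside the container, which is unreachable through the published port. A sketch of the likely fix in app/main.py, assuming app is the Flask instance:
# Bind to all interfaces so that -p 8080:8080 can reach the app
app.run(host="0.0.0.0", port=8080)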

Docker container ports set up incorrectly

I have a simple application I want to dockerize. It is a simple API that works correctly when I run it on my machine, accessible at http://127.0.0.1:8000/
This is the Dockerfile I created:
FROM python:3.6-slim-stretch
WORKDIR /code
COPY requirements.txt /code
RUN pip install -r requirements.txt --no-cache-dir
COPY . /code
CMD ["uvicorn", "main:app", "--reload"]
I then create the image using this command: sudo docker build -t test .
Then I run it this way: sudo docker run -d -p 8000:8000 test
The problem is that I can't access it at http://127.0.0.1:8000/, and I don't know why.
PS: when I check my container ports I get 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
I want to know what is causing this problem and how to fix it.
By default uvicorn listens on 127.0.0.1. Inside the container, 127.0.0.1 is private; it doesn't participate in port forwarding.
The solution is to pass --host 0.0.0.0 to uvicorn, e.g.:
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
For an explanation of why this is the case, with diagrams, see https://pythonspeed.com/articles/docker-connection-refused/
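After rebuilding the image with that CMD, the published port should answer from the host, e.g.:
sudo docker build -t test .
sudo docker run -d -p 8000:8000 test
curl http://127.0.0.1:8000/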
Try accessing http://0.0.0.0:8000. What do you mean by "I can't access it"? Do you get permission denied? A 404? What error are you seeing?
Try accessing it from inside the container: $ docker exec -it test bash, and
see if the program is running in it: $ curl http://0.0.0.0:8000.
Try curling the container's explicit IP. Get the IP with:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test
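For example, combining the two steps in a shell (assuming the container is named test):
IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test)
curl "http://$IP:8000"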

"Error 99 connecting to localhost:6379. Cannot assign requested address." when running an app connected to Redis

I'm building this Python app inside a Docker container, but cannot access the Redis port inside it.
Inside my Dockerfile I added the line:
EXPOSE 6379
but running the app raises the error redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I should mention that my app only pops items from the Redis queue.
Edit:
My Dockerfile looks like this:
FROM python:3.9
ARG DATABASE
ARG SSL_TOKEN_PATH
ENV DATABASE=$DATABASE
ENV SSL_TOKEN_PATH=$SSL_TOKEN_PATH
COPY config/db_config.yaml db_config.yaml
COPY app/requirements.txt requirements.txt
ENV DB_CONFIG=config/db_config.yaml
COPY app/pydb/requirements.txt pydb/requirements.txt
RUN pip3 install -r requirements.txt
RUN pip3 install -r pydb/requirements.txt
COPY app/ .
EXPOSE 6379
CMD ["python3", "-u", "process_file.py"]
The solution to this problem was to create a network in Docker using:
docker network create <name_of_network>
As the Redis host you must provide host.docker.internal (not localhost: inside the container, localhost is the container itself, and EXPOSE 6379 in the app's Dockerfile doesn't change that).
Finally, run the container inside the created network:
docker run --network <name_of_network> -d <image_name>
That solved the issue, and I didn't have to create a new container just for Redis.
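In code, the change amounts to pointing the client at the host alias instead of localhost; a minimal sketch with redis-py (the queue name is hypothetical):
import redis

# host.docker.internal resolves to the Docker host from inside the container
r = redis.Redis(host="host.docker.internal", port=6379)
item = r.lpop("work_queue")  # hypothetical queue; this app only pops items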

Docker Flask RESTful app using host files

I am creating a RESTful API using Python, Flask and Docker. I have already created the image and run the container.
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
When I run docker run -p 5000:5000 flaskrestful and go to localhost:5000, I get the expected response:
{'hello': 'world'}
After editing the method that returns the JSON above, nothing changes. I want the server in the Docker container to reload automatically after I change the project files on the host machine.
Is there any way to do that? I have tried volumes, but to edit anything inside them I need root privileges, and I want to avoid that.
All I needed to do was run the container with several flags:
docker run -it --name <container_name> --mount type=bind,source=<host_directory>,target=<container_directory> -p <host_port>:<container_port> <image_name>
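Note that the bind mount only makes edited files visible inside the container; for the server to actually restart on changes, Flask's reloader must be enabled. A sketch, assuming app.py ends with the usual app.run(...) call:
# debug=True turns on Flask's auto-reloader, which restarts the server
# whenever files in the bind-mounted project directory change
app.run(host="0.0.0.0", port=5000, debug=True)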

Running a Docker project on localhost but nothing comes up

I have a Docker image that was built. I want to run it on localhost and see the project on my local machine. When I run the image, I see it in docker ps, but when I go to localhost I get nothing. I am not sure if it is the port I am trying to use, or permissions, or something like that...
Here is the command I am running:
λ docker run -p 5050:80 opentab
Here is the output after the image was built:
Successfully built 7e995fbdf2ea
Successfully tagged opentab:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Here is the Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
EXPOSE 80
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver"]
I had to remove the following from the requirements file because they were causing errors:
python
sqlite
tk
zx
Two things:
1) manage.py runserver binds to localhost only by default. This localhost is INSIDE the container, not on your computer. As such, you want to bind the application to all interfaces, like this:
python manage.py runserver 0.0.0.0:8000
2) manage.py runserver binds to port 8000 by default. So, two options. One, you can make manage.py runserver bind to port 80, like so:
python manage.py runserver 0.0.0.0:80
Or, you can change your Dockerfile, replacing EXPOSE 80 with EXPOSE 8000, and change your docker command to docker run -p 5050:8000 opentab.
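Putting the second option together, the relevant Dockerfile lines would be (a sketch; the rest of the file stays the same):
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Then docker run -p 5050:8000 opentab and browse to http://localhost:5050/.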
