Here is my Dockerfile:
FROM python:3.8
WORKDIR /locust
RUN pip3 install locust
COPY ./ /locust/
EXPOSE 8089
CMD ["locust", "-f", "locustfile.py"]
Here is the output:
Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
Starting Locust 1.2.3
But when I try to access it in the browser, it doesn't load. I feel like I might be missing something simple, but cannot find it.
This statement,
EXPOSE 8089
only makes the port available for inter-container communication; it does not publish it to the host.
To allow the host to communicate with the container on that port, you need to bind a host port to the container port in the docker run command, as follows:
docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME
which in your case will be
docker run -p 8089:8089 IMAGE_NAME
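A minimal end-to-end sketch, assuming the image is tagged locust-image (the tag is hypothetical):
docker build -t locust-image .
docker run -p 8089:8089 locust-image
# then, from the host:
curl http://localhost:8089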
Related
I'm trying to containerize a Node.js app into a container image.
Per the Docker documentation, the correct way to map the host port to the container port is hostport:containerport.
Using the same, I've tried to run the command:
docker run -i -t -p 3007:8080
where 3007 is the port my Node.js app is listening on and 8080 is the host port (my laptop).
But I keep getting the error "localhost refused to connect" when I hit localhost:8080 in my browser.
It wasn't until I swapped these port numbers, as in
docker run -i -t -p 8080:3007
that the actual app (listening on port 3007) rendered in my browser.
I'm confused as to why this happens. Am I missing any information?
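For illustration, a sketch of the mapping direction, assuming a hypothetical image my-node-image whose app listens on port 3007 inside the container:
# -p <host_port>:<container_port>: requests to the host's 8080 are
# forwarded to the container's 3007, where the app actually listens.
docker run -i -t -p 8080:3007 my-node-image
# reachable from the host at:
curl http://localhost:8080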
I have a simple application I want to dockerize: an API that works correctly when I run it on my machine and is accessible at http://127.0.0.1:8000/
This is the dockerfile I created
FROM python:3.6-slim-stretch
WORKDIR /code
COPY requirements.txt /code
RUN pip install -r requirements.txt --no-cache-dir
COPY . /code
CMD ["uvicorn", "main:app", "--reload"]
I then create the image using this command: sudo docker build -t test .
And then run it this way: sudo docker run -d -p 8000:8000 test
The problem is that I can't access it at http://127.0.0.1:8000/ and I don't know why.
PS: when I check my container ports I get 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
I want to know what is causing this problem and how to fix it.
By default, uvicorn listens on 127.0.0.1. Inside the container, 127.0.0.1 is private; it doesn't participate in port forwarding.
The solution is to pass --host 0.0.0.0 to uvicorn, e.g.:
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
For an explanation of why this is the case, with diagrams, see https://pythonspeed.com/articles/docker-connection-refused/
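To verify the fix, a quick sketch reusing the build and run commands from the question:
sudo docker build -t test .
sudo docker run -d -p 8000:8000 test
curl http://127.0.0.1:8000/
# should now get a response from uvicorn inside the container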
Try accessing http://0.0.0.0:8000
What do you mean by "I can't access it"? Do you get permission denied?
A 404? What error are you seeing?
Try getting a shell inside the container ($ docker exec -it <container> bash) and
see if the program is running inside it: $ curl http://0.0.0.0:8000.
Try curling the container's explicit IP.
Get the IP: docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>
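A sketch of that check, assuming the container was started with --name myapi (a hypothetical name); note that the container IP is directly reachable from a Linux host, but generally not on Docker Desktop:
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myapi)
curl "http://$CONTAINER_IP:8000"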
I'm building this Python app inside a Docker container, but cannot access the Redis port from inside it.
Inside my Dockerfile I added the line:
EXPOSE 6379
but running the app raises the error redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I should mention that my app only pops items from the Redis queue.
Edit:
My Dockerfile looks like this:
FROM python:3.9
ARG DATABASE
ARG SSL_TOKEN_PATH
ENV DATABASE=$DATABASE
ENV SSL_TOKEN_PATH=$SSL_TOKEN_PATH
COPY config/db_config.yaml db_config.yaml
COPY app/requirements.txt requirements.txt
ENV DB_CONFIG=config/db_config.yaml
COPY app/pydb/requirements.txt pydb/requirements.txt
RUN pip3 install -r requirements.txt
RUN pip3 install -r pydb/requirements.txt
COPY app/ .
EXPOSE 6379
CMD ["python3", "-u", "process_file.py"]
The solution to this problem was to create a Docker network using:
docker network create <name_of_network>
As the Redis host inside the app, you must provide host.docker.internal (instead of localhost).
Finally, run the container inside the created network using:
docker run --network <name_of_network> -d <image_name>
That solved the issue, and I didn't have to create a new container just for Redis.
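For illustration, a minimal sketch of the connection code using redis-py; the REDIS_HOST environment variable and the queue name task_queue are hypothetical:
import os
import redis

# Read the Redis host from the environment so the same code works locally
# (localhost) and inside a container (host.docker.internal, or the Redis
# container's name on a user-defined network).
redis_host = os.getenv("REDIS_HOST", "host.docker.internal")
r = redis.Redis(host=redis_host, port=6379)
item = r.lpop("task_queue")  # pop one item from the queue, as the app does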
I have Docker running on my Windows 10 OS and I have a minimal Flask app
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5001, debug=True)
And I am dockerizing it using the following file
FROM python:alpine3.7
COPY . /opt
WORKDIR /opt
RUN pip install -r requirements.txt
EXPOSE 5001
ENTRYPOINT [ "python" ]
CMD ["app.py", "run", "--host", "0.0.0.0"]
From what I've seen in other posts and in Flask tutorials, binding to 0.0.0.0 should allow me to connect from the Windows Firefox browser when I type 0.0.0.0:5001, but it is not connecting; I keep getting an 'unable to connect' message. I remember using 0.0.0.0:port to connect on localhost on a Linux Ubuntu machine, but for whatever reason it's not letting me connect on Windows. Is there a special setting to connect on Windows?
Inside the Docker container, the private port is 5001. This private port then needs to be mapped to a public port when running the container. For example, to set the public port to 8000, you could run:
$ docker run --publish 8000:5001 --name <container-name> <image-name>:<version-tag>
The Flask app would then be accessible at http://127.0.0.1:8000
Dockerfile
In addition, since app.py already sets the host and port, there is no need to specify these values in the Dockerfile CMD. The EXPOSE instruction should document the container port (5001 here); the public port is chosen at run time with --publish.
It also looks like the COPY command is placing everything under an /opt directory, so that needs to be included in the app path when launching the Flask app within Docker.
FROM python:alpine3.7
COPY . /opt
WORKDIR /opt
RUN pip install -r requirements.txt
EXPOSE 5001
CMD ["python", "/opt/app.py"]
Docker-Flask Example
For a complete Flask-Docker example, including using Gunicorn, see:
Docker Python Flask Example using the Gunicorn WSGI HTTP Server
I have a Docker application that I build and run with:
docker build -t swagger_server .
docker run -p 8080:8080 swagger_server
The Dockerfile looks like this:
FROM python:3-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 8080
ENTRYPOINT ["python3"]
CMD ["-m", "swagger_server"]
This in and of itself is fairly simple, but I'm struggling with deploying this Dockerfile to Heroku. I have connected Heroku to auto-deploy on every push, but haven't configured anything beyond that. It builds and runs the application successfully, but I think it only runs the Python application without exposing any ports.
Heroku has a documentation page on heroku.yml, but I don't understand how to specify ports or build tags there.
To give some more context: I want to deploy a Python/Flask application that was auto-generated by swagger-codegen. I can access the API locally, whether I run it within a conda environment or with Docker.
Can somebody explain to me how that should work?
When you deploy with Docker on Heroku, an EXPOSE instruction in the Dockerfile is not respected; the port to expose is determined automatically by Heroku.
Your app must listen on $PORT (an environment variable set by Heroku).
Another thing to note: when you start the Swagger server, you must allow traffic from all IPs, otherwise it will only be reachable on localhost (and notice that this localhost is the container itself).
import os
app.run(host="0.0.0.0", port=int(os.getenv("PORT", 5000)))
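For the heroku.yml part of the question, a minimal sketch, assuming the Dockerfile sits at the repository root (no port is declared, since Heroku injects $PORT at runtime):
build:
  docker:
    web: Dockerfile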