I am creating a RESTful API using Python, Flask, and Docker. I have already created the image and run the container.
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
When I run docker run -p 5000:5000 flaskrestful and go to localhost:5000, I get the expected response:
{'hello': 'world'}
After editing the method that returns the JSON above, nothing changes. I want the server in the Docker container to automatically reload after I change the project files on the host machine.
Is there any way to do that? I have tried volumes, but to edit anything inside I need root privileges, and I want to avoid that.
All I needed to do was run the container with a few extra flags:
docker run -it --name container_name --mount type=bind,source=<host_directory>,target=<container_directory> -p host_port:container_port image_name
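For example, a concrete invocation for this project might look like the following (paths and names are illustrative). Note that the Flask development server only reloads on file changes when debug mode is enabled, e.g. app.run(debug=True) or the FLASK_DEBUG=1 environment variable:
# bind-mount the project directory over /app so edits on the host are seen in the container
docker run -it --name flaskrestful \
  --mount type=bind,source="$(pwd)",target=/app \
  -e FLASK_DEBUG=1 \
  -p 5000:5000 flaskrestful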
Related
I've created a Python API using Flask. If I run it on my local Windows desktop, it works perfectly using the code below:
I'm trying to put this API inside a Docker container and call it using the same script as above. Below is the Dockerfile:
FROM python:3
RUN pip install flask
RUN pip install flask_restful
RUN pip install sympy
WORKDIR /app
COPY . .
EXPOSE 8080
CMD ["python", "app/main.py"]
I'm running this container using: docker run -p 8080:8080 searchitens
But I don't know what exactly I have to change in my test script to call this API. I'm getting this response:
Can anyone help me?
I've tried to EXPOSE port 8080 and also to modify the test script with base = 'http://127.0.0.1:8080/' and base = 'http://localhost:8080/'.
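The original test script isn't shown here, but a minimal client sketch along these lines (assuming a GET endpoint at the root path) illustrates the kind of call involved:
import requests

# base URL of the containerized API; 8080 is the host port published by docker run
base = 'http://127.0.0.1:8080/'

response = requests.get(base)
print(response.status_code, response.text)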
I have a simple application I want to dockerize. It is a simple API that works correctly when I run it on my machine, accessible at http://127.0.0.1:8000/
This is the Dockerfile I created:
FROM python:3.6-slim-stretch
WORKDIR /code
COPY requirements.txt /code
RUN pip install -r requirements.txt --no-cache-dir
COPY . /code
CMD ["uvicorn", "main:app", "--reload"]
I then create the image using this command: sudo docker build -t test .
And then run it this way: sudo docker run -d -p 8000:8000 test
The problem is that I can't access it at http://127.0.0.1:8000/, and I don't know what the problem is.
PS: when I check my container ports I get 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
I want to know what is causing this problem and how to fix it.
By default uvicorn listens on 127.0.0.1. Inside the container, 127.0.0.1 is private to the container; it doesn't participate in port forwarding.
The solution is to pass --host 0.0.0.0 to uvicorn, e.g.:
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
For an explanation of why this is the case, with diagrams, see https://pythonspeed.com/articles/docker-connection-refused/
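After rebuilding the image, a quick check from the host (the commands mirror those in the question) confirms the server is now reachable:
sudo docker build -t test .
sudo docker run -d -p 8000:8000 test
curl http://127.0.0.1:8000/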
Try accessing http://0.0.0.0:8000.
What do you mean by "I can't access it"? Do you get permission denied? A 404? What error are you seeing?
Try getting a shell inside the container (docker exec -it test bash) and see if the program is running inside it: curl http://0.0.0.0:8000.
Try curling the container's explicit IP.
Get the IP: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test
I am trying to run a Go web server and a Flask server in the same Docker container. I have one Dockerfile that builds the Flask app. How can I update the Dockerfile so the container runs both the Python and the Go programs?
ProjectFolder/
├── pyfolder/
│   ├── app.py
│   └── Dockerfile
└── main.go
main.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Println("func called")
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
app.py
from flask import Flask
import os

app = Flask(__name__)

@app.route("/")
def hello():
    return "Flask inside Docker!!"

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(debug=True, host='0.0.0.0', port=port)
Dockerfile
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Since you have two separate programs, you would typically run these in two containers, with two separate images. In both cases you can use a basic Dockerfile for their respective languages:
# pyfolder/Dockerfile
FROM python:3.6
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
# app.py has no shebang line, so run it via the interpreter
CMD ["python", "app.py"]
# ./Dockerfile
FROM golang:1.16-alpine AS build
COPY go.mod go.sum ./
RUN go mod download
COPY main.go .
RUN go build -o main main.go
FROM alpine
COPY --from=build /go/main /usr/bin/main
EXPOSE 8080
CMD ["main"]
You don't discuss at all why you have two containers or how they communicate. You'll frequently want containers to have no local state at all if they can manage it, and communicate only over network interfaces like HTTP. This means you'll need some way to configure the network address one service uses to call another, since it will be different running in a local development environment vs. running in containers (vs. deployed to the cloud, vs. running in Kubernetes, vs....) An environment variable would be a typical approach; say the Go code needs to call the Python code:
url := os.Getenv("PYAPP_URL")
if url == "" {
	url = "http://localhost:5000" // the Flask app's port when run directly on the host
}
resp, err := http.Get(url)
A typical tool to run multiple containers together would be Docker Compose. It's not the only tool, but it's a standard part of the Docker ecosystem and it's simpler than many of the alternatives. You'd write a YAML file describing the two containers:
version: '3.8'
services:
  pyapp:
    build: ./pyfolder
  server:
    build: .  # the directory containing the Go code and its Dockerfile
    environment:
      - PYAPP_URL=http://pyapp:5000
    ports:
      - '8080:8080'
    depends_on:
      - pyapp
Running docker-compose up --build will build the two images and start the two containers.
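Once the stack is up, a quick smoke test from the host (illustrative; the Go handler above writes no response body, so only the status line is interesting) might be:
docker-compose up --build -d
curl -i http://localhost:8080/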
You should be able to accomplish this using a multistage build in your Dockerfile.
First, move the Dockerfile to the root directory next to the main.go.
Next, modify your Dockerfile to something like this:
# build the Go binary in the first stage
FROM golang:1.16 AS build
WORKDIR /src
COPY main.go .
RUN go build -o app main.go

# python portion
FROM python:3.6
WORKDIR /app
COPY pyfolder/ ./
RUN pip install -r requirements.txt
# copy the go binary from the build stage
COPY --from=build /src/app ./
# a container runs a single main command, so a wrapper script
# (start.sh, sketched below) starts both processes
COPY start.sh ./
RUN chmod +x start.sh
CMD ["./start.sh"]
I do think The Fool is correct in suggesting to separate these into two different containers. I would look into docker-compose instead of using a multi-stage build for this use.
Docker Multistage Build Reference
I have a Docker application that I build and run with:
docker build -t swagger_server .
docker run -p 8080:8080 swagger_server
The Dockerfile looks like this:
FROM python:3-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 8080
ENTRYPOINT ["python3"]
CMD ["-m", "swagger_server"]
This in and of itself is fairly simple, but I'm struggling to deploy this Dockerfile to Heroku. I have connected Heroku to auto-deploy on every push, but haven't configured anything beyond that. It builds and runs the application successfully, but I think it only runs the Python application without exposing any ports.
Heroku has a documentation page on their website; however, I don't understand how to specify ports or build tags in heroku.yml.
To give some more context: I want to deploy a Python/Flask application that was auto-generated by swagger-codegen. I can access the API locally, whether I run it within a conda environment or with Docker.
Can somebody explain to me how that should work?
When you deploy with Docker on Heroku, an EXPOSE in the Dockerfile won't be respected; the port to expose is determined automatically by Heroku.
Your app must listen on $PORT (an environment variable set by Heroku).
Another thing to note: when you start the swagger server, you must accept traffic from all IPs, otherwise it will only be reachable on localhost (and notice that this localhost is the container itself).
import os
# PORT is set by Heroku; fall back to 5000 for local runs, and cast to int
app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))
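As for heroku.yml: ports are never declared there. A minimal file that tells Heroku to build the web process from your Dockerfile (assuming the Dockerfile sits in the repo root) is:
build:
  docker:
    web: Dockerfile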
I'm following Flask Web Development [2nd ed.] by Miguel Grinberg. In Part III, Chapter 17, it explains how to deploy a project with Docker. I'm using Ubuntu 18.04 LTS on VMware.
I successfully build a container image by running docker build -t flasky:latest .
Running docker images, I verify that the image was successfully created.
I fail at running the container using:
docker run --name flasky -d -p 8000:5000 \
-e SECRET_KEY=<secret_key> \
-e MAIL_USERNAME=<my_email> \
-e MAIL_PASSWORD=<my_password> flasky:latest
As a result I get this error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"./boot.sh\": permission denied": unknown.
I tried modifying permissions with chmod, but to no avail. Then again, maybe I did it wrong.
Dockerfile:
FROM python:3.6-alpine
ENV FLASK_APP flasky.py
ENV FLASK_CONFIG docker
RUN adduser -D flasky
USER flasky
WORKDIR /home/flasky
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY flasky.py config.py boot.sh ./
# runtime configuration
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh:
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - flasky:app
I tried solutions from here and here. The problem persists. Any ideas how to solve it?
TL;DR: chmod a+x boot.sh or chmod o+x boot.sh
You are running as user flasky inside the container (USER flasky), and as a result the boot.sh script is executed as that user. The problem is that flasky does not have permission to execute the script.
Let's say you are running as user app_user under group app_group on your host machine and gave the script execute rights with chmod u+x boot.sh. That only allows the owner, app_user, to execute the script.
If you run chmod g+x boot.sh, any user that belongs to the group app_group will be able to execute it.
Since we never aligned the id of app_user on the host with the flasky user in the container, you have to run chmod a+x boot.sh or chmod o+x boot.sh, which grants "other" users permission to execute boot.sh.
The reason for all this trouble is that on Linux the user id in a container maps directly to a user id on the host machine, and COPY preserves the file's permission bits (the copied files are owned by root in the image, so the flasky user falls under "other").
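A quick way to check on the host before building (the output shown in the comments is illustrative):
ls -l boot.sh     # e.g. -rw-r--r-- : no execute bit for "other"
chmod o+x boot.sh
ls -l boot.sh     # now -rw-r--r-x : executable by any user, including flasky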
chmod +x boot.sh should solve your problem. I could reproduce the issue when chmod +x had not been run:
root@qa9phx:~/amitp/p3# docker run -it 62591cab9f07 bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./boot.sh\": permission denied": unknown.
Here's the sample Dockerfile that I used:
FROM python:3.6-alpine
COPY boot.sh ./
# runtime configuration
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
Here's the sample boot.sh:
#!/bin/sh
echo "hello world"
Run the following command before building the Docker image:
chmod +x boot.sh
Then build it:
docker build -t flasky:latest .
Docker image listing:
root@qa9phx:~/amitp/p3# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
flasky              latest              6d10284c0d9e        8 minutes ago       94.6MB
Docker run command:
root@qa9phx:~/amitp/p3# docker run -it 6d10284c0d9e bash
hello world