Flask app not running automatically from Dockerfile - Python

My simple Flask app does not start automatically when I run it in Docker, even though I have added the CMD instruction correctly. I am able to start Flask by running python3 /app/app.py manually from the container shell, so there is no issue with the code or the command.
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y gcc libffi-dev libssl-dev
RUN apt-get install -y libxml2-dev xmlsec1
RUN apt-get install -y python3-pip python3-dev
RUN pip3 --no-cache-dir install --upgrade pip
RUN rm -rf /var/lib/apt/lists/*
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
CMD ["/usr/bin/python3", "/app/app.py"]
I run the Docker container with:
docker run -it okta /bin/bash
When I log in to the container and run "ps -eaf" in its Ubuntu shell, I do not see the Flask process running. So my question is: why did the following line in the Dockerfile not work?
CMD ["/usr/bin/python3", "/app/app.py"]

Running your docker container and passing the command /bin/bash is overriding the CMD ["/usr/bin/python3", "/app/app.py"] in your Dockerfile.
CMD vs ENTRYPOINT Explained Here
Try changing the last line of your Dockerfile to
ENTRYPOINT ["/usr/bin/python3", "/app/app.py"]
Don't forget to rebuild your image after changing.
Or... you can omit the /bin/bash from the end of your docker run command and see if your app.py starts up successfully.
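For example (a sketch; okta is the image name from your run command, and the -p mapping assumes the app listens on the EXPOSEd port 5000 on all interfaces):
# let the Dockerfile's CMD/ENTRYPOINT start Flask
docker run -p 5000:5000 okta
# once ENTRYPOINT is set, getting a debug shell requires an explicit override
docker run -it --entrypoint /bin/bash okta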

Related

Copy a Go package from a previous stage based on a Go image?

In a Python-based image, I want to run one command (*) that uses the Go package gnostic:
RUN gnostic --grpc-out=test test/openapi/loyalty-bff.yaml
I wrote the following Dockerfile:
FROM golang:1.17 AS golang
RUN go get -u github.com/google/gnostic@latest
RUN go get -u github.com/googleapis/gnostic-grpc@latest
FROM python:3.7.10
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
COPY --from=golang /go/bin/gnostic /go/bin/gnostic-grpc
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]
I get an error when running the command (*):
Command 'gnostic --grpc-out=loyalty-bff-1634180849463365375 loyalty-bff-1634180849463365375/loyalty-bff.yaml' returned non-zero exit status 127.
On the other hand, it works when I don't use multi-stage builds and instead install Python in a Go-based image, but then the build time is very long:
FROM golang:1.17
WORKDIR /app
RUN go get -u github.com/google/gnostic@latest
RUN go get -u github.com/googleapis/gnostic-grpc@latest
RUN apt-get update
RUN apt-get install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libsqlite3-dev libreadline-dev libffi-dev wget libbz2-dev
RUN wget https://www.python.org/ftp/python/3.7.8/Python-3.7.8.tgz
RUN tar -xf Python-3.7.8.tgz
RUN cd Python-3.7.8 \
&& ./configure --enable-shared \
&& make && make install
RUN apt-get install python3-pip -y
ADD requirements.txt /app/
RUN pip3 install -r requirements.txt
ADD . /app/
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]
Exit code 127 usually means the executable can't be found.
If you look at the environment of golang:1.17, you can see that by default the PATH includes /go/bin:
$ docker run --rm -it golang:1.17 env
PATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=a9b7efb588ea
TERM=xterm
GOLANG_VERSION=1.17.2
GOPATH=/go
HOME=/root
That is why gnostic can be found in the golang-based container.
But in python:3.7.10, it is:
$ docker run --rm -it python:3.7.10 env
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So, for your scenario, changing the COPY to the following should make it work:
COPY --from=golang /go/bin/gnostic /go/bin/gnostic-grpc /usr/local/bin/
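To sanity-check that the binaries are now on the PATH (a quick sketch; myimage is a placeholder tag):
docker build -t myimage .
docker run --rm myimage which gnostic gnostic-grpc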

How can I run a Docker command after building?

I have a Dockerfile:
FROM ubuntu:18.04
RUN apt-get -y update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update -y
RUN apt-get install -y python3.7 build-essential python3-pip
RUN pip3 install --upgrade pip
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
ENV FLASK_APP application.py
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT python3 -m flask run --host=0.0.0.0
But I also want to run python3 download.py before the ENTRYPOINT. If I put it in the Dockerfile as a RUN instruction and then build, it executes at build time. I need it to execute only at runtime, on Elastic Beanstalk.
How would I do that?
There's a pattern of using the Docker ENTRYPOINT to do first-time setup, and then launching the CMD. For example, you could write an entrypoint script like
#!/bin/sh
# Do the first-time setup
python3 download.py
# Run the CMD
exec "$#"
Since this is a shell script, you can include whatever logic or additional setup you need here.
In your Dockerfile, you need to change your ENTRYPOINT line to CMD, COPY in this script, and set it as the image's ENTRYPOINT.
...
COPY . /app
...
# If the script isn't already executable on the host
RUN chmod +x entrypoint.sh
# Must use JSON-array syntax
ENTRYPOINT ["/app/entrypoint.sh"]
# The same command as originally
CMD python3 -m flask run --host=0.0.0.0
If you want to debug this, since this setup honors the "command" part, you can run a one-off container that launches an interactive shell instead of the Flask process. This will still do the first-time setup, but then run the command from the docker run command instead of what was in the CMD line.
docker run --rm -it myimage bash
You can also control whether python3 download.py runs by using an environment variable, and then when running locally you pass it with docker run -e ....
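A minimal sketch of that idea (RUN_DOWNLOAD is a hypothetical variable name, not something Docker or Flask defines):
#!/bin/sh
# entrypoint.sh: only do the one-time setup when explicitly requested
if [ "$RUN_DOWNLOAD" = "1" ]; then
    python3 download.py
fi
exec "$@"
With this, docker run -e RUN_DOWNLOAD=1 myimage performs the download first, while a plain docker run myimage skips it.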

Docker Python output CSV file

I have a Python script that should output a CSV file. I'm trying to get this file into the current working directory, but without success.
This is my Dockerfile
FROM python:3.6.4
RUN apt-get update && apt-get install -y libaio1 wget unzip
WORKDIR /opt/oracle
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    cd /opt/oracle/instantclient* && \
    rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install pystan
RUN apt-get -y update && python3 -m pip install cx_Oracle --upgrade
RUN pip install -r requirements.txt
CMD [ "python", "Main.py" ]
And I run the container with the following command:
docker container run -v $pwd:/home/learn/rstudio_script/output image
It is bad practice to bind-mount a volume just to get one file from your container onto your host.
Instead, you can leverage the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
You can have this command run automatically from a bash script after your docker run.
Something like:
#!/bin/bash
# this stores the container id
CONTAINER_ID=$(docker run -dit img)
docker cp $CONTAINER_ID:/some_path host_path
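If the container exists only to produce the file, you could also remove it once the copy is done (an optional extra line, not in the original script):
docker rm -f "$CONTAINER_ID"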
If you are set on using a bind mount, then as others have pointed out, the issue is most likely that your Python script isn't writing the CSV to the correct path.
Your script Main.py is probably not trying to write to /home/learn/rstudio_script/output. The working directory in the container is /app because of the last WORKDIR directive in the Dockerfile. You can override that at runtime with --workdir but then the CMD would have to be changed as well.
One solution is to have your script write files to /output/ and then run it like this:
docker container run -v $PWD:/output/ image
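If you would rather not change the script at all, the --workdir override mentioned above would look roughly like this (a sketch; /data is an arbitrary mount point, and note the trailing command must now reference Main.py by absolute path since it replaces the CMD):
docker container run -v $PWD:/data --workdir /data image python /app/Main.py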

Insert data after MySQL starts in a Docker container

I am using Docker to start a MySQL service in a container. After the container starts, I want to insert some data into the database automatically via a Python script. This is my Dockerfile:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
ADD . /app
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip
RUN pip3 install --user -r requirements.txt
RUN python3 init.py
The last line runs the script to add some data to the database, but at that point the MySQL service has not started yet, so it fails when running docker build. How do I accomplish this? Thanks in advance.
According to the docs, the MySQL entrypoint will automatically execute any .sh, .gz, or .sql files found in /docker-entrypoint-initdb.d. So, create a shell script that executes your Python script for you. If you call this file 01-my-script.sh, your Dockerfile will look like this:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
RUN apt-get update && apt-get install -y \
python3 \
python3-pip
# Copy requirements in first and install them (so the cache won't be invalidated)
COPY ./requirements.txt ./requirements.txt
RUN pip3 install --user -r requirements.txt
# Copy SQL Fixture
COPY ./01-my-script.sh /docker-entrypoint-initdb.d/01-my-script.sh
RUN chmod +x /docker-entrypoint-initdb.d/01-my-script.sh
# Copy the rest of your project
COPY . .
And your script will only contain:
#!/bin/sh
python3 /app/init.py
Now, when you bring up your container, your script will execute. Monitor the execution of the running container with docker logs -f <container_name> to make sure your script is running.
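A quick way to try the whole thing out (a sketch; mysql-seeded and mydb are placeholder names):
docker build -t mysql-seeded .
docker run -d --name mydb mysql-seeded
docker logs -f mydb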

Flask application on Docker with Let's Encrypt

I want to create a Flask application in a Docker container that has HTTPS enabled, using the Let's Encrypt method of obtaining an SSL cert. The cert also needs to be auto-renewed every so often (every 3 months, I think); that is already handled on my server, but the Flask app needs access to the cert file as well!
What would I need to modify on this Docker file to enable Let's encrypt?
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["web/app.py"]
You can use the docker volume feature:
A volume is a directory mounted between the host machine and the container.
There are two ways to create a volume with Docker.
First, you can declare a VOLUME instruction in the Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
VOLUME /certdir
ENTRYPOINT ["python"]
CMD ["web/app.py"]
This will create an anonymous volume (a directory with a generated id) inside /var/lib/docker/volumes on the host.
This approach is more useful when you want to share something from the container to the host, but it is not very practical the other way around.
Second, you can use the -v flag on docker create or docker run to add a volume to the container:
docker run -v "$(pwd)/certdir":/certdir <image>
Here /certdir is the mount point inside the container and $(pwd)/certdir is the certdir directory on the host, inside your project directory.
This solution works since the host directory is mounted inside the container at the defined location. But unless you state it clearly in some documentation or provide an easy-to-use alias for your docker run/create command, other users will not know how to define it.
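For the Let's Encrypt case specifically, that could look like the following (a sketch assuming certbot keeps its files in the default /etc/letsencrypt on the host; the :ro suffix mounts it read-only so the container cannot modify the certs):
docker run -v /etc/letsencrypt:/certdir:ro <image>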
PS: a quick tip: combine your RUN commands into a single statement:
```
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y \
&& apt-get install -y python-pip python-dev build-essential \
&& pip install --upgrade pip
COPY . /app
```
The advantage is that Docker will create only one layer for the installation of the dependencies instead of three (see the documentation).
