How can I run a Docker command after building? - python

I have a Dockerfile:
FROM ubuntu:18.04
RUN apt-get -y update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update -y
RUN apt-get install -y python3.7 build-essential python3-pip
RUN pip3 install --upgrade pip
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
ENV FLASK_APP application.py
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT python3 -m flask run --host=0.0.0.0
But I also want to run python3 download.py before the ENTRYPOINT starts Flask. If I put it in the Dockerfile as a RUN step and build, it executes at build time; I need it to execute only at container start, on Elastic Beanstalk.
How would I do that?

There's a pattern of using the Docker ENTRYPOINT to do first-time setup, and then launching the CMD. For example, you could write an entrypoint script like
#!/bin/sh
# Do the first-time setup
python3 download.py
# Run the CMD
exec "$#"
Since this is a shell script, you can include whatever logic or additional setup you need here.
In your Dockerfile, you need to change your ENTRYPOINT line to CMD, COPY in this script, and set it as the image's ENTRYPOINT.
...
COPY . /app
...
# If the script isn't already executable on the host
RUN chmod +x entrypoint.sh
# Must use JSON-array syntax
ENTRYPOINT ["/app/entrypoint.sh"]
# The same command as originally
CMD python3 -m flask run --host=0.0.0.0
If you want to debug this, since this setup honors the "command" part, you can run a one-off container that launches an interactive shell instead of the Flask process. This will still do the first-time setup, but then run the command from the docker run command instead of what was in the CMD line.
docker run --rm -it myimage bash

You can also control whether python3 download.py runs at all with an environment variable, setting it in the Elastic Beanstalk environment and passing it locally with docker run -e ....
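As a minimal sketch of that approach (RUN_DOWNLOAD is a made-up variable name, not something Elastic Beanstalk defines for you), the entrypoint script could become:
#!/bin/sh
# Run the one-time download only when RUN_DOWNLOAD is set
if [ "$RUN_DOWNLOAD" = "1" ]; then
    python3 download.py
fi
# Run the CMD
exec "$@"
Elastic Beanstalk would set RUN_DOWNLOAD=1 through its environment properties; locally you simply omit it, or opt in with docker run -e RUN_DOWNLOAD=1 myimage to test the download path.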

Related

Run file in conda inside docker

I have Python code that I am attempting to wrap in a Docker image:
FROM continuumio/miniconda3
# Python 3.9.7 , Debian (use apt-get)
ENV TARGET=dev
RUN apt-get update
RUN apt-get install -y gcc
RUN apt-get install -y dos2unix
RUN apt-get install -y awscli
RUN conda install -y -c anaconda python=3.7
WORKDIR /app
COPY . .
RUN conda env create -f conda_env.yml
RUN echo "conda activate tensorflow_p36" >> ~/.bashrc
RUN pip install -r prod_requirements.txt
RUN pip install -r ./architectures/mask_rcnn/requirements.txt
RUN chmod +x aws_pipeline/set_env_vars.sh
RUN chmod +x aws_pipeline/start_gpu_aws.sh
RUN dos2unix aws_pipeline/set_env_vars.sh
RUN dos2unix aws_pipeline/start_gpu_aws.sh
RUN aws_pipeline/set_env_vars.sh $TARGET
Building the image works fine, running the image using the following commands works fine:
docker run --rm --name d4 -dit pd_v2 sh
My OS is Windows 11. When I use the Docker Desktop "CLI" button to enter the container, all I need to do is type "bash" and the conda environment "tensorflow_p36" is activated, so I can run my code.
When I try docker exec in the following manner:
docker exec d4 bash && <path_to_sh_file>
I get an error that the file doesn't exist.
What is missing here? Thanks
Won't bash && <path_to_sh_file> run bash in the container, wait for it to exit successfully, and then try to run your sh file on the host (where it doesn't exist), since the && is parsed by your host shell rather than passed to docker exec? I think it would be better to put #!/bin/bash as the top line of your sh file, and be sure the sh file has executable permissions: chmod a+x <path_to_sh_file>
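As a sketch, keeping the question's placeholder: pass the whole command to a single shell inside the container, so your host shell doesn't split it at the &&:
docker exec d4 bash -c "<path_to_sh_file>"
# or, if the script relies on the conda activate line in ~/.bashrc:
docker exec d4 bash -ic "<path_to_sh_file>"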

flask app not running automatically from Dockerfile

My simple Flask app does not start automatically when I run it in Docker, even though I have added the CMD instruction correctly. I can run Flask manually from the container shell with python3 /app/app.py, so there is no issue with the code or the command itself.
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y gcc libffi-dev libssl-dev
RUN apt-get install -y libxml2-dev xmlsec1
RUN apt-get install -y python3-pip python3-dev
RUN pip3 --no-cache-dir install --upgrade pip
RUN rm -rf /var/lib/apt/lists/*
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
CMD ["/usr/bin/python3", "/app/app.py"]
I run docker container as
docker run -it okta /bin/bash
When I log in to the container and run ps -eaf in its Ubuntu shell, I do not see the Flask process running. So my question is: why did the line below not work in the Dockerfile?
CMD ["/usr/bin/python3", "/app/app.py"]
Running your docker container and passing the command /bin/bash overrides the CMD ["/usr/bin/python3", "/app/app.py"] in your Dockerfile: CMD only provides a default command, and anything you pass to docker run after the image name replaces it, whereas an ENTRYPOINT still runs.
Try changing the last line of your Dockerfile to
ENTRYPOINT ["/usr/bin/python3", "/app/app.py"]
Don't forget to rebuild your image after changing.
Or... you can omit the /bin/bash from the end of your docker run command and see if your app.py starts up successfully.
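For example (the port mapping is an assumption based on the EXPOSE 5000 line):
docker run -p 5000:5000 okta
# the CMD now starts app.py instead of being replaced by /bin/bash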

Docker Container stops immediately (Flask/Python/Megatutorial)

I've been following the flask megatutorial by the inestimable Miguel Grinberg (https://learn.miguelgrinberg.com/read/mega-tutorial/ch19.html), and recently hit on a snag in deployment.
The docker run command starts the container and then it immediately stops. It isn't showing up in docker ps -a either. I've trawled through lots of responses here which seem to suggest that the solution is to add "-it" to the docker run command however this does not solve the issue.
Here's my dockerfile:
FROM python:3.6-alpine
RUN adduser -D james
WORKDIR /home/myflix
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
COPY app app
COPY migrations migrations
COPY myflix.py config.py boot.sh ./
RUN chmod +x boot.sh
ENV FLASK_APP myflix.py
RUN chown -R james:james ./
USER james
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
My image is called myflix:secondattempt.
The command used to start the container:
sudo docker run --name myflixcont -d -p 8000:5000 --rm myflix:secondattempt
As I said, I've already tried dropping in various combinations of "-i" and "-t" in front of the "-d" to no avail.
-it means interactive with a TTY, while -d means detached. Adding -it on top of -d doesn't help here, because the container still starts in the background and you never see why it exits.
Remove -d so the container runs in the foreground and you can watch its output:
docker run --name myflixcont -it -p 8000:5000 --rm myflix:secondattempt
Another point, in the hope that it helps: the exec form of ENTRYPOINT does not start a shell (bash or dash) by itself, so the script needs a valid shebang line and execute permission, or you can name the shell explicitly:
ENTRYPOINT ["sh", "file.sh"]
# or
ENTRYPOINT ["bash", "file.sh"]

Run python process in docker in linux screen in detach mode

I have written a Dockerfile for my Python application.
The requirements are:
Install and start the MySQL server.
Run the application in a detached screen session.
Below is my Dockerfile:
FROM ubuntu:16.04
# Update OS
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip screen npm vim net-tools
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
COPY src /usr/src/app/src
COPY ./src/nsd.ini /etc/
RUN pwd
RUN cd /usr/src/app
RUN service mysql start
RUN /bin/bash -c "chmod +x src/run_demo_app.sh && src/run_demo_app.sh"
Below is the content of bash script
$ cat src/run_demo_app.sh
screen -dm bash -c "sleep 10; python -m src.app";
The problem is that MySQL doesn't start; I have to start it manually from inside the container. Also, the screen session is dead and the application does not start, although running the script manually works fine.
So this is an understanding gap and nothing else. Note the following issues in your Dockerfile:
Never use the service command
RUN service mysql start
Docker doesn't use an init system, so never use a service command inside Docker. A RUN step like this only starts the daemon in a temporary build-time container; it is not running in the containers you later start from the finished image.
Don't put everything in the same container
You should not put everything in the same container: MySQL should run in its own container and the Python application in its own.
Use official images
You don't need to re-invent the wheel. Use official images as much as possible; in your case you should be using the mysql and python images.
Use docker-compose when multiple services are needed
Since your setup requires multiple services, use docker-compose; see the sketch after this list.
No need to use screen in docker
screen is used when you want a process to keep running after your SSH session disconnects, which is not needed in Docker. If you run docker run or docker-compose up with the -d flag, the container is launched in the background automatically.
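Putting those points together, a docker-compose.yml sketch; the service names, password, and app command are assumptions for illustration, not taken from the question:
version: "3"
services:
  db:
    image: mysql:5.7                 # official MySQL image in its own container
    environment:
      MYSQL_ROOT_PASSWORD: example   # assumed credential, change it
  app:
    build: .                         # the Python app built from this Dockerfile
    depends_on:
      - db
    command: python -m src.app       # the same module the screen command ran
Then docker-compose up -d starts both containers in the background, replacing both the service mysql start line and screen.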

Insert data after mysql started in a docker container

I am using Docker to run the MySQL service in a container. After the container starts, I want to insert some data into the database automatically via a Python script. This is my Dockerfile:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
ADD . /app
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip
RUN pip3 install --user -r requirements.txt
RUN python3 init.py
The last line runs the script that adds data to the database, but at that point the MySQL service has not started yet, so it fails when running docker build. How do I accomplish this? Thanks in advance.
According to the docs, the MySQL entrypoint automatically executes any .sh, .sql, or .sql.gz files found in /docker-entrypoint-initdb.d when the server is first initialized. So, create a shell script there that runs your Python script for you. If you call this file 01-my-script.sh, your Dockerfile will look like this:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip
# Copy requirements in first, and run them (so cache won't be invalidated)
COPY ./requirements.txt ./requirements.txt
RUN pip3 install --user -r requirements.txt
# Copy SQL Fixture
COPY ./01-my-script.sh /docker-entrypoint-initdb.d/01-my-script.sh
RUN chmod +x /docker-entrypoint-initdb.d/01-my-script.sh
# Copy the rest of your project
COPY . .
And your script will only contain:
#!/bin/sh
python3 /app/init.py
Now, when you bring up your container, your script will execute. Monitor the execution of the running container with docker logs -f <container_name> to make sure your script is running.
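For example (the image and container names here are made up for illustration):
docker build -t mysql-seeded .
docker run -d --name db mysql-seeded
docker logs -f db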
