I have built a docker image using a Dockerfile that does the following:
FROM my-base-python-image
WORKDIR /opt/data/projects/project/
RUN mkdir files
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
COPY files/ files/
RUN python -m venv /opt/venv && . /opt/venv/bin/activate
RUN yum install -y unzip
WORKDIR files/
RUN unzip file.zip && rm -rf file.zip && . /opt/venv/bin/activate && python -m pip install *
WORKDIR /opt/data/projects/project/
That builds an image that allows me to run a custom command. In a terminal, for instance, here is the command I run after activating my project venv:
python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c
Arguments a & b are custom tags to identify input files. -c calls a block of code.
So to run the built image successfully, I run the container and map local files to input files:
docker run --rm -it -v /local/inputfile_a.json:/opt/data/projects/project/inputfile_a.json -v /local/inputfile_b.json:/opt/data/projects/project/inputfile_b.json image-name:latest bash -c 'source /opt/venv/bin/activate && python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c'
Besides shortening file paths, is there anything I can do to shorten the docker run command? I'm thinking that adding a CMD and/or ENTRYPOINT to the Dockerfile would help, but I cannot figure out how to do it as I get errors.
There are a couple of things you can do to improve this.
The simplest is to run the application outside of Docker. You mention that you have a working Python virtual environment. A design goal of Docker is that programs in containers can't generally access files on the host, so if your application is all about reading and writing host files, Docker may not be a good fit.
Your file paths inside the container are fairly long, and this is bloating your -v mount options. You don't need an /opt/data/projects/project prefix; it's very typical just to use short paths like /app or /data.
You're also installing your application into a Python virtual environment, but inside a Docker image, which provides its own isolation. As you're seeing in your docker run command and elsewhere, the mechanics of activating a virtual environment in Docker are a little hairy. It's also not necessary; just skip the virtual environment setup altogether. (You can also directly run /opt/venv/bin/python and it knows it "belongs to" a virtual environment, without explicitly activating it.)
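For example, keeping the same -v mounts, the docker run command from the question could drop the activation step entirely by calling the venv's interpreter directly:
docker run --rm -it \
  -v /local/inputfile_a.json:/opt/data/projects/project/inputfile_a.json \
  -v /local/inputfile_b.json:/opt/data/projects/project/inputfile_b.json \
  image-name:latest \
  /opt/venv/bin/python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c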
Finally, in your setup.py file, you can use a setuptools entry_points declaration to provide a script that runs your named module.
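A minimal sketch of that declaration, assuming your package is named project and pathA.ModuleB exposes a main() function (adjust both names to match your code):
# setup.py
from setuptools import setup, find_packages

setup(
    name="project",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # "main-script" is the command name used in the Dockerfile below
            "main-script = pathA.ModuleB:main",
        ],
    },
)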
That can reduce your Dockerfile to more or less:
FROM my-base-python-image
# OS-level setup
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
RUN yum install -y unzip
# Copy the application in
WORKDIR /app/files
COPY files/ ./
RUN unzip file.zip \
&& rm file.zip \
&& pip install *
# Typical runtime metadata
WORKDIR /app
CMD main-script --help
And then when you run it, you can:
# map the entire directory instead of individual files
docker run --rm -it \
  -v /local:/data \
  image-name:latest \
  main-script -a /data/inputfile_a.json -b /data/inputfile_b.json -c
You can also consider the docker run -w /data option to change the current directory, which would add a Docker-level argument but slightly shorten the script command line.
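That variant might look like:
docker run --rm -it \
  -v /local:/data \
  -w /data \
  image-name:latest \
  main-script -a inputfile_a.json -b inputfile_b.json -c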
I'm attempting to run a Python file from a Docker container but receive the error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./models/PriceNotifications.py\": stat ./models/PriceNotifications.py: no such file or directory": unknown.
I build and run using commands:
docker build -t pythonstuff .
docker tag pythonstuff adr/test
docker run -t adr/test
Dockerfile:
FROM ubuntu:16.04
COPY . /models
# add the bash script
ADD install.sh /
# change rights for the script
RUN chmod u+x /install.sh
# run the bash script
RUN /install.sh
# prepend the new path
ENV PATH /root/miniconda3/bin:$PATH
CMD ["./models/PriceNotifications.py"]
install.sh:
#!/bin/bash
apt-get update # updates the package index cache
apt-get upgrade -y # updates packages
# installs system tools
apt-get install -y bzip2 gcc git # system tools
apt-get install -y htop screen vim wget # system tools
apt-get upgrade -y bash # upgrades bash if necessary
apt-get clean # cleans up the package index cache
# INSTALL MINICONDA
# downloads Miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh
bash Miniconda.sh -b # installs it
rm -rf Miniconda.sh # removes the installer
export PATH="/root/miniconda3/bin:$PATH" # prepends the new path
# INSTALL PYTHON LIBRARIES
conda install -y pandas # installs pandas
conda install -y ipython # installs IPython shell
# CUSTOMIZATION
cd /root/
wget http://hilpisch.com/.vimrc # Vim configuration
I've tried modifying the CMD within the Dockerfile to:
CMD ["/models/PriceNotifications.py"]
but the same error occurs.
The file structure is as follows:
.
├── Dockerfile
├── install.sh
└── models/
    └── PriceNotifications.py
How should I modify the Dockerfile or dir structure so that models/PriceNotifications.py is found and executed?
Thanks to earlier comments, using the path:
CMD ["/models/models/PriceNotifications.py"]
instead of
CMD ["./models/PriceNotifications.py"]
allows the Python script to run.
I would have thought CMD ["python /models/models/PriceNotifications.py"] should be used instead of CMD ["/models/models/PriceNotifications.py"] to invoke the Python interpreter, but the script apparently runs on its own, presumably because it starts with a shebang line and its executable bit survived the COPY.
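For reference, if you did want to invoke the interpreter explicitly, the exec form of CMD doesn't run a shell, so each word must be a separate array element; a single string is treated as one (nonexistent) executable name:
CMD ["python", "/models/models/PriceNotifications.py"]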
I have a Dockerfile which looks like this:
FROM python:3.7-slim-stretch
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY vegetation.py .
Now, in my project I have two Python files, vegetation.py and forest.py, kept in separate folders. How can I create separate Docker images for the two files and run their containers separately?
If the base code is the same and the container just needs to start up with a different Python script, I suggest using a single image, so you don't have to manage two of them.
Set vegetation.py as the default: when the container starts without any override it will run vegetation.py, and if the FILE_TO_RUN environment variable is overridden at run time, the specified file will be run instead.
FROM python:3.7-alpine3.9
ENV FILE_TO_RUN="/vegetation.py"
COPY vegetation.py /vegetation.py
CMD ["sh", "-c", "python $FILE_TO_RUN"]
Now, if you want to run forest.py, just pass its path via the environment variable:
docker run -it -e FILE_TO_RUN="/forest.py" --rm my_image
or
docker run -it -e FILE_TO_RUN="/anyfile_to_run.py" --rm my_image
Updated:
You can manage this with ARG plus ENV in your Docker image.
FROM python:3.7-alpine3.9
ARG APP="default_script.py"
ENV APP=$APP
COPY $APP /$APP
CMD ["sh", "-c", "python /$APP"]
Now build with a build arg:
docker build --build-arg APP="vegetation.py" -t app_vegetation .
or
docker build --build-arg APP="forest.py" -t app_forest .
Now you're good to run:
docker run --rm -it app_forest
Or copy both scripts:
FROM python:3.7-alpine3.9
# assign some default script name to args
ARG APP="default_script.py"
ENV APP=$APP
COPY vegetation.py /vegetation.py
COPY forest.py /forest.py
CMD ["sh", "-c", "python /$APP"]
If you insist on creating separate images, you can always use the ARG instruction.
FROM python:3.7-slim-stretch
ARG file_to_copy
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY $file_to_copy .
And then build the image like this:
docker build --build-arg file_to_copy=vegetation.py .
or like this:
docker build --build-arg file_to_copy=forest.py .
When you start a Docker container, you can specify what command to run at the end of the docker run command. So you can build a single image that contains both scripts and pick which one runs when you start the container.
The scripts should be "normally" executable: they need to have the executable permission bit set, and they need to start with a line like
#!/usr/bin/env python3
and you should be able to locally (outside of Docker) run
. some_virtual_environment/bin/activate
./vegetation.py
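If the executable permission bit isn't set yet, set it before building:
chmod +x vegetation.py forest.py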
Once you've gotten through this, you can copy the content into a Docker image
FROM python:3.7-slim-stretch
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY sentools sentools
COPY data data
COPY vegetation.py forest.py ./
CMD ["./vegetation.py"]
Then you can build and run this image with either script.
docker build -t trees .
docker run --rm trees ./vegetation.py
docker run --rm trees ./forest.py
If you actually want this to be two separate images, you can create two separate Dockerfiles that differ only in their final COPY and CMD lines, and use the docker build -f option to pick which one to use.
$ tail -2 Dockerfile.vegetation
COPY vegetation.py ./
CMD ["./vegetation.py"]
$ docker build -t vegetation -f Dockerfile.vegetation .
$ docker run --rm vegetation
I have written a Dockerfile for my Python application.
The requirements are:
Install & start the MySQL server.
Run the application in a detached screen session.
Below is my Dockerfile:
FROM ubuntu:16.04
# Update OS
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip screen npm vim net-tools
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
COPY src /usr/src/app/src
COPY ./src/nsd.ini /etc/
RUN pwd
RUN cd /usr/src/app
RUN service mysql start
RUN /bin/bash -c "chmod +x src/run_demo_app.sh && src/run_demo_app.sh"
Below is the content of the bash script:
$ cat src/run_demo_app.sh
screen -dm bash -c "sleep 10; python -m src.app";
The problem is that MySQL doesn't start; I need to start it manually from the container.
Also, the screen session becomes dead and the application does not start. Running the script manually works fine.
So this is an understanding gap and nothing else. Note the following issues in your Dockerfile:
Never use service command
RUN service mysql start
Docker doesn't use an init system, so never use a service command inside a Dockerfile.
Don't put everything in same container
You should not put everything in the same container: MySQL should run in its own container and your Python app in its own.
Use official images
You don't need to re-invent the wheel. Use official images as much as possible; you should be using the mysql and python images in your case.
Use docker-compose when multiple services are needed
Since your case requires multiple services, use docker-compose, as in the sketch below.
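A minimal sketch of such a compose file, using the official images mentioned above (the service names, mysql tag, and password are illustrative assumptions):
# docker-compose.yml
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  app:
    build: .
    depends_on:
      - db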
No need to use screen in Docker
screen is used when you want a process to keep running even if your SSH session disconnects, so it is not needed in Docker. If you run your docker run or docker-compose up command with an additional -d flag, your container will automatically be launched in the background.
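For example, with the compose file sketched above:
docker-compose up -d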
How can I develop outside a docker container but still have nosetests watch my file changes and rerun my unit tests inside the container?
Here's my Dockerfile
FROM ubuntu
# Install Python.
RUN \
apt-get update && \
apt-get install -y python python-dev python-pip python-virtualenv && \
rm -rf /var/lib/apt/lists/* && \
pip install nose nose-watch mock && \
locale-gen en_US.UTF-8
# Define working directory.
WORKDIR /data/test/src
# Define default command.
CMD ["bash"]
Here are my commands:
docker build -t "test" .
docker run -it -v ~/test/src:/data/test/src test
When I run nosetests --with-watch inside the container, everything works. However, if I make file changes outside the container (I want to develop outside the container), then nosetests won't detect the changes and rerun the tests. I thought a volume was supposed to share files from host to container...
Sometimes I need to use modules which are not part of the default Python installation, and sometimes even distributions like Anaconda or Canopy don't include them. So every time I move my project to another machine or just reinstall Python, I need to download them again. So my question is: is there a way to store the necessary modules in the project folder and use them from there, without moving them into the default Python installation folder?
You can use a virtual environment or Docker to install the required modules in your project dir, so they are isolated from your system Python installation. In fact, you don't need Python installed on your machine at all when using Docker.
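For the virtual-environment route, a minimal sketch (run inside your project dir; the .venv directory name is just a convention):
python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt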
Here is my workflow when developing a Django web app with Docker. If your project dir is /Projects/sampleapp, change the current working directory to the project dir and run the following.
Run a docker container from your terminal:
docker run \
-it --rm \
--name django_app \
-v ${PWD}:/app \
-w /app \
-e PYTHONUSERBASE=/app/.vendors \
-p 8000:8000 \
python:3.5 \
bash -c "export PATH=\$PATH:/app/.vendors/bin && bash"
# Command explanation:
#
# docker run                        Run a docker container
# -it                               Set interactive and allocate a pseudo-TTY
# --rm                              Remove the container on exit
# --name django_app                 Set the container name
# -v ${PWD}:/app                    Mount current dir as /app in the container
# -w /app                           Set the current working directory to /app
# -e PYTHONUSERBASE=/app/.vendors   pip will install packages to /app/.vendors
# -p 8000:8000                      Open port 8000
# python:3.5                        Use the python:3.5 Docker image
# bash -c "..."                     Add /app/.vendors/bin to PATH and open the shell
On the container's shell, install the required packages:
pip install django celery django-allauth --user
pip freeze > requirements.txt
The --user option, along with the PYTHONUSERBASE environment variable, makes pip install the packages into /app/.vendors.
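If you want to confirm where user installs will land, with PYTHONUSERBASE set as above this should print a path under /app/.vendors:
python -c "import site; print(site.getusersitepackages())"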
Create the django project and develop the app as usual:
django-admin startproject sampleapp
cd sampleapp
python manage.py runserver 0.0.0.0:8000
The directory structure will look like this:
Projects/
sampleapp/
requirements.txt
.vendors/ # Note: don't add this dir to your VCS
sampleapp/
manage.py
...
This configuration enables you to install the packages in your project dir, isolated from your system. Note that you need to add requirements.txt to your VCS, but remember to exclude the .vendors/ dir.
When you need to move and run the project on another machine, run the docker command above and reinstall the required packages on the container's shell:
pip install -r requirements.txt --user