Unable to mount host directory to docker container - python

I have a Python app that runs every day to download images and saves them into per-day folders like /home/ubuntu/images/yyyymmdd.
I have built a Docker image of my Python app on Ubuntu 20. When I run the app with the host directory mounted, the log prints folder created /home/ubuntu/images/20220123, but I cannot see any such folder on the host.
I checked the Docker directory /var/lib/docker and found that random hashes are created inside the containers and overlay2 folders. So I tried mounting both directories as below, but no luck.
sudo docker run -t -i -v /home/ubuntu/images:/var/lib/docker/containers --network=host testapp/img-downloader:0.0.1
sudo docker run -t -i -v /home/ubuntu/images:/var/lib/docker/overlay2 --network=host testapp/img-downloader:0.0.1
I can see the date folder created and the image files saved, but like this:
/var/lib/docker/overlay2/d52bcf61cae2e563c3c8561bab53b4bb2dd2ea2d633a14d40c96d7992fffae28/diff/home/ubuntu/images/20220123
What am I missing here, so that images are saved to the host directory /home/ubuntu/images/20220123 instead of inside the Docker container?
My Dockerfile is as below -
FROM alpine:3.14
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3-dev mariadb-dev gcc musl-dev g++ && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
COPY ./requirements.txt /requirements.txt
WORKDIR /
RUN pip3 install -r requirements.txt
COPY ./ /
ENTRYPOINT [ "python3" ]
CMD [ "./main.py" ]
Please help. Thanks.

...What am I missing here, so that images are saved to the host directory /home/ubuntu/images/20220123 instead of inside the Docker container?
Presumably you mean you want to save images to a directory on the host and not inside the container. There's no need to mount /var/lib/docker/...; you need to ensure your program saves files to a path that is bind-mounted from the host. Example:
mkdir images # Create a directory on the host to hold the images
docker run -it --rm -v ~/images:/images alpine ash -c "mkdir /images/yesterday; mkdir /images/today; echo 'hello' > /images/today/msg.txt; echo 'done.'"
After the container exits, running ls -ld images/* on the host will show the two directories created, and cat images/today/msg.txt will print the content saved from inside the container (this simulates the image downloads your app performs).
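Applied to your app, a minimal sketch would be to mount the host images directory onto the same path inside the container (this assumes your program writes to /home/ubuntu/images inside the container; adjust the right-hand side if it writes elsewhere):
sudo docker run -t -i -v /home/ubuntu/images:/home/ubuntu/images --network=host testapp/img-downloader:0.0.1
The per-day folders then appear directly under /home/ubuntu/images on the host.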

Related

No such file or directory error when running Docker container

I have a REST API for a Flask app with an Oracle database, for which I use Oracle Instant Client.
I managed to run the app on my computer and it works fine; my task is to write a Dockerfile for this app. I don't have much experience with Docker.
This is the Dockerfile that I have written:
FROM python:3.7.5-slim-buster
# Installing Oracle instant client
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip \
    && wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip \
    && unzip instantclient-basiclite-linuxx64.zip \
    && rm -f instantclient-basiclite-linuxx64.zip \
    && cd /opt/oracle/instantclient* \
    && rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
    && echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
    && ldconfig
WORKDIR /app
COPY . .
EXPOSE 5000
CMD ["python", "/app/__init__.py"]
I use the following commands:
docker build - < Dockerfile
and the Docker image builds with no errors
docker run -d -p 5000:5000 (docker image id)
docker start -ai (docker container id)
And I get this error: python: can't open file '/app/__init__.py': [Errno 2] No such file or directory
The folder structure of the app on my computer is the following:
C:\Proiecte_python\Flask_Docker_App-Start\app
and in app are the Oracle Instant Client, the Python file and the Dockerfile.
Can someone please help me? I think something is wrong in the Dockerfile CMD path or something like that. I have tried many variants but none work.
The last line of your Dockerfile
CMD ["python", "/app/__init__.py"]
is equivalent to executing
python /app/__init__.py
The error you are getting says that the file /app/__init__.py does not exist inside the container.
The lines
WORKDIR /app
COPY . .
tell Docker to change into the /app directory in the container and then copy all files from your host machine (e.g. your physical machine) into it. (COPY . . means: copy from the current directory of your host, i.e. the location you run docker commands from, into the current working directory of the container, /app.)
It seems that, as part of receiving the Dockerfile, you should have also downloaded the __init__.py file, and the Dockerfile would then have copied that into your container for you.
Alternatively, you may have missed a step in your instructions where you were meant to write your own __init__.py file for testing.
Either way, your solution is to find the __init__.py file, put it into your current working directory ( C:\Proiecte_python\Flask_Docker_App-Start\app ), and ensure that you run your docker build and docker run commands from that same directory. Note also that docker build - < Dockerfile sends only the Dockerfile itself as the build context, so COPY . . has nothing to copy; build with docker build . from the project directory so the context includes your files. E.g.:
cd C:\Proiecte_python\Flask_Docker_App-Start\app
docker build <....>
docker run <....>
docker start <....>
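With concrete values, that might look like this (the flask-app image tag is just a placeholder):
cd C:\Proiecte_python\Flask_Docker_App-Start\app
docker build -t flask-app .
docker run -d -p 5000:5000 flask-app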
Or your other solution is to go back to the instructions and ensure that you have created the Python file and put it in the correct place.
For a very basic Flask/Docker tutorial, see the link below:
https://runnable.com/docker/python/dockerize-your-flask-application

A more elegant docker run command

I have built a docker image using a Dockerfile that does the following:
FROM my-base-python-image
WORKDIR /opt/data/projects/project/
RUN mkdir files
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
COPY files/ files/
RUN python -m venv /opt/venv && . /opt/venv/activate
RUN yum install -y unzip
WORKDIR files/
RUN unzip file.zip && rm -rf file.zip && . /opt/venv/bin/activate && python -m pip install *
WORKDIR /opt/data/projects/project/
That builds an image that allows me to run a custom command. In a terminal, for instance, here is the command I run after activating my project venv:
python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c
The -a and -b arguments are custom flags that identify the input files; -c triggers a block of code.
So to run the built image successfully, I run the container and map local files to input files:
docker run --rm -it -v /local/inputfile_a.json:/opt/data/projects/project/inputfile_a.json -v /local/inputfile_b.json:/opt/data/projects/project/inputfile_b.json image-name:latest bash -c 'source /opt/venv/bin/activate && python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c'
Besides shortening file paths, is there anything I can do to shorten the docker run command? I'm thinking that adding a CMD and/or ENTRYPOINT to the Dockerfile would help, but I cannot figure out how to do it, as I get errors.
There are a couple of things you can do to improve this.
The simplest is to run the application outside of Docker. You mention that you have a working Python virtual environment. A design goal of Docker is that programs in containers can't generally access files on the host, so if your application is all about reading and writing host files, Docker may not be a good fit.
Your file paths inside the container are fairly long, and this is bloating your -v mount options. You don't need an /opt/data/projects/project prefix; it's very typical just to use short paths like /app or /data.
You're also installing your application into a Python virtual environment, but inside a Docker image, which provides its own isolation. As you're seeing in your docker run command and elsewhere, the mechanics of activating a virtual environment in Docker are a little hairy. It's also not necessary; just skip the virtual environment setup altogether. (You can also directly run /opt/venv/bin/python and it knows it "belongs to" a virtual environment, without explicitly activating it.)
Finally, in your setup.py file, you can use a setuptools entry_points declaration to provide a script that runs your named module.
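A minimal sketch of that declaration (pathA and ModuleB are the names from your command; the assumption here is that ModuleB exposes a main() callable for the script to invoke):

# setup.py -- sketch; assumes pathA.ModuleB defines a main() function
from setuptools import setup, find_packages

setup(
    name="project",
    version="0.1",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # installs a main-script command that runs pathA.ModuleB.main()
            "main-script = pathA.ModuleB:main",
        ],
    },
)

After pip install, main-script is on $PATH, which is what the Dockerfile below relies on.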
That can reduce your Dockerfile to more or less
FROM my-base-python-image
# OS-level setup
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
RUN yum install -y unzip
# Copy the application in
WORKDIR /app/files
COPY files/ ./
RUN unzip file.zip \
&& rm file.zip \
&& pip install *
# Typical runtime metadata
WORKDIR /app
CMD main-script --help
And then when you run it, you can:
# just map the entire directory with one -v option
docker run --rm -it \
    -v /local:/data \
    image-name:latest \
    main-script -a /data/inputfile_a.json -b /data/inputfile_b.json -c
You can also consider the docker run -w /data option to change the current directory, which would add a Docker-level argument but slightly shorten the script command line.
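For instance (same hypothetical image and mount as above):
docker run --rm -it -v /local:/data -w /data image-name:latest \
    main-script -a inputfile_a.json -b inputfile_b.json -c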

Docker run application in a volume

In my Django Docker app I would like to use a volume to manage my application files. Here is my Dockerfile:
FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
VOLUME /var/lib/cathstudio/data
WORKDIR /var/lib/cathstudio/data
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -t /var/lib/cathstudio/data -r requirements.txt
ENV PYTHONUNBUFFERED 1
ENV PYTHONPATH /var/lib/cathstudio/data
COPY . /var/lib/cathstudio/data
ENTRYPOINT python /var/lib/cathstudio/data/manage.py runserver 0.0.0.0:8000
but when I run my app:
docker run -d -it --rm --link postgres:postgres --name=cathstudio myrepo/app_studio:latest
I get
python: can't open file '/var/lib/cathstudio/data/manage.py': [Errno 2] No such file or directory
The same happens if I write just ENTRYPOINT python manage.py runserver 0.0.0.0:8000 in my Dockerfile.
Where is my file wrong? How can I run my app using a volume for storing the app files?
Many thanks in advance.
Can you try using the -v flag rather than VOLUME in the Dockerfile? I'm not sure why, but VOLUME creates an empty volume rather than mounting the directory along with all of its data.
Remove the VOLUME line from the Dockerfile and try the following:
docker run -d -it --rm -v /var/lib/cathstudio/data/:/var/lib/cathstudio/data --link postgres:postgres --name=cathstudio myrepo/app_studio:latest
Volumes are not intended to hold code. Especially given your comment to #i4nk1t's answer ("I want to distribute my Docker image"), the code should be in the image itself, not in a volume.
You should just delete the VOLUME line.
Declaring VOLUME in a Dockerfile has some key side effects. One of them is that changes made to that directory by any later build step are discarded, which is what is keeping your application from working.
I tend to recommend a workflow where you make sure your application works locally and then package it in Docker. If you can't use your host Python and really have to do live development in the container environment, you can use the docker run -v option (or equivalent) to mount host content onto any directory in any container, regardless of whether or not it was declared as a VOLUME.
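A sketch of the corrected Dockerfile, with the VOLUME line removed and the packages installed normally instead of with pip install -t (everything else is taken from the question):

FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
ENV PYTHONUNBUFFERED 1
WORKDIR /var/lib/cathstudio/data
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
ENTRYPOINT python manage.py runserver 0.0.0.0:8000

Because there is no VOLUME declaration, the COPY now persists into the image, and the code ships with the image when you distribute it.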

Store modules in project folder

Sometimes I need to use modules which are not part of the default Python installation, and sometimes even distributions like Anaconda or Canopy do not include them. So every time I move my project to another machine, or just reinstall Python, I need to download them again. My question is: is there a way to store the necessary modules in the project folder and use them from there, without moving them to the default Python installation folder?
You can use a virtual environment or Docker to install the required modules in your project dir, so they are isolated from your system Python installation. In fact, you don't need Python installed on your machine at all when using Docker.
Here is my workflow when developing a Django web app with Docker. If your project dir is /Projects/sampleapp, change the current working directory to the project dir and run the following.
Run a docker container from your terminal:
docker run \
-it --rm \
--name django_app \
-v ${PWD}:/app \
-w /app \
-e PYTHONUSERBASE=/app/.vendors \
-p 8000:8000 \
python:3.5 \
bash -c "export PATH=\$PATH:/app/.vendors/bin && bash"
# Command explanation:
#
# docker run                        Run a docker container
# -it                               Set interactive and allocate a pseudo-TTY
# --rm                              Remove the container on exit
# --name django_app                 Set the container name
# -v ${PWD}:/app                    Mount current dir as /app in the container
# -w /app                           Set the current working directory to /app
# -e PYTHONUSERBASE=/app/.vendors   pip will install packages to /app/.vendors
# -p 8000:8000                      Open port 8000
# python:3.5                        Use the python:3.5 docker image
# bash -c "..."                     Add /app/.vendors/bin to PATH and open the shell
On the container's shell, install the required packages:
pip install django celery django-allauth --user
pip freeze > requirements.txt
The --user option, together with the PYTHONUSERBASE environment variable, makes pip install the packages into /app/.vendors.
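As an optional sanity check, you can confirm where pip will install from inside the container:
python -m site --user-base
# prints /app/.vendors when PYTHONUSERBASE is set as above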
Create the Django project and develop the app as usual:
django-admin startproject sampleapp
cd sampleapp
python manage.py runserver 0.0.0.0:8000
The directory structure will look like this:
Projects/
  sampleapp/
    requirements.txt
    .vendors/        # Note: don't add this dir to your VCS
    sampleapp/
      manage.py
      ...
This configuration lets you install the packages in your project dir, isolated from your system. Note that you need to add requirements.txt to your VCS, but remember to exclude the .vendors/ dir.
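For example, if your VCS is git (an assumption; the question does not say which VCS is in use), the ignore rule is a one-liner:
# .gitignore
.vendors/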
When you need to move and run the project on another machine, run the docker command above and reinstall the required packages on the container's shell:
pip install -r requirements.txt --user

Docker. No such file or directory

I have some files which I want to move into a Docker container, but in the end Docker can't find a file.
The folder with the files on the local machine is /home/katalonne/flask4.
File structure, in case it matters: (screenshot omitted)
The Dockerfile:
#
# First Flask App Dockerfile
#
#
# Pull base image.
FROM centos:7.0.1406
# Build commands
RUN yum install -y python-setuptools mysql-connector mysql-devel gcc python-devel
RUN easy_install pip
RUN mkdir /opt/flask4
WORKDIR /opt/flask4
ADD requirements.txt /opt/flask4
RUN pip install -r requirements.txt
ADD . /opt/flask4
# Define default command.
CMD ["python","hello.py"]
# Expose ports.
EXPOSE 5000
So I built the image with this command:
docker build -t flask4 .
I ran the container with a volume:
docker run -d -p 5000:5000 -v /home/Katalonne/flask4:/opt/flask4 --name web flask4
And when I check the container logs:
docker logs -f web
I get this error saying it cannot find my hello.py file:
python: can't open file 'hello.py': [Errno 2] No such file or directory
What is my mistake?
P.S.: I'm partially a Docker and Linux noob.
The files and directories located in the same place as your Dockerfile are indeed available (temporarily) to your docker build as the build context. But unless you use ADD or COPY to copy them permanently into the image, they will not be available to your container after the build is done. The build context exists only for the build; you have to copy the files into the image.
You can add the following command:
...
ADD . /opt/flask4
ADD . .
# Define default command.
CMD ["python","hello.py"]
The line ADD . . copies everything in your temporary build context into the image, at the location your WORKDIR points to (/opt/flask4).
If you only wanted to add hello.py to your container, then use
ADD hello.py hello.py
So, when you run CMD ["python","hello.py"], the working directory will be /opt/flask4, hello.py should be in there, and running python hello.py in that directory should work.
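One more thing worth checking: the docker run command in the question bind-mounts /home/Katalonne/flask4 over /opt/flask4, and a bind mount hides whatever ADD put at that path, so hello.py must actually exist in that host directory. Host paths are also case-sensitive, and the question gives the folder as /home/katalonne/flask4 with a lowercase k. A sketch of the run command using that lowercase path (assuming that is where the files really live):
docker run -d -p 5000:5000 -v /home/katalonne/flask4:/opt/flask4 --name web flask4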
HTH.
