Docker - No such file or directory when running the image - python

FROM python:3.10
COPY requirements.txt .
RUN pip install -r requirements.txt
#Make a copy of the current directory
COPY / ./
#Display list of files in directory
RUN ls /
ENTRYPOINT ["python", "/main.py"]
So this is my current Dockerfile, and this is what the listing displays when I build:
(screenshot of the directory listing)
This is the code that is giving me the issue
d1 = open(r"backend_resources\results\orlando_averaged_2019-01-01.geojson")
And throwing me this error when I run the image
FileNotFoundError: [Errno 2] No such file or directory: 'backend_resources\\results\\orlando_averaged_2019-01-01.geojson'
However, as the directory listing shows, backend_resources and the files inside it do exist, and as far as I can tell they are in the correct directories for this code to run properly. I'm still new to Docker, so I could well be doing something wrong.

I think the problem is the chosen path style: you are using a Windows-style path.
If your image is based on a Unix-like system (Debian, Alpine, etc.), use a Unix-style path:
d1 = open(r"/backend_resources/results/orlando_averaged_2019-01-01.geojson")
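If the script has to work both on Windows and inside a Linux container, a portable option is to build the path from its components with pathlib instead of hard-coding a separator. A minimal sketch, using the file layout from the question:

```python
from pathlib import Path

# Build the path from components so the correct separator is used
# on any operating system (no hard-coded "\" or "/").
geojson_path = Path("backend_resources") / "results" / "orlando_averaged_2019-01-01.geojson"

# Inside a Linux container this renders with forward slashes:
print(geojson_path.as_posix())
```

The same `Path` object can be passed straight to `open()` on either platform.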

Related

Dockerfile WORKDIR distracts running program from layer?

I made a Dockerfile to build a Docker image that can be run from AWS Batch. It contains multiple layers and copies files to '/opt', which I set as the WORKDIR.
I have to run a program called 'BLAST', a single executable that requires several parameters, including the location of the DB.
When I run the image, I get an error saying it cannot find the mounted DB location. The full error message is b'BLAST Database error: No alias or index file found for nucleotide database [/mnt/fsx/ntdb/nt] in search path [/opt:/fsx/ntdb:]\n', where /mnt/fsx/ntdb/nt is the DB path.
My only guess is that because I set WORKDIR in my Dockerfile, the default workspace is '/opt:'.
I wonder how I should fix this issue. By removing WORKDIR, or something else?
My Dockerfile looks like below
# Set Work dir
ARG FUNCTION_DIR="/opt"
# Get layers
FROM (aws-account).dkr.ecr.(aws-region).amazonaws.com/uclust AS layer_1
FROM (aws-account).dkr.ecr.(aws-region).amazonaws.com/blast AS layer_2
FROM public.ecr.aws/lambda/python:3.9
# Copy arg and set work dir
ARG FUNCTION_DIR
COPY . ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
# Copy from layers
COPY --from=layer_1 /opt/ .
RUN true
COPY --from=layer_2 /opt/ .
RUN true
COPY . ${FUNCTION_DIR}/
RUN true
# Copy and Install required libraries
COPY requirements.txt .
RUN true
RUN pip3 install -r requirements.txt
# To run lambda handler
RUN pip install \
--target "${FUNCTION_DIR}" \
awslambdaric
# To run blast
RUN yum -y install libgomp
# See files inside image
RUN dir -s
# Get permissions for files
RUN chmod +x /opt/main.py
RUN chmod +x /opt/mode/submit/main.py
# Set Entrypoint and CMD
ENTRYPOINT [ "python3" ]
CMD [ "-m", "awslambdaric", "main.lambda_handler" ]
Edit: Further info I found: looking at the error, the BLAST program tries to search for the DB at the path /opt:/fsx/ntdb:, which is the combination of the path set as WORKDIR in the Dockerfile and the BLASTDB path set via os.environ['BLASTDB'].
I figured out the problem after many debugging attempts. The problem was neither WORKDIR nor os.environ['BLASTDB']. The paths were correctly defined, and the BLAST program searching [/opt:/fsx/ntdb:] is the documented behaviour, since it looks in:
Current working directory (*)
User's HOME directory (*)
Directory specified by the NCBI environment variable
The standard system directory (“/etc” on Unix-like systems, and given by the environment variable SYSTEMROOT on Windows)
The actual solution was to check whether the file system was correctly mounted, and the permissions of the files inside it. I initially assumed the file system was mounted correctly, since I had already tested it from other Batch submit jobs many times, but only the mount folder had been created; the files did not exist. So even though the program tried to find the index file, there was nothing to find, hence the error.
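A cheap way to catch this class of problem early is to check that the mount point is actually populated before launching the program. A minimal sketch (the helper name is hypothetical, and the empty temporary directory stands in for a mount folder that was created but never had a file system mounted onto it):

```python
import os
import tempfile

def mount_is_populated(path):
    """True only if the directory exists and contains at least one entry."""
    return os.path.isdir(path) and bool(os.listdir(path))

# An empty directory mimics a mount point that was created
# but never actually had files mounted into it.
empty = tempfile.mkdtemp()
print(mount_is_populated(empty))  # False: the folder exists but holds no files
```

Running such a check at container start-up turns a cryptic downstream error into an immediate, readable one.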

python: can't open file 'main.py': [Errno 2] No such file or directory - docker

I'm seeing the above error when I try to run my docker image. Below are the screenshots of my docker file and the directory structure.
Since you have already specified a WORKDIR in the Dockerfile, don't copy your files to /.
Change your COPY command to
COPY . .
which copies the whole build context into the container's WORKDIR, and change the CMD as well:
CMD ["python", "src/main.py"]
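The reason WORKDIR matters here: Python resolves a relative path like "src/main.py" against the process's current working directory, which inside the container is whatever WORKDIR set. A small sketch of that behaviour (the directory layout is illustrative):

```python
import os
import tempfile

# WORKDIR in a Dockerfile sets the process's current working directory;
# Python resolves relative paths against it.
workdir = tempfile.mkdtemp()
os.makedirs(os.path.join(workdir, "src"))
with open(os.path.join(workdir, "src", "main.py"), "w") as f:
    f.write("print('hello')\n")

os.chdir(workdir)  # the equivalent of WORKDIR for this process
print(os.path.exists("src/main.py"))  # True: resolved against the cwd
```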

Reading an HDF file outside a Docker container from a Python script inside a container

I have a Python script, python_script.py, that reads an HDF5 file, hdf_file.h5, on my local machine. The directory path to the files is
folder1
  folder2
    python_script.py
    hdf_file.h5
I have the following sample code:
from pandas import read_hdf
df = read_hdf('hdf_file.h5')
When I run this code on my local machine, it works fine.
However, I need to place the Python script inside a Docker container, keep the HDF file out of the container, and have the code read the file. I want to have something like the following directory path for the container:
folder1
  folder2
    hdf_file.h5
    docker-folder
      python_script.py
      requirements.txt
      Dockerfile
I use the following Dockerfile:
FROM python:3
WORKDIR /project
COPY ./requirements.txt /project/requirements.txt
RUN pip install -r requirements.txt
COPY . /project
CMD [ "python", "python_script.py" ]
I am new to Docker and am having a lot of trouble figuring out how to get a Python script inside a container to read a file outside the container. What commands do I use or code changes do I make to be able to do this?
It seems you need to use Docker volumes (https://docs.docker.com/storage/volumes/).
Try the following:
docker run -v /path/where/hdf5/lives:/path/to/your/project/folder your_docker_image:your_tag
where the part before the : refers to the host machine and the part after it to the container.
Hope it helps!
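Inside the container, the script then refers to the container-side path of the mount, not the host path. A minimal sketch, assuming the file is mounted under /project as in the question's Dockerfile (the helper name is hypothetical; adjust the mount point to wherever you pass -v):

```python
from pathlib import Path

MOUNT_POINT = Path("/project")          # container-side path of the -v mount
HDF_FILE = MOUNT_POINT / "hdf_file.h5"  # file name from the question

def resolve_input(path):
    """Return the path if it exists; otherwise fail with a readable hint."""
    path = Path(path)
    if not path.exists():
        raise FileNotFoundError(
            f"{path} not found - was the volume mounted with "
            f"-v host_dir:{MOUNT_POINT}?"
        )
    return path

# Inside the container the script would then do:
# df = read_hdf(resolve_input(HDF_FILE))
```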

Using docker entrypoint and files outside docker

I'm writing a Python script which creates AWS CloudFormation change-sets. To simplify execution, I want to add this script to a Docker image and run it as the entrypoint.
To achieve that, I have to read the CF's template file and parameters file, both in JSON format.
When I execute the script in my local shell environment, everything works as expected.
When I run the Docker container and specify the files, the script says that it could not find the file.
Now my question is: how can I give the container access to these files?
docker pull cf-create-change-set:latest
docker run cf-create-change-set:latest --template-body ./template.json --cli-input-json ./parameters.json
Traceback (most recent call last):
File "/app/cf-create-change-set.py", line 266, in <module>
with open(CLI_INPUT_JSON, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: './template.json'
Here is my dockerfile:
FROM AWS_ACCOUNT_ID.dkr.ecr.AWS_REGION.amazonaws.com/cicd/docker-base-image:iat
WORKDIR /app
# Copy app data
COPY app/requirements.txt .
COPY app/cf-create-change-set.py .
RUN pip3 install --no-cache-dir -r /app/requirements.txt
ENTRYPOINT [ "/app/cf-create-change-set.py" ]
The reason for the error is that the file does not exist inside the container while it exists in your filesystem. There are at least two possible solutions.
You can either ADD (or COPY) the files into the image at build time (build your own Dockerfile) or bind-mount a local directory to a directory inside the container:
docker run -v "$(pwd)/templates:/templates" image:tag --template-body /templates/template.json
(the host side of -v has to be an absolute path, hence $(pwd)). This way, when you run the container, it has the contents of your local templates folder available under /templates, so /templates/template.json resolves. Read more about bind mounts.
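Inside the container, the script then opens the container-side path of the mount. A hedged sketch of what the JSON loading in such an entrypoint script might look like (the function name and error message are illustrative, not taken from cf-create-change-set.py):

```python
import json
import sys

def load_json(path):
    """Read a JSON file, exiting with a clear message if it is missing."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        sys.exit(f"{path} not found inside the container - did you bind-mount it?")

# With the bind mount in place, the container would be invoked as:
#   docker run -v "$(pwd)/templates:/templates" cf-create-change-set:latest \
#       --template-body /templates/template.json --cli-input-json /templates/parameters.json
# and the script would call, e.g.:
# template = load_json("/templates/template.json")
```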

Jenkins doesn't include referenced files when building conda package

I am building a small conda package with Jenkins (linux) that should just:
Download a .zip from an external reference holding font files
Extract the .zip
Copy the font files to a specific folder
Build the package
The build runs successfully, but the package does not include the font files; it is basically empty. My build.sh has:
mkdir $PREFIX\root\share\fonts
cp *.* $PREFIX\root\share\fonts
My meta.yaml source has:
source:
url: <ftp server url>/next-fonts.zip
fn: next-fonts.zip
In Jenkins I do:
mkdir build
conda build fonts
The console output is strange though at this part:
+ mkdir /var/lib/jenkins/conda-bld/fonts_1478708638575/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_prootsharefonts
+ cp Lato-Black.ttf Lato-BlackItalic.ttf Lato-Bold.ttf Lato-BoldItalic.ttf Lato-Hairline.ttf Lato-HairlineItalic.ttf Lato-Italic.ttf Lato-Light.ttf Lato-LightItalic.ttf Lato-Regular.ttf MyriadPro-Black.otf MyriadPro-Bold.otf MyriadPro-Light.otf MyriadPro-Regular.otf MyriadPro-Semibold.otf conda_build.sh /var/lib/jenkins/conda-bld/fonts_1478708638575/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_prootsharefonts
BUILD START: fonts-1-1
Source cache directory is: /var/lib/jenkins/conda-bld/src_cache
Found source in cache: next-fonts.zip
Extracting download
Package: fonts-1-1
source tree in: /var/lib/jenkins/conda-bld/fonts_1478708638575/work/Fonts
number of files: 0
To me it seems the cp either doesn't complete or copies to the wrong directory. Unfortunately, with the placeholder strings I really can't decipher where exactly the fonts land when they are copied; all I know is that /work/Fonts contains no files, so nothing is included in the package. While typing this, I also noticed that /work/Fonts starts with a capital F, while nowhere in the configuration or the scripts is there any instance of fonts with a capital F.
Any insight on what might go wrong?
mkdir $PREFIX\root\share\fonts
cp *.* $PREFIX\root\share\fonts
should be replaced with
mkdir $PREFIX/root/share/fonts
cp * $PREFIX/root/share/fonts
The build script was taken from another package that was built on Windows, and when changing the build script I forgot to change the path separators.
Additionally, on Linux mkdir does not create intermediate directories by default (unlike on Windows), so each level has to be created explicitly (or mkdir -p used). So this
mkdir $PREFIX/root/
mkdir $PREFIX/root/share/
mkdir $PREFIX/root/share/fonts/
cp * $PREFIX/root/share/fonts/
was the ultimate solution to the problem.
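The same pitfall exists when scripting the directory creation in Python: os.mkdir fails if a parent is missing, while os.makedirs creates the whole chain, like mkdir -p. A small sketch mirroring the build script's directory layout (the temporary directory stands in for $PREFIX):

```python
import os
import tempfile

prefix = tempfile.mkdtemp()  # stand-in for $PREFIX
target = os.path.join(prefix, "root", "share", "fonts")

# os.makedirs is the Python equivalent of `mkdir -p`: it creates
# every missing intermediate directory in a single call.
os.makedirs(target, exist_ok=True)
print(os.path.isdir(target))  # True
```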
