Run file in conda inside docker - python

I have Python code that I'm attempting to wrap in a Docker image:
FROM continuumio/miniconda3
# Python 3.9.7, Debian (use apt-get)
ENV TARGET=dev
RUN apt-get update
RUN apt-get install -y gcc
RUN apt-get install -y dos2unix
RUN apt-get install -y awscli
RUN conda install -y -c anaconda python=3.7
WORKDIR /app
COPY . .
RUN conda env create -f conda_env.yml
RUN echo "conda activate tensorflow_p36" >> ~/.bashrc
RUN pip install -r prod_requirements.txt
RUN pip install -r ./architectures/mask_rcnn/requirements.txt
RUN chmod +x aws_pipeline/set_env_vars.sh
RUN chmod +x aws_pipeline/start_gpu_aws.sh
RUN dos2unix aws_pipeline/set_env_vars.sh
RUN dos2unix aws_pipeline/start_gpu_aws.sh
RUN aws_pipeline/set_env_vars.sh $TARGET
Building the image works fine, and running it with the following command also works fine:
docker run --rm --name d4 -dit pd_v2 sh
My OS is Windows 11. When I use the Docker Desktop "CLI" button to enter the container, all I need to do is type "bash" and the conda environment "tensorflow_p36" is activated, so I can run my code.
When I try docker exec in the following manner:
docker exec d4 bash && <path_to_sh_file>
I get an error that the file doesn't exist.
What is missing here? Thanks

Won't bash && <path_to_sh_file> enter a bash shell, exit it successfully, and then try to run your sh file in a new shell? I think it would be better to put #!/usr/bin/bash as the top line of your sh file, and to make sure the file has executable permissions: chmod a+x <path_to_sh_file>
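A single docker exec that activates the environment and runs the script inside the container should also work. A minimal sketch, assuming conda is installed under /opt/conda (the default in the continuumio/miniconda3 base image) and using the script path from the question:
docker exec d4 bash -c "source /opt/conda/etc/profile.d/conda.sh && conda activate tensorflow_p36 && ./aws_pipeline/start_gpu_aws.sh"
Note that in docker exec d4 bash && <path_to_sh_file>, the && is interpreted by the shell on your host, so <path_to_sh_file> is looked up on the host after the exec returns, which is why the file appears not to exist.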


Docker 'run' not finding file

I'm attempting to run a Python file from a Docker container but receive the error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./models/PriceNotifications.py\": stat ./models/PriceNotifications.py: no such file or directory": unknown.
I build and run using commands:
docker build -t pythonstuff .
docker tag pythonstuff adr/test
docker run -t adr/test
Dockerfile:
FROM ubuntu:16.04
COPY . /models
# add the bash script
ADD install.sh /
# change rights for the script
RUN chmod u+x /install.sh
# run the bash script
RUN /install.sh
# prepend the new path
ENV PATH /root/miniconda3/bin:$PATH
CMD ["./models/PriceNotifications.py"]
install.sh:
apt-get update # updates the package index cache
apt-get upgrade -y # updates packages
# installs system tools
apt-get install -y bzip2 gcc git # system tools
apt-get install -y htop screen vim wget # system tools
apt-get upgrade -y bash # upgrades bash if necessary
apt-get clean # cleans up the package index cache
# INSTALL MINICONDA
# downloads Miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh
bash Miniconda.sh -b # installs it
rm -rf Miniconda.sh # removes the installer
export PATH="/root/miniconda3/bin:$PATH" # prepends the new path
# INSTALL PYTHON LIBRARIES
conda install -y pandas # installs pandas
conda install -y ipython # installs IPython shell
# CUSTOMIZATION
cd /root/
wget http://hilpisch.com/.vimrc # Vim configuration
I've tried modifying the CMD within the Dockerfile to:
CMD ["/models/PriceNotifications.py"]
but the same error occurs.
The file structure is as follows: the Dockerfile and install.sh sit in the project root, alongside a models/ directory containing PriceNotifications.py.
How should I modify the Dockerfile or directory structure so that models/PriceNotifications.py is found and executed?
Thanks to earlier comments, using the path:
CMD ["/models/models/PriceNotifications.py"]
instead of
CMD ["./models/PriceNotifications.py"]
allows the Python script to run. (COPY . /models copies the entire build context into /models, so the host-side models/ directory ends up at /models/models inside the container.)
I would have thought the interpreter had to be invoked explicitly, but the script seems to run on its own, presumably via a shebang line and the executable bit. Note that CMD ["python /models/models/PriceNotifications.py"] would not work either: in the JSON (exec) form each argument must be its own array element, so the explicit invocation would be CMD ["python", "/models/models/PriceNotifications.py"].
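You can see the exec-form argument splitting by overriding the command at run time; a quick sketch against the image from the question (assuming it was tagged adr/test as above):
docker run --rm adr/test "python /models/models/PriceNotifications.py"
docker run --rm adr/test python /models/models/PriceNotifications.py
The first passes a single argv entry, so Docker looks for an executable literally named "python /models/models/PriceNotifications.py" and fails; the second passes two entries and invokes the interpreter on the script.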

How can I run a Docker command after building?

I have a Dockerfile:
FROM ubuntu:18.04
RUN apt-get -y update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update -y
RUN apt-get install -y python3.7 build-essential python3-pip
RUN pip3 install --upgrade pip
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
ENV FLASK_APP application.py
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT python3 -m flask run --host=0.0.0.0
But I also want to run python3 download.py before the ENTRYPOINT starts. If I put it in the Dockerfile as a RUN step and then build, it executes at build time; I need it to execute only on Elastic Beanstalk.
How would I do that?
There's a pattern of using the Docker ENTRYPOINT to do first-time setup, and then launching the CMD. For example, you could write an entrypoint script like
#!/bin/sh
# Do the first-time setup
python3 download.py
# Run the CMD
exec "$#"
Since this is a shell script, you can include whatever logic or additional setup you need here.
In your Dockerfile, change the ENTRYPOINT line to CMD, COPY this script in, and set the script as the image's ENTRYPOINT.
...
COPY . /app
...
# If the script isn't already executable on the host
RUN chmod +x entrypoint.sh
# Must use JSON-array syntax
ENTRYPOINT ["/app/entrypoint.sh"]
# The same command as originally
CMD python3 -m flask run --host=0.0.0.0
If you want to debug this, since this setup honors the "command" part, you can run a one-off container that launches an interactive shell instead of the Flask process. This will still do the first-time setup, but then run the command from the docker run command instead of what was in the CMD line.
docker run --rm -it myimage bash
You can control whether python3 download.py runs by using an environment variable, and then when running locally you pass it with docker run -e....
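A minimal sketch of that pattern, extending the entrypoint script above with a hypothetical RUN_DOWNLOAD flag (the variable name is illustrative):
#!/bin/sh
# Only do the first-time setup when RUN_DOWNLOAD=1 (hypothetical flag)
if [ "$RUN_DOWNLOAD" = "1" ]; then
    python3 download.py
fi
# Hand off to the CMD
exec "$@"
Locally, a plain docker run myimage skips the download; on Elastic Beanstalk (or to test locally) set the variable, e.g. docker run -e RUN_DOWNLOAD=1 myimage.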

Docker python output csv file

I have a Python script that should output a CSV file. I'm trying to get this file into the current working directory, but without success.
This is my Dockerfile
FROM python:3.6.4
RUN apt-get update && apt-get install -y libaio1 wget unzip
WORKDIR /opt/oracle
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    cd /opt/oracle/instantclient* && \
    rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install pystan
RUN apt-get -y update && python3 -m pip install cx_Oracle --upgrade
RUN pip install -r requirements.txt
CMD [ "python", "Main.py" ]
And I run the container with the following command:
docker container run -v $pwd:/home/learn/rstudio_script/output image
It is bad practice to bind-mount a volume just to get one file from your container onto your host.
Instead, you should leverage the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
You can have this command run automatically from a bash script, after your docker run.
So something like:
#!/bin/bash
# this stores the container id
CONTAINER_ID=$(docker run -dit img)
docker cp $CONTAINER_ID:/some_path host_path
If you are adamant about using a bind mount, then as others have pointed out, the issue is most likely that your Python script isn't writing the CSV to the correct path.
Your script Main.py is probably not trying to write to /home/learn/rstudio_script/output. The working directory in the container is /app because of the last WORKDIR directive in the Dockerfile. You can override that at runtime with --workdir but then the CMD would have to be changed as well.
One solution is to have your script write files to /output/ and then run it like this:
docker container run -v $PWD:/output/ image
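A quick way to confirm that the mount is wired up correctly before involving the Python script (a throwaway check; test.txt is just an illustrative filename):
docker container run --rm -v "$PWD":/output/ image sh -c 'echo ok > /output/test.txt'
ls test.txt
If test.txt appears on the host, the mount works and any remaining problem is the path Main.py writes to.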

How to run a bash script through pycharm?

I have the following set of bash commands:
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=my_password' \
--name 'sql1' -p 1401:1433 \
-v "my_space":/opt/project \
-d mcr.microsoft.com/mssql/server:2017-latest
winpty docker exec -it sql1 bash
mkdir -p /var/opt/mssql/backup
cp my_db /var/opt/mssql/backup/
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "my_password" -i /opt/project/scripts/database_import/sql_script.sql
apt-get update -y
apt-get install python3-pip -y
python3 -m pip install pymssql
python3 -m pip install pandas==0.19.2
python3 -m pip install time
python3 -m pip install sqlalchemy
python3 -m pip install sqlalchemy_utils
cd /opt/project/
python3 scripts/database_import/import_database.py
Essentially, this set of commands pulls the MSSQL Server image, restores a database, installs some Python packages, and runs a Python script inside the MSSQL Docker container.
Is there a way to run this bash script from PyCharm?
Sure thing. If you are using the latest version, then Shell Script support should be available: https://www.jetbrains.com/help/idea/shell-scripts.html
So, for example, when I have a test.sh file open in the editor, I can just click the green run button in the gutter and PyCharm will run it, or create a Run Configuration for it (see the link above).
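For the commands above, that means saving them into a script file first; a minimal sketch (the setup.sh filename and the shebang are assumptions, not from the original post):
#!/usr/bin/env bash
# setup.sh -- illustrative name; stop on the first failing command
set -e
docker pull mcr.microsoft.com/mssql/server:2017-latest
# ...the remaining commands from the question follow here...
Open the file in PyCharm and run it with the gutter button, which uses the bundled Shell Script plugin.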

HOW to fix 'PermissionError: [Errno 13] Permission denied' in python-crontab in docker image?

I create a Docker image in order to run a Python script on a schedule, so I use the python-crontab module. How can I solve the permission denied problem?
Ubuntu 16.04.6 LTS
python 3.5.2
I created sche.py, which triggers weather.py.
It works locally, but it fails once packaged into a Docker image.
```
#dockerfile
FROM python:3.5.2
WORKDIR /weather
ENTRYPOINT ["/weather"]
ADD . /weather
RUN chmod u+x sche.py
RUN chmod u+x weather.py
RUN mkdir /usr/bin/crontab
#add due to /usr/bin/crontab not found
RUN pip3 install python-crontab
RUN pip3 install -r requirements.txt
EXPOSE 80
#ENV NAME World
CMD ["sudo"]
#CMD ["python", "sche.py"] ## build step fail
ENTRYPOINT ["python","sche.py"]
## can build same as "RUN ["python","sche.py"] "
```
I expect it to run inside the Docker image rather than having to run each Python file individually.
Try adding USER root after the FROM python:3.5.2 line.
Remove CMD ["sudo"] and ENTRYPOINT ["/weather"].
Updated
Replace RUN mkdir /usr/bin/crontab with:
RUN apt-get update \
&& apt-get install -y cron \
&& apt-get autoremove -y
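Putting those suggestions together, the Dockerfile might look like this (a sketch under the assumptions above, not a tested build):
```
FROM python:3.5.2
USER root
WORKDIR /weather
ADD . /weather
# install real cron instead of creating a /usr/bin/crontab directory
RUN apt-get update \
    && apt-get install -y cron \
    && apt-get autoremove -y
RUN pip3 install python-crontab
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python", "sche.py"]
```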
