I need to install a pip library in my Docker image and run it from the very same Dockerfile.
Example:
FROM python
COPY requirements.txt /app/requirements.txt
RUN pip3 install -U -r /app/requirements.txt
RUN 'python3 -c "from ete3 import NCBITaxa;NCBITaxa();"'
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-c", "gunicorn_conf.py", "api.main:app"]
Running this fails with:
/bin/sh: 1: python3 -c "import ete3": not found
The command '/bin/sh -c 'python3 -c "from ete3 import NCBITaxa;NCBITaxa();"'' returned a non-zero code: 127
However, if I remove that RUN directive, finish the build, and attach to the running container,
python3 -c "from ete3 import NCBITaxa;NCBITaxa();" works fine.
I assume that since it is pip-installed into /usr/local/lib/python3.8/dist-packages/ete3 it should be on the PYTHONPATH? Or is it that the system is not yet available at image build time? Any tricks?
I need this to pull some NCBI database updates into the image at build time, so that the ete3 library is up to date for deployment. Otherwise ete3 pulls the files at runtime, and that is slow and terrible.
One alternative could be to prebuild an image with the pip library installed (Dockerfile 1) and then run the update in a second Dockerfile (Dockerfile 2)?
You need to put the commands you want to run into a Python script, e.g. something.py.
Your something.py file can look like this:
import ete3
...
...
...
Then change the last line to this:
CMD python /home/username/something.py
Based on your comment, you can do the same thing with a bash script and run it from the CMD instruction in the Dockerfile.
Your script will contain the commands you want to run:
#!/bin/bash
gunicorn -k uvicorn.workers.UvicornWorker -c gunicorn_conf.py api.main:app
And you can add this line to your Dockerfile:
CMD /bash_script.sh
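As an aside, the exit code 127 in the question comes from the single quotes around the whole RUN argument: /bin/sh then looks for a single executable literally named python3 -c "...". Removing the outer quotes lets the NCBI download happen at build time. A minimal sketch, assuming requirements.txt pins ete3:
FROM python
COPY requirements.txt /app/requirements.txt
RUN pip3 install -U -r /app/requirements.txt
# Unquoted: /bin/sh parses python3 as the command and -c "..." as its argument
RUN python3 -c "from ete3 import NCBITaxa; NCBITaxa()"
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-c", "gunicorn_conf.py", "api.main:app"]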
I'm trying to run a file from a GitHub repo using the Command Prompt on Windows. I started with these commands:
python -m pip install virtualenv
python -m virtualenv ocopus_venv
.\ocopus_venv\Scripts\activate.bat
curl -O https://github.com/zuphilip/ocropy-models/raw/master/en-default.pyrnn.gz
move en-default.pyrnn.gz models
No errors so far, but when I run:
./ocropus-nlbin tests/ersch.png -o book
I got this error: the '.' is not recognized
How can I make this command run properly?
On Windows cmd this doesn't work. Try it without the ./ at the beginning. Considering that ocropy supports Windows, you can also run it with Python 2.7:
python ocropus-nlbin tests/ersch.png -o book
Don't forget you have to run python setup.py install beforehand.
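Putting it together on Windows cmd (a sketch, assuming the virtualenv is active and you are inside the ocropy checkout):
python setup.py install
python ocropus-nlbin tests/ersch.png -o book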
I would like my script to use the same python3 executable when launched from crontab as when launched manually (in an ssh session).
Bash script:
#!/bin/bash
PYTHONPATH="$(which python3)"
echo $PYTHONPATH
python3 test.py
Result from the ssh command line, launched manually:
/usr/local/bin/python3
Result in the log file from crontab -e:
/usr/bin/python3
I would like the script launched by crontab to use the /usr/local/bin/python3 executable instead of /usr/bin/python3,
OR,
if that is not possible, to make the dependencies of my code available to /usr/bin/python3.
How can I achieve this? Thank you very much for your help.
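One common approach (a sketch, assuming a Vixie-style cron that honours variable assignments at the top of the crontab; the schedule and script path are placeholders): either set PATH in the crontab so cron resolves the same python3 as your ssh session, or call the interpreter by absolute path.
# in crontab -e
PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * /home/user/test.sh

# or hard-code the interpreter inside the script itself
#!/bin/bash
/usr/local/bin/python3 test.py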
The python inside the Docker container will not necessarily have the same path. If you want all the modules installed for your VM's python3, create a requirements.txt file using pip freeze > requirements.txt, COPY this file as part of your Dockerfile, and install it while building the image with pip install -r requirements.txt.
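A sketch of that flow (file locations are placeholders):
# on the VM
pip freeze > requirements.txt

# in the Dockerfile
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt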
I have built a docker image using a Dockerfile that does the following:
FROM my-base-python-image
WORKDIR /opt/data/projects/project/
RUN mkdir files
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
COPY files/ files/
RUN python -m venv /opt/venv && . /opt/venv/bin/activate
RUN yum install -y unzip
WORKDIR files/
RUN unzip file.zip && rm -rf file.zip && . /opt/venv/bin/activate && python -m pip install *
WORKDIR /opt/data/projects/project/
That builds an image that allows me to run a custom command. In a terminal, for instance, here is the command I run after activating my project venv:
python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c
Arguments a & b are custom tags to identify input files. -c calls a block of code.
So to run the built image successfully, I run the container and map local files to input files:
docker run --rm -it -v /local/inputfile_a.json:/opt/data/projects/project/inputfile_a.json -v /local/inputfile_b.json:/opt/data/projects/project/inputfile_b.json image-name:latest bash -c 'source /opt/venv/bin/activate && python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c'
Besides shortening file paths, is there anything I can do to shorten the docker run command? I'm thinking that adding a CMD and/or ENTRYPOINT to the Dockerfile would help, but I cannot figure out how to do it, as I get errors.
There are a couple of things you can do to improve this.
The simplest is to run the application outside of Docker. You mention that you have a working Python virtual environment. A design goal of Docker is that programs in containers can't generally access files on the host, so if your application is all about reading and writing host files, Docker may not be a good fit.
Your file paths inside the container are fairly long, and this is bloating your -v mount options. You don't need an /opt/data/projects/project prefix; it's very typical just to use short paths like /app or /data.
You're also installing your application into a Python virtual environment, but inside a Docker image, which provides its own isolation. As you're seeing in your docker run command and elsewhere, the mechanics of activating a virtual environment in Docker are a little hairy. It's also not necessary; just skip the virtual environment setup altogether. (You can also directly run /opt/venv/bin/python and it knows it "belongs to" a virtual environment, without explicitly activating it.)
Finally, in your setup.py file, you can use a setuptools entry_points declaration to provide a script that runs your named module.
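For instance (a sketch; the script name main-script and the entry point pathA.ModuleB:main are assumptions based on the question, and presume ModuleB defines a main() function):
from setuptools import setup, find_packages

setup(
    name="project",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # installs a main-script wrapper that calls pathA.ModuleB.main()
            "main-script = pathA.ModuleB:main",
        ],
    },
)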
That can reduce your Dockerfile to more or less
FROM my-base-python-image
# OS-level setup
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
RUN yum install -y unzip
# Copy the application in
WORKDIR /app/files
COPY files/ ./
RUN unzip file.zip \
&& rm file.zip \
&& pip install *
# Typical runtime metadata
WORKDIR /app
CMD main-script --help
And then when you run it, you can:
# map the entire directory instead of individual files
docker run --rm -it \
  -v /local:/data \
  image-name:latest \
  main-script -a /data/inputfile_a.json -b /data/inputfile_b.json -c
You can also consider the docker run -w /data option to change the current directory, which would add a Docker-level argument but slightly shorten the script command line.
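For example (same placeholder paths as above):
docker run --rm -it -v /local:/data -w /data image-name:latest \
  main-script -a inputfile_a.json -b inputfile_b.json -c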
I am trying to follow the instructions on this page:
How do I create a Lambda layer using a simulated Lambda environment with Docker?
on a Windows 7 environment.
I followed all of these steps:
installed Docker Toolbox
created a local folder 'C:/Users/Myname/mylayer' containing requirements.txt and python 3.8 folder structure
ran the following commands in Docker Toolbox:
cd c:/users/myname/mylayer
docker run -v "$PWD":/var/task "lambci/lambda:build-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"
It returns the following error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
I don't understand what I am doing wrong. Maybe something obvious (I'm a beginner) but I spent the whole day trying to figure it out and it is getting quite frustrating. Appreciate the help!
I ran the following in Windows 10 PowerShell and it worked:
docker run -v ${pwd}:/var/task "amazon/aws-sam-cli-build-image-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages; exit"
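If you are stuck on Docker Toolbox (Git Bash/MinGW) rather than PowerShell, a common cause of that "No such file or directory" error is MSYS path mangling of the -v mount; note also that the Toolbox VM only auto-shares C:\Users. Spelling the path in the /c/... form usually works (a sketch, using the folder from the question):
docker run -v "/c/Users/Myname/mylayer":/var/task "lambci/lambda:build-python3.8" \
  /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"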
Two questions:
Is there a Python equivalent to forever.js to run a Python process in the background without requiring sudo?
Is it possible to use forever.js with Python? How about with a virtualenv?
It is easy to use Python with forever.js:
forever start -c python python_script.py
Using it with virtualenv is a little more complicated; I did it with a bash script (call it python_virtualenv):
#!/bin/bash
# Script to run a Python file using the local virtualenv
source bin/activate
bin/python "$@"
Now use that script with forever:
forever start -c ./python_virtualenv python_script.py
I was having problems executing a Python script with custom logging paths; after some trial and error I got it to work with the following command:
forever start -c python -l /tmp/forever.log -o /tmp/out.log -e /tmp/error.log python_script.py
Tell me if it worked for you
Using Python 3 with Flask under forever.js, here is my build process:
python3 -m venv venv
source venv/bin/activate
sudo -H pip3 install -r requirements.txt
FLASK_APP=app.py forever start -c python3 app.py --port=5001