I have a requirements.txt file containing, amongst others:
Flask-RQ==0.2
-e git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo
When I try to build a Docker container using Docker Compose, it downloads both packages and installs them both, but when I do a pip freeze there is no sign of the -e package. When I try to run the app, it looks as if this package hasn't been installed. Here's the relevant output from the build:
Collecting Flask-RQ==0.2 (from -r requirements.txt (line 3))
Downloading Flask-RQ-0.2.tar.gz
Obtaining repo from git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo (from -r requirements.txt (line 4))
Cloning https://token:x-oauth-basic@github.com/user/repo.git to ./src/repo
And here's my Dockerfile:
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
I find this situation very strange and would appreciate any help.
I ran into a similar issue, and one possible way that the problem can appear is from:
WORKDIR /usr/src/app
being set before pip install. pip will create the src/ directory (where the editable package's source is checked out) inside the WORKDIR. On its own this shouldn't be an issue, since your app files, when copied over, should not overwrite the src/ directory.
However, you might be mounting a volume to /usr/src/app. When you do that, you'll overwrite the /usr/src/app/src directory and then your package will not be found.
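For example, a docker-compose.yml service along these lines (the service name and mount are hypothetical, but typical) mounts the project directory over /usr/src/app and hides the src/ directory that pip created inside the image:

web:
  build: .
  volumes:
    # the host directory shadows everything the image put in /usr/src/app,
    # including the src/ directory with the editable package's source
    - .:/usr/src/app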
So one fix is to move WORKDIR after the pip install. So your Dockerfile will look like:
FROM python:2.7
RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt
COPY . /usr/src/app
WORKDIR /usr/src/app
This fixed it for me. Hopefully it'll work for you.
@mikexstudios is correct: this happens because pip stores the package source in /usr/src/app/src, but you're mounting a local directory on top of it, so Python can't find the package source.
Rather than changing the position of WORKDIR, I solved it by changing the pip command to:
pip install -r requirements.txt --src /usr/local/src
Either approach should work.
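For completeness, here is a sketch of the original Dockerfile with that change applied (WORKDIR left where it was):

FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
# keep editable-package sources outside the directory that gets mounted over
RUN pip install -r requirements.txt --src /usr/local/src
COPY . /usr/src/app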
If you are receiving a similar error when installing a git repo from a requirements file in a Dockerized container, you may have forgotten to install git.
Here is the error I received:
Downloading/unpacking CMRESHandler from
git+git://github.com/zigius/python-elasticsearch-logger.git (from -r
/home/ubuntu/requirements.txt (line 5))
Cloning git://github.com/zigius/python-elasticsearch-logger.git to
/tmp/pip_build_root/CMRESHandler
Cleaning up...
Cannot find command 'git'
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c useradd ubuntu -b /home && echo
"ubuntu ALL = NOPASSWD: ALL" >> /etc/sudoers &&
chown -R ubuntu:ubuntu /home/ubuntu && pip install -r /home/ubuntu/requirements.txt' returned a non-zero code: 1
Here is an example Dockerfile that installs git and then installs all requirements:
FROM python:3.5-slim
RUN apt-get update && apt-get install -y --no-install-recommends git
ADD . /code
WORKDIR /code
RUN pip install --upgrade pip setuptools && pip install -r requirements.txt
Now you can use git packages in your requirements file in a Dockerized environment.
Related
I'm currently trying to install python packages from a private gitlab repo. Unfortunately, I get problems with the credentials. Is there any way to install this package without writing my credentials into the Dockerfile or adding my personal ssh key into it?
Dockerfile:
FROM python:3.9.12-buster AS production
RUN apt-get update && apt-get install -y git
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
requirements.txt:
fastapi
uvicorn
cycler~=0.10.0
networkx
python-multipart
git+https://gitlab.private.net/group/private-repo.git@commit_hash#egg=foo
Error message:
#10 3.760 Cloning https://gitlab.private.net/group/private-repo.git (to revision commit_hash) to /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 3.769 Running command git clone --filter=blob:none --quiet https://gitlab.private.net/group/private-repo.git /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 4.039 fatal: could not read Username for 'https://gitlab.private.net/group/private-repo.git': No such device or address
#10 4.060 error: subprocess-exited-with-error
Generally speaking, you can use multi-stage docker builds to make sure your credentials don't stay in the image.
In your case, you might do something like this:
FROM python:3.9.12-buster as download
RUN apt-get update && apt-get install -y git
RUN pip install --upgrade pip wheel
ARG GIT_USERNAME
ARG GIT_PASSWORD
WORKDIR /build
COPY requirements.txt .
# add password to requirements file
RUN sed -i -E "s|gitlab.private.net|$GIT_USERNAME:$GIT_PASSWORD@gitlab.private.net|" requirements.txt
# download dependencies and build wheels to /build/dist
RUN python -m pip wheel -w /build/dist -r requirements.txt
FROM python:3.9.12-buster as production
WORKDIR /app
COPY --from=download /build/dist /wheelhouse
# install dependencies from the wheels created in previous build stage
RUN pip install --no-index /wheelhouse/*.whl
COPY . .
# ... the rest of your dockerfile
In GitLab CI, you might use the build command like this:
script:
# ...
- docker build --build-arg GIT_USERNAME=gitlab-ci-token --build-arg GIT_PASSWORD=$CI_JOB_TOKEN -t $CI_REGISTRY_IMAGE .
Then your image will be built and the final image won't contain your credentials. It will also be smaller since you don't have to install git :)
As a side note, you can simplify this somewhat by using the GitLab PyPI package registry.
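For instance, once the package is published there, the git+https line in requirements.txt can be replaced with a plain package name and pip pointed at the project's package index. A sketch, with the package name, project ID, and token as placeholders:

# foo, <project_id>, and the token values are placeholders for your own project
pip install foo --index-url https://<personal_access_token_name>:<personal_access_token>@gitlab.private.net/api/v4/projects/<project_id>/packages/pypi/simple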
So I also had to install my dependencies from a private package repository for my Python project.
This was the Dockerfile I used for building my project.
# the original omitted its FROM line; a Debian-based Python image is assumed here
FROM python:3.9-buster
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gettext gcc libpq-dev python3-dev build-essential python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
RUN pip config set global.extra-index-url https://<personal_access_token_name>:<personal_access_token>@gitlab.com/simple/
# you need to configure pip to pull packages from the remote private repository.
# for GitLab you need a personal access token with read permissions to access them
COPY . /code/
RUN --mount=type=cache,target=/root/.cache pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache pip install -r /code/webapi/requirements.txt
WORKDIR /code/webapi
ENTRYPOINT /code/webapi/entrypoint.sh
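Note that the RUN --mount=type=cache syntax requires BuildKit. If your Docker version doesn't enable BuildKit by default, you can turn it on per build, e.g. (the image tag here is just an example):

DOCKER_BUILDKIT=1 docker build -t webapi .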
I have this Dockerfile:
FROM python:3.8-slim
WORKDIR /app
COPY . .
RUN apt-get update
RUN apt-get install -y python3 python3-pip python3-venv
RUN pip freeze > requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python3", "main.py"]
Everything works fine until this line:
RUN pip install --no-cache-dir -r requirements.txt
After running docker run --rm -it name bash and then pip install -r requirements.txt inside the container, I found this error:
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting
behaviour with the system package manager. It is recommended to use a virtual environment
instead: https://pip.pypa.io/warnings/venv
Here I found a solution (which didn't work for me) saying that it's possible to resolve this just by creating a new user, but that doesn't seem to be an optimal solution. How can I fix this?
In this case the problem was the version of the images. Using this Dockerfile I was able to fix it:
FROM python:3.9.3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
PS: I don't know for sure whether this was the cause, but this image has the same Python version that I have on my computer, which could have an impact on the dependencies.
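If you'd rather address the warning itself, a common pattern is to create a virtual environment inside the image and put it on the PATH, so pip no longer installs into the system Python as root. A minimal sketch based on the Dockerfile above (the /opt/venv path is just a convention):

FROM python:3.9.3
WORKDIR /app
# create the venv and make it the default environment for all later instructions
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
# pip now installs into /opt/venv instead of the system Python
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "main.py"]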
I am trying to install a package that is not on PyPI, i.e. from GitHub. Adding the repo as a git+url to the requirements file gives
ERROR: Error [Errno 2] No such file or directory: 'git' while executing command git clone -q https://github.com/Rapptz/discord-ext-menus /tmp/pip-req-build-147rct22
ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH?
Installing the packages is done with
RUN python3 -m pip install -r requirements.txt
as specified in the docs
I also tried the solutions from this, but the answers mess up my other packages.
The dockerfile is almost directly from the docs
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . .
CMD [ "python3", "main.py"]
requirements.txt
asyncpg==0.21.0
git+https://github.com/Rapptz/discord-ext-menus
discord.py==1.7.0
pre-commit==2.10.1
pyclean==2.0.0
pylint==2.6.0
python-dotenv==0.15.0
As the error tells us, we simply have to install git so that pip can clone the repo and run the setup file.
We can install git with
RUN apt-get update && apt-get install -y git
We also have to build from a Python image; the answer above works with python:3.8-slim-buster.
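Putting it together with the Dockerfile from the question, a sketch might look like this:

FROM python:3.8-slim-buster
# git is needed so pip can clone the git+https requirement
RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . .
CMD [ "python3", "main.py"]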
I am having lots of problems creating a Docker container using python:3.6-alpine for Plotly. Plotly also uses Pandas and NumPy. When I run my Dockerfile below, the "RUN venv/bin/pip install -r requirements.txt" step fails. Does anyone have recommendations? Am I missing requirements?
FROM python:3.6-alpine
RUN adduser -D visualdata
RUN pip install --upgrade pip
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn
#RUN venv/bin/pip install python3-pymysql
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]
If you look at the official repository of the Python Docker image, there is a Dockerfile example that illustrates the pip step:
RUN pip install --no-cache-dir -r requirements.txt
You should be able to use pip directly instead of venv/bin/pip.
You do not really need to use a virtualenv in a docker container if you are only running one application inside. The container already provides its own isolated environment.
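For example, the question's Dockerfile could be simplified along these lines, installing straight into the image's Python (a sketch keeping the original steps):

FROM python:3.6-alpine
RUN adduser -D visualdata
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
# install directly into the container's Python; no venv needed
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt gunicorn
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]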
We are trying to create a Docker container for a Python application. The Dockerfile installs the dependencies using "pip install" and looks like this:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y git wget python3-pip
RUN mkdir /app
COPY . /app
RUN pip3 install asn1crypto
RUN pip3 install cffi==1.10.0
RUN pip3 install click==6.7
RUN pip3 install conda==4.3.16
RUN pip3 install Flask==0.12.2
RUN pip3 install Flask-SSLify==0.1.5
RUN pip3 install Flask-SSLify==0.1.5
RUN pip3 install flask-restful==0.3.6
WORKDIR /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
The image gets created successfully; the issue is the rebuild time.
If nothing is changed in the Dockerfile above, the rebuild is quick.
If a line after the pip install commands is changed in the Dockerfile, the Docker daemon still runs all the pip install commands, downloading all the packages (though not installing them).
Is there a way to optimize the rebuild?
Thanks
Below is what I would do to the Dockerfile for optimization:
FROM ubuntu:latest
RUN apt-get update -y && apt-get install -y \
git \
wget \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
Reduce the number of layers by combining interdependent commands into a single one; this also helps reduce the image size.
Put the COPY of your source code near the end, since a routine source code change invalidates the cache of every layer that follows it.
Use a single requirements.txt file for installation through pip, and keep its installation a separate step, so that a normal source code change doesn't force package installation on every build.
Always clean up intermediate artifacts which are not required in the final image.
Ref- https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
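As a further optimization (an addition, not part of the answer above), BuildKit's cache mounts keep pip's download cache between rebuilds, so even when the requirements layer is invalidated the packages are not downloaded again. A sketch of the changed line:

# requires BuildKit (e.g. DOCKER_BUILDKIT=1 docker build .);
# the cache mount persists pip's download cache across builds
RUN --mount=type=cache,target=/root/.cache/pip pip3 install -r requirements.txt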