Installing python package from private gitlab repo in Dockerfile - python

I'm currently trying to install Python packages from a private GitLab repo. Unfortunately, I run into problems with the credentials. Is there any way to install this package without writing my credentials into the Dockerfile or adding my personal SSH key to it?
Dockerfile:
FROM python:3.9.12-buster AS production
RUN apt-get update && apt-get install -y git
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
requirements.txt:
fastapi
uvicorn
cycler~=0.10.0
networkx
python-multipart
git+https://gitlab.private.net/group/private-repo.git@commit_hash#egg=foo
Error message:
#10 3.760 Cloning https://gitlab.private.net/group/private-repo.git (to revision commit_hash) to /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 3.769 Running command git clone --filter=blob:none --quiet https://gitlab.private.net/group/private-repo.git /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 4.039 fatal: could not read Username for 'https://gitlab.private.net/group/private-repo.git': No such device or address
#10 4.060 error: subprocess-exited-with-error

Generally speaking, you can use multi-stage docker builds to make sure your credentials don't stay in the image.
In your case, you might do something like this:
FROM python:3.9.12-buster as download
RUN apt-get update && apt-get install -y git
RUN pip install --upgrade pip wheel
ARG GIT_USERNAME
ARG GIT_PASSWORD
WORKDIR /build
COPY requirements.txt .
# add password to requirements file
RUN sed -i -E "s|gitlab.private.net|$GIT_USERNAME:$GIT_PASSWORD@gitlab.private.net|" requirements.txt
# download dependencies and build wheels to /build/dist
RUN python -m pip wheel -w /build/dist -r requirements.txt
FROM python:3.9.12-buster as production
WORKDIR /app
COPY --from=download /build/dist /wheelhouse
# install dependencies from the wheels created in previous build stage
RUN pip install --no-index /wheelhouse/*.whl
COPY . .
# ... the rest of your dockerfile
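Locally, you would then pass the credentials as build arguments (the values here are placeholders):
docker build --build-arg GIT_USERNAME=myuser --build-arg GIT_PASSWORD=my-access-token -t myimage .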
In GitLab CI, you might use the build command like this:
script:
# ...
- docker build --build-arg GIT_USERNAME=gitlab-ci-token --build-arg GIT_PASSWORD=$CI_JOB_TOKEN -t $CI_REGISTRY_IMAGE .
Then your image will be built and the final image won't contain your credentials. It will also be smaller since you don't have to install git :)
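If your Docker supports BuildKit, a secret mount is another way to keep credentials out of the image entirely. A minimal sketch, assuming a ~/.netrc file on the build host containing a line like machine gitlab.private.net login gitlab-ci-token password <token> (git's HTTPS transport reads .netrc):
# syntax=docker/dockerfile:1
FROM python:3.9.12-buster
RUN apt-get update && apt-get install -y git
COPY ./requirements.txt /app/requirements.txt
# the secret is only visible during this RUN step and never ends up in an image layer
RUN --mount=type=secret,id=netrc,target=/root/.netrc \
    pip install -r /app/requirements.txt
Build it with: docker build --secret id=netrc,src=$HOME/.netrc -t myimage .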
As a side note, you can simplify this somewhat by using the GitLab PyPI package registry.
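For example, once the package is published to the registry, installing it becomes a one-liner (a sketch; the project ID 1234 is a placeholder):
pip install foo --extra-index-url "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.private.net/api/v4/projects/1234/packages/pypi/simple"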

So I also had to install my dependencies from a private package repository for my Python project.
This was the Dockerfile I used for building my project.
# NOTE: the original answer did not show a base image; assuming a Debian-based Python image here
FROM python:3-slim
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gettext gcc libpq-dev python3-dev build-essential python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
# configure pip to pull packages from the remote private repository;
# for GitLab you need a personal access token with read permissions
RUN pip config set global.extra-index-url https://<personal_access_token_name>:<personal_access_token>@gitlab.com/simple/
COPY . /code/
RUN --mount=type=cache,target=/root/.cache pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache pip install -r /code/webapi/requirements.txt
WORKDIR /code/webapi
ENTRYPOINT /code/webapi/entrypoint.sh
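Note that the --mount=type=cache lines require BuildKit, so the build command would look something like this (image name is a placeholder):
DOCKER_BUILDKIT=1 docker build -t webapi .
Also be aware that pip config set writes the token into /root/.config/pip/pip.conf inside an image layer, so anyone with the image can read it; the multi-stage or secret-mount approaches above avoid that.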

Related

Problem installing pika (RabbitMQ SDK in Python) in Docker: no module named 'pika'

I am trying to install the RabbitMQ (pika) driver in my Python container; in local deployment there is no problem.
FROM ubuntu:20.04
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN apt-get update && apt-get -y install gcc python3.7 python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python","index.py"]
This is my requirements.txt file:
requests
telethon
Flask
flask-mongoengine
Flask_JWT_Extended
Flask_Bcrypt
flask-restful
flask-cors
jsonschema
werkzeug
pandas
xlrd
Kanpai
pika
Flask-APScheduler
The docker build steps complete with no errors and install all the dependencies, but when I try to run my container it crashes with this error:
no module named 'pika'
Installing python3.7 will not work here; you are still using Python 3.8, because the pip3 command is tied to the default python3 and your CMD will also start it. I suggest you use the python:3.7 base image.
so try this:
FROM python:3.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN apt-get update && apt-get -y install gcc
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["python","index.py"]

Multi-stage Dockerfile not working for python

Currently I am creating a virtual environment in the first stage.
Running pip install -r requirements.txt installs executables into the /venv/bin dir.
In the second stage I copy the /venv/bin dir, but on running the Python app I get a "module not found" error, i.e. I need to run pip install -r requirements.txt again for the app to work.
The application runs on Python 2.7 and some of the dependencies require a compiler to build. Those dependencies also fail with the Alpine image's compiler, and only work with the Ubuntu compiler or the official python:2.7 image (which in turn uses Debian).
Am I missing some command in the second stage that would let me use the copied dependencies instead of installing them again?
FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --upgrade pip
RUN python3 -m venv /venv
COPY ./requirements.txt /project/requirements/
RUN /venv/bin/pip install -r /project/requirements/requirements.txt
COPY . /venv/bin
FROM python:2.7-slim AS release
COPY --from=build /venv /venv
WORKDIR /venv/bin
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
# RUN pip install -r requirements.txt
RUN cp settings.py.sample settings.py
CMD ["/venv/bin/python3", "-m", "main.py"]
I am trying to avoid running pip install -r requirements.txt in the second stage to reduce the image size, which is not happening currently.
Only copying the bin dir isn't enough; for example, packages are installed in lib/pythonX.X/site-packages and headers under include. I'd just copy the whole venv directory, as sketched below. You can also run pip install with --no-cache-dir to avoid saving the wheel archives.
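A minimal sketch of that (Python 2.7 has no built-in venv module, so this uses the virtualenv package; the entry point name is a placeholder):
FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --no-cache-dir virtualenv && virtualenv /venv
COPY ./requirements.txt /project/requirements/
RUN /venv/bin/pip install --no-cache-dir -r /project/requirements/requirements.txt

FROM python:2.7-slim AS release
# copy the entire venv: bin/ alone misses lib/python2.7/site-packages
COPY --from=build /venv /venv
WORKDIR /app
COPY . .
CMD ["/venv/bin/python", "main.py"]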
Separately, make sure the first stage is named so it can be referenced, by inserting this before everything else:
FROM yourimage:tag AS build

Docker re-build time

We are trying to create a Docker container for a Python application. The Dockerfile installs dependencies using pip install. The Dockerfile looks like:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y git wget python3-pip
RUN mkdir /app
COPY . /app
RUN pip3 install asn1crypto
RUN pip3 install cffi==1.10.0
RUN pip3 install click==6.7
RUN pip3 install conda==4.3.16
RUN pip3 install Flask==0.12.2
RUN pip3 install Flask-SSLify==0.1.5
RUN pip3 install flask-restful==0.3.6
WORKDIR /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
The image gets created successfully; the issue is the rebuild time.
If nothing is changed in the Dockerfile above, the rebuild reuses the cached layers and is quick.
If a line in the Dockerfile after the pip installs is changed, the Docker daemon still runs all the pip install commands, downloading all the packages (though not installing them).
Is there a way to optimize the rebuild?
Thx
Below is what I would do with the Dockerfile at the moment for optimization:
FROM ubuntu:latest
RUN apt-get update -y && apt-get install -y \
git \
wget \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
Reduce the number of layers by combining interdependent commands into a single one; this also helps reduce the image size.
Place the COPY of your source code near the end, since a regular source code change invalidates the caching of every subsequent layer.
Use a single requirements.txt file for installation through pip, and keep it as a separate step so that a normal source code change doesn't force package installation on every build.
Always clean up intermediate artifacts that are not required in the final image.
Ref: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
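To check that the cache is actually reused after these changes, rebuild twice and compare (image name is a placeholder):
docker build -t myapp .   # first build: all steps run
docker build -t myapp .   # unchanged rebuild: steps should report "Using cache" (or "CACHED" under BuildKit)
docker history myapp      # shows how much size each layer adds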

Flask application on Docker with Let's Encrypt

I want to create a Flask application in a Docker instance that has HTTPS enabled, using the Let's Encrypt method of obtaining an SSL cert. The cert also needs to be auto-renewed every so often (every 3 months, I think), which is already handled on my server, but the Flask app needs access to the files as well!
What would I need to modify on this Docker file to enable Let's encrypt?
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["web/app.py"]
You can use the Docker volume feature:
A volume is a directory mounted between the host machine and the container.
There are two ways to create a volume with Docker:
You can declare a VOLUME instruction in the Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
VOLUME /certdir
ENTRYPOINT ["python"]
CMD ["web/app.py"]
This will create a directory named after the generated volume ID inside /var/lib/docker/volumes.
This solution is more useful when you want to share something from the container to the host, but is not very practical the other way around.
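You can find out where such an anonymous volume lives on the host by inspecting the container (container name is a placeholder):
docker inspect -f '{{ json .Mounts }}' mycontainer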
You can use the -v flag on docker create or docker run to add a volume to the container:
docker run -v "$(pwd)/certdir:/certdir" your-image web/app.py
Where /certdir is the directory inside the container and ./certdir is the one on the host inside your project directory (bind mounts need an absolute host path, hence the $(pwd); your-image is a placeholder for your image name).
This solution will work, since the host directory is mounted inside the container at the defined location. But unless you document it clearly or provide an easy-to-use alias for your docker run/create command, other users will not know how to define it.
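For the Let's Encrypt case specifically, note that certbot keeps symlinks in /etc/letsencrypt/live/<domain> pointing into ../archive, so it's safest to mount the whole tree read-only (image name is a placeholder; paths assume a standard certbot setup):
docker run -d -v /etc/letsencrypt:/etc/letsencrypt:ro my-flask-image
Renewed certs then become visible inside the container automatically, though the app may need a reload to pick them up.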
PS: quick tip: put your RUN commands inside a single statement:
```
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y \
&& apt-get install -y python-pip python-dev build-essential \
&& pip install --upgrade pip
COPY . /app
```
The advantage is that Docker will create only one layer for the installation of dependencies instead of three (see the documentation).

Pip install -e packages don't appear in Docker

I have a requirements.txt file containing, amongst others:
Flask-RQ==0.2
-e git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo
When I try to build a Docker container using Docker Compose, it downloads both packages and installs them both, but when I do a pip freeze there is no sign of the -e package. When I try to run the app, it looks as if this package hasn't been installed. Here's the relevant output from the build:
Collecting Flask-RQ==0.2 (from -r requirements.txt (line 3))
Downloading Flask-RQ-0.2.tar.gz
Obtaining repo from git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo (from -r requirements.txt (line 4))
Cloning https://token:x-oauth-basic@github.com/user/repo.git to ./src/repo
And here's my Dockerfile:
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
I find this situation very strange and would appreciate any help.
I ran into a similar issue, and one possible way the problem can appear is from:
WORKDIR /usr/src/app
being set before pip install. pip will create the src/ directory (where the package is checked out) inside the WORKDIR. All of this shouldn't be an issue, since your app files, when copied over, should not overwrite the src/ directory.
However, you might be mounting a volume to /usr/src/app. When you do that, you'll overwrite /usr/src/app/src and then your package will not be found.
One fix is to move WORKDIR after the pip install, so your Dockerfile will look like:
FROM python:2.7
RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt
COPY . /usr/src/app
WORKDIR /usr/src/app
This fixed it for me. Hopefully it'll work for you.
@mikexstudios is correct; this happens because pip stores the package source in /usr/src/app/src, but you're mounting a local directory over the top of it, so Python can't find the package source.
Rather than changing the position of WORKDIR, I solved it by changing the pip command to:
pip install -r requirements.txt --src /usr/local/src
Either approach should work.
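For completeness, a minimal sketch of the second approach in context (base image as in the question):
FROM python:2.7
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
# --src keeps editable (-e) checkouts in /usr/local/src, outside the
# directory a development volume is typically mounted over
RUN pip install -r requirements.txt --src /usr/local/src
COPY . /usr/src/app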
If you are receiving a similar error when installing a git repo from a requirements file in a Dockerized container, you may have forgotten to install git.
Here is the error I received:
Downloading/unpacking CMRESHandler from git+git://github.com/zigius/python-elasticsearch-logger.git (from -r /home/ubuntu/requirements.txt (line 5))
Cloning git://github.com/zigius/python-elasticsearch-logger.git to /tmp/pip_build_root/CMRESHandler
Cleaning up...
Cannot find command 'git'
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c useradd ubuntu -b /home && echo "ubuntu ALL = NOPASSWD: ALL" >> /etc/sudoers && chown -R ubuntu:ubuntu /home/ubuntu && pip install -r /home/ubuntu/requirements.txt' returned a non-zero code: 1
Here is an example Dockerfile that installs git and then installs all requirements:
FROM python:3.5-slim
RUN apt-get update && apt-get install -y --no-install-recommends git
ADD . /code
WORKDIR /code
RUN pip install --upgrade pip setuptools && pip install -r /code/requirements.txt
Now you can use git packages in your requirements file in a Dockerized environment.
