Why can't my GitLab pipeline find Python packages installed in the Dockerfile?

My file structure is as follows:
Dockerfile
.gitlab-ci.yml
Here is my Dockerfile:
FROM python:3
RUN apt-get update && apt-get install -y make
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install pygdbmi
RUN pip3 install pyyaml
RUN pip3 install Path
And here is my .gitlab-ci.yml file:
test-job:
  stage: test
  image: runners:test-harness
  script:
    - cd test-harness
    # - pip3 install pygdbmi
    # - pip3 install pyyaml
    - python3 main.py
  artifacts:
    untracked: false
    when: on_success
    expire_in: "30 days"
    paths:
      - test-harness/script.log
For some reason the pip3 install commands in the Dockerfile don't seem to be working, as I get this error:
python3 main.py
Traceback (most recent call last):
  File "/builds/username/test-harness/main.py", line 6, in <module>
    from pygdbmi.gdbcontroller import GdbController
ModuleNotFoundError: No module named 'pygdbmi'
When I uncomment the two commented lines in .gitlab-ci.yml:
# - pip3 install pygdbmi
# - pip3 install pyyaml
It works fine, but ideally I want those two packages to be installed in the Dockerfile, not in the .gitlab-ci.yml pipeline stage.
I've tried changing the WORKDIR as well as USER and it doesn't seem to have any effect.
Any ideas/solutions?
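One thing worth checking, though the question doesn't say how runners:test-harness gets built: the job only sees the packages if the image under that tag was actually rebuilt from this Dockerfile and pushed to wherever the runner pulls it from. A minimal sketch, assuming the image lives in a registry at registry.example.com (a hypothetical host):

docker build -t registry.example.com/runners:test-harness .
docker push registry.example.com/runners:test-harness
# confirm the package is baked into the image before blaming the pipeline
docker run --rm registry.example.com/runners:test-harness python3 -c "import pygdbmi"

If the import succeeds here but still fails in CI, the runner is most likely resolving runners:test-harness to a stale or different image.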

Related

Can't install Python modules in GitLab CI

I can't get Gst (or seemingly any Python module) to install inside GitLab CI.
Here is my .gitlab-ci.yml file:
image: "python:3.7"

before_script:
  - apt-get -qq update
  - apt-get -qq install -y python3-dev python3-pip libgirepository1.0-dev
  - python3 --version
  - python3 -m pip install --upgrade pip
  - pip3 install -r requirements.txt

test:
  script:
    - pip3 install flake8  # you can also use tox
    - flake8 --max-line-length=254 --extend-ignore=F401,E402 matinee.py

run:
  script:
    - ./matinee.py -h
But no matter what I tried, the closest I got was:
$ ./matinee.py -h
Traceback (most recent call last):
  File "./matinee.py", line 6, in <module>
    gi.require_version('Gst', '1.0')
  File "/usr/local/lib/python3.7/site-packages/gi/__init__.py", line 126, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gst not available
Before that it complained that gi didn't exist, so basically it seems like Python modules are not installable this way.
It's been a week now, and frankly I'm giving up on setting up GitLab CI for Python; I have Node.js projects with tons of deps installing without a glitch. This post is my last Hail Mary attempt :|
Naturally, everything works perfectly locally (Ubuntu 22).
My requirements file
# cat requirements.txt
PyGObject==3.42.1
What I tried
image: "python:3.7"
image: "python:latest"
image: "ubuntu:latest"
venv
And everything on this page. Please help me understand why I can't install Python modules in GitLab CI.
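For what it's worth, the ValueError actually suggests pip is working: PyGObject (gi) imported fine, but gi.require_version('Gst', '1.0') looks for GStreamer introspection data, which comes from system packages rather than PyPI. A sketch of a before_script that adds them, assuming a Debian-based image such as python:3.7; the gir1.2-gstreamer-1.0 and gstreamer1.0-plugins-base names are the Debian ones and may need adjusting for other distributions:

before_script:
  - apt-get -qq update
  # libcairo2-dev is needed to build pycairo, which PyGObject depends on;
  # gir1.2-gstreamer-1.0 provides the Gst-1.0 typelib that require_version looks up
  - apt-get -qq install -y python3-dev python3-pip libgirepository1.0-dev libcairo2-dev gir1.2-gstreamer-1.0 gstreamer1.0-plugins-base
  - python3 -m pip install --upgrade pip
  - pip3 install -r requirements.txt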

How can we use OpenCV in a multi-stage Docker image?

I recently learned about the concept of building Docker images from a multi-stage Dockerfile.
I have been trying simple examples of multi-stage Dockerfiles, and they worked fine. However, when I tried implementing the concept for my own application, I ran into some issues.
My application is about object detection in videos, so I use Python and TensorFlow.
Here is my Dockerfile:
FROM python:3-slim AS base
WORKDIR /objectDetector
COPY detect_objects.py .
COPY detector.py .
COPY requirements.txt .
ADD data /objectDetector/data/
ADD models /objectDetector/models/
RUN apt-get update && \
    apt-get install -y protobuf-compiler && \
    apt-get install -y ffmpeg libsm6 libxext6 && \
    apt-get install -y gcc
RUN python3 -m pip install --upgrade pip
RUN pip3 install tensorflow-cpu==2.9.1
RUN pip3 install opencv-python==4.6.0.66
RUN pip3 install opencv-contrib-python
WORKDIR /objectDetector/models/research
RUN protoc object_detection/protos/*.proto --python_out=.
RUN cp object_detection/packages/tf2/setup.py .
RUN python -m pip install .
RUN python object_detection/builders/model_builder_tf2_test.py
WORKDIR /objectDetector/models/research
RUN pip3 install wheel && pip3 wheel . --wheel-dir=./wheels

FROM python:3-slim
RUN python3 -m pip install --upgrade pip
COPY --from=base /objectDetector /objectDetector
WORKDIR /objectDetector
RUN pip3 install --no-index --find-links=/objectDetector/models/research/wheels -r requirements.txt
When I try to run my application in the container built from the final stage, I receive the following error:
root@3f062f9a5d64:/objectDetector# python detect_objects.py
Traceback (most recent call last):
  File "/objectDetector/detect_objects.py", line 3, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'
So per my understanding, it seems that opencv-python is not successfully carried over from the first stage to the second.
I have been searching around, and I found some good blogs and questions tackling the issue of multi-stage Dockerfiles, specifically for Python libraries. However, it seems I am missing something here.
Here are some references that I have been following to solve the issue:
How do I reduce a python (docker) image size using a multi-stage build?
Multi-stage build usage for cuda,cudnn,opencv and ffmpeg #806
So my question is: How can we use opencv in a multistage docker image?
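A hedged reading, since the question doesn't show requirements.txt: pip installs packages into /usr/local/lib/python3.x/site-packages, not under /objectDetector, so COPY --from=base /objectDetector never carries cv2 across; the final pip install can only install what requirements.txt names and what the wheel directory actually contains. One sketch that stays within the wheel approach, assuming requirements.txt pins opencv-python and the other imports; libgl1 and libglib2.0-0 are the Debian libraries cv2 usually links against:

# builder stage: also collect wheels for the application's own requirements
RUN pip3 wheel --wheel-dir=./wheels -r /objectDetector/requirements.txt

FROM python:3-slim
# the slim image lacks the shared libraries cv2 loads at import time
RUN apt-get update && apt-get install -y --no-install-recommends libgl1 libglib2.0-0 && rm -rf /var/lib/apt/lists/*
COPY --from=base /objectDetector /objectDetector
WORKDIR /objectDetector
RUN pip3 install --no-index --find-links=/objectDetector/models/research/wheels -r requirements.txt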

AWS EB opencv-python "web: from .cv2 import"

In AWS EB I deployed an application (Django) and there was no error during that process. Health is green and OK, but the page gives an Internal Server Error, so I checked the logs and saw the error below.
... web: from .cv2 import
... web: ImportError: libGL.so.1: cannot open shared object file: No such file or directory
While installing requirements.txt during deployment, OpenCV must have been installed, because it includes opencv-python==4.5.5.64.
So I'm not quite sure what the above error is pointing at.
And in helpers.py, this is how I import it:
import requests
import cv2
libGL.so is provided by the package libgl1; pip3 install opencv-python is not sufficient here.
Connect to the AWS instance via SSH and run:
apt-get update && apt-get install libgl1
Or even better, consider using Docker containers for the project and add the installation commands to the Dockerfile.
Also, as https://stackoverflow.com/a/66473309/12416058 suggests, the package python3-opencv includes all system dependencies of OpenCV, so installing it may prevent further errors.
To install python3-opencv:
apt-get update && apt-get install -y python3-opencv
pip install -r requirements.txt
To install it in a Dockerfile:
RUN apt-get update && apt-get install -y python3-opencv
RUN pip install -r requirements.txt
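A further option, not mentioned in the answer above: for server-side code that never opens a window, the headless OpenCV wheel drops the GUI modules that link against libGL, so no extra apt packages are needed at all. Assuming the project's pinned version:

# requirements.txt: the headless build has no libGL dependency
opencv-python-headless==4.5.5.64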

Docker not installing requirements during build

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install
RUN apt install python3.9 -y
RUN apt-get install -y git
RUN apt-get -y install python3-pip
RUN git clone https://ACCESS_TOKEN#github.com/username/repo
WORKDIR ./appy/
RUN pip install -r requirements.txt
CMD ["python3.9", "main.py"]
Hey, for some reason I'm getting:
Traceback (most recent call last):
  File "/appy/main.py", line 7, in <module>
    import disnake
ModuleNotFoundError: No module named 'disnake'
requirements.txt
disnake=2.4.0
psutil=5.8.0
motor=2.5.1
aiohttp=3.7.4.post0
It appears that the packages in requirements.txt are not being installed properly. Any suggestions as to what could be causing this? When building the container there don't seem to be any errors, and I can see in the build output that all packages are installed, including disnake.
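Two hedged observations, neither confirmed by the question. First, pip rejects single-'=' version pins ('disnake=2.4.0' fails with an invalid-requirement error suggesting '=='), so if the build log really shows the packages installing, the file in the image may not be the one shown here. Second, on ubuntu:latest the python3-pip package installs pip for the distribution's default python3, which is not 3.9, so a plain pip install can land the packages in a different interpreter's site-packages than the one CMD runs. A sketch that ties everything to the interpreter running main.py:

# requirements.txt: pip requires '==' for exact version pins
disnake==2.4.0
psutil==5.8.0
motor==2.5.1
aiohttp==3.7.4.post0

# Dockerfile: install with the same interpreter that CMD uses
RUN python3.9 -m pip install -r requirements.txt
CMD ["python3.9", "main.py"]

If python3.9 -m pip complains that pip is missing for that interpreter, it can be bootstrapped with the official get-pip.py script.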

CircleCI ImportError: Failed to import test module, Python 3.7.0

I'm trying to set up CircleCI for the first time for my application. It's a Python 3.7.0-based app with a few tests. The app builds just fine, but fails when running the test job. Locally the tests work fine, so I assume I'm missing some CircleCI configuration?
This is my YAML:
version: 2.0
jobs:
  build:
    docker:
      - image: circleci/python:3.7.0
    steps:
      - checkout
      - run:
          name: "Run tests"
          command: python -m unittest
This is the error:
======================================================================
ERROR: tests.test_auth (unittest.loader._FailedTest)
ImportError: Failed to import test module: tests.test_auth
Traceback (most recent call last):
File "/usr/local/lib/python3.7/unittest/loader.py", line 434, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/lib/python3.7/unittest/loader.py", line 375, in _get_module_from_name
import(name)
File "/home/circleci/project/tests/test_auth.py", line 5, in
from werkzeug.datastructures import MultiDict
ModuleNotFoundError: No module named 'werkzeug'
What am I missing?
EDIT:
I have now added pip install -r requirements.txt, but now I get:
Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/usr/local/lib/python3.7/site-packages/MarkupSafe-1.1.1.dist-info'
EDIT:
In addition to the answer, here is the complete working YAML configuration:
version: 2.0
jobs:
  build:
    docker:
      - image: circleci/python:3.7.0
    steps:
      - checkout
      - run:
          name: "Install dependencies"
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install --upgrade pip
            pip install --no-cache-dir -r requirements.txt
      - run:
          name: "Run tests"
          command: |
            . venv/bin/activate
            python -m unittest
It simply means that the dependency 'werkzeug' is not installed. You might need to install additional required packages separately.
Consider adding the dependency installation to the Dockerfile, something like below:
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
If you get permission-denied issues, then your tests are started by a user who has no privileges to manage Python packages, but that's unlikely to be the case.
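One more detail that may explain the permission error, which the answer above stops short of: the circleci/* convenience images run as the non-root circleci user, so a bare pip install into /usr/local/lib/python3.7/site-packages is denied. Besides the venv approach shown in the question's edit, a user-level install is a common workaround; a minimal sketch of that step:

- run:
    name: "Install dependencies"
    command: pip install --user --no-cache-dir -r requirements.txt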
