aws eb opencv-python "web: from .cv2 import"

I deployed a Django application on AWS Elastic Beanstalk, and there were no errors during that process. Health is green and OK, but the page is giving an Internal Server Error, so I checked the logs and saw the error below.
... web: from .cv2 import
... web: ImportError: libGL.so.1: cannot open shared object file: No such file or directory
opencv must have been installed while requirements.txt was being installed during deployment, because it includes opencv-python==4.5.5.64,
so I am not quite sure what the above error is pointing at.
This is how I import it in helpers.py:
import requests
import cv2

libGL.so is provided by the package libgl1; pip3 install opencv-python is not sufficient here.
Connect to the AWS instance via SSH and run:
apt-get update && apt-get install -y libgl1
Or even better, consider using docker containers for the project and add the installation commands to the Dockerfile.
Also, as https://stackoverflow.com/a/66473309/12416058 suggests, the package python3-opencv includes all system dependencies of OpenCV, so installing it may prevent further errors.
To install python3-opencv:
apt-get update && apt-get install -y python3-opencv
pip install -r requirements.txt
To install it in a Dockerfile:
RUN apt-get update && apt-get install -y python3-opencv
RUN pip install -r requirements.txt
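Putting both steps together, a minimal Dockerfile sketch might look like this (the base image tag and file layout are assumptions, not from the question):

```dockerfile
# Base image is an assumption; any Debian-based Python image works similarly
FROM python:3.9-slim

# python3-opencv pulls in OpenCV's system dependencies, including libGL
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-opencv && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
```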

Related

How can we use opencv in a multistage docker image?

I recently learned about the concept of building docker images based on a multi-staged Dockerfile.
I have been trying simple examples of multi-staged Dockerfiles, and they were working fine. However, when I tried implementing the concept for my own application, I was facing some issues.
My application is about object detection in videos, so I use python and Tensorflow.
Here is my Dockerfile:
FROM python:3-slim AS base
WORKDIR /objectDetector
COPY detect_objects.py .
COPY detector.py .
COPY requirements.txt .
ADD data /objectDetector/data/
ADD models /objectDetector/models/
RUN apt-get update && \
    apt-get install -y protobuf-compiler && \
    apt-get install -y ffmpeg libsm6 libxext6 && \
    apt-get install -y gcc
RUN python3 -m pip install --upgrade pip
RUN pip3 install tensorflow-cpu==2.9.1
RUN pip3 install opencv-python==4.6.0.66
RUN pip3 install opencv-contrib-python
WORKDIR /objectDetector/models/research
RUN protoc object_detection/protos/*.proto --python_out=.
RUN cp object_detection/packages/tf2/setup.py .
RUN python -m pip install .
RUN python object_detection/builders/model_builder_tf2_test.py
WORKDIR /objectDetector/models/research
RUN pip3 install wheel && pip3 wheel . --wheel-dir=./wheels
FROM python:3-slim
RUN python3 -m pip install --upgrade pip
COPY --from=base /objectDetector /objectDetector
WORKDIR /objectDetector
RUN pip3 install --no-index --find-links=/objectDetector/models/research/wheels -r requirements.txt
When I try to run my application in the final stage of the container, I receive the following error:
root@3f062f9a5d64:/objectDetector# python detect_objects.py
Traceback (most recent call last):
  File "/objectDetector/detect_objects.py", line 3, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'
So per my understanding, it seems that opencv-python is not successfully carried over from the 1st stage to the 2nd.
I have been searching around, and I found some good blogs and questions tackling the issue of multi-stage Dockerfiles, specifically for Python libraries. However, it seems I am missing something here.
Here are some references that I have been following to solve the issue:
How do I reduce a python (docker) image size using a multi-stage build?
Multi-stage build usage for cuda,cudnn,opencv and ffmpeg #806
So my question is: How can we use opencv in a multistage docker image?
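One detail worth noting about the pattern in the question: with --no-index --find-links, the final stage only installs what requirements.txt names, so opencv-python has to appear both as a built wheel and in requirements.txt. The usual wheel-passing sketch between stages looks roughly like this (paths and tags are placeholder assumptions, not this project's layout):

```dockerfile
# Stage 1: build wheels for every dependency, including opencv-python
FROM python:3-slim AS builder
WORKDIR /wheels
COPY requirements.txt .
RUN pip wheel -r requirements.txt --wheel-dir=/wheels

# Stage 2: install only from the wheels copied across, with no network access
FROM python:3-slim
COPY --from=builder /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links=/wheels -r requirements.txt
```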

How do I install dateinfer inside my Docker image

Some background: I'm new to Docker images and containers and to writing a Dockerfile. I currently have a Dockerfile which installs all the dependencies that I want through pip install commands, so it was very simple to build and deploy images.
But I now have a new requirement to use the dateinfer module, and that cannot be installed through a plain pip install command.
The repo has to be cloned first and then installed, and I'm having difficulty achieving this through a Dockerfile. The workaround I've been following for now is to run the container, install it manually in the directory with all the other dependencies, and commit the changes with dateinfer installed. But this is a very tedious and time-consuming process, and I want to achieve the same by just mentioning it in the Dockerfile along with all my other dependencies.
This is what my Dockerfile looks like:
FROM ubuntu:20.04
RUN apt update
RUN apt upgrade -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
RUN apt-get install -y libenchant1c2a
RUN apt install git -y
RUN pip3 install argparse
RUN pip3 install boto3
RUN pip3 install numpy==1.19.1
RUN pip3 install scipy
RUN pip3 install pandas
RUN pip3 install scikit-learn
RUN pip3 install matplotlib
RUN pip3 install plotly
RUN pip3 install kaleido
RUN pip3 install fpdf
RUN pip3 install regex
RUN pip3 install pyenchant
RUN pip3 install openpyxl
ADD core.py /
ENTRYPOINT [ "/usr/bin/python3.8", "/core.py"]
So when I try to install Dateinfer like this:
RUN git clone https://github.com/nedap/dateinfer.git
RUN cd dateinfer
RUN pip3 install .
It throws the following error :
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c pip3 install .' returned a non-zero code: 1
How do I solve this?
Each RUN directive in a Dockerfile runs in its own subshell. If you write something like this:
RUN cd dateinfer
That is a no-op: it starts a new shell, changes directory, and then the shell exits. When the next RUN command executes, you're back in the / directory.
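You can see the same behaviour in a plain shell; each RUN directive behaves like the child shell here (the directories are just for illustration):

```shell
# Simulating two consecutive RUN directives: each runs in its own shell,
# so a cd made in a child shell does not persist.
cd /var
sh -c 'cd /tmp'      # like `RUN cd /tmp`: the change is lost when the shell exits
result=$(pwd)        # like the next RUN: the working directory is still /var
echo "$result"
```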
The easiest way of resolving this is to include your commands in a single RUN statement:
RUN git clone https://github.com/nedap/dateinfer.git && \
cd dateinfer && \
pip3 install .
In fact, you would benefit from doing this with your other pip install commands as well; rather than a bunch of individual RUN commands, consider instead:
RUN pip3 install \
argparse \
boto3 \
numpy==1.19.1 \
scipy \
pandas \
scikit-learn \
matplotlib \
plotly \
kaleido \
fpdf \
regex \
pyenchant \
openpyxl
That will generally be faster because pip only needs to resolve dependencies once.
Rather than specifying all the packages individually on the command line, you could also put them into a requirements.txt file, and then use pip install -r requirements.txt.

problem install pika (rabbitmq sdk in python ) in docker _ no module named 'pika'

I am trying to install the RabbitMQ (pika) driver in my Python container. In local deployment there is no problem.
FROM ubuntu:20.04
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN apt-get update && apt-get -y install gcc python3.7 python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python","index.py"]
This is my requirements.txt file:
requests
telethon
Flask
flask-mongoengine
Flask_JWT_Extended
Flask_Bcrypt
flask-restful
flask-cors
jsonschema
werkzeug
pandas
xlrd
Kanpai
pika
Flask-APScheduler
The docker build steps complete with no errors and install all the dependencies, but when I try to run my container it crashes with this error:
no module named 'pika'
Installing python3.7 will not work here: you are still using Python 3.8 when you run the pip3 command, and your CMD will also start Python 3.8. I suggest you use the python:3.7 base image instead.
So try this:
FROM python:3.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN apt-get update && apt-get -y install gcc
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["python","index.py"]
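To confirm which interpreter your commands actually target, a quick diagnostic you can run inside the container (a generic sketch, not specific to this image) is:

```python
import sys
import sysconfig

interpreter = sys.executable                      # the interpreter actually running
site_packages = sysconfig.get_paths()["purelib"]  # where pip installs packages for this interpreter
print(interpreter)
print(site_packages)
```

If the interpreter path and the site-packages path name different Python versions than you expect, pip and your CMD are pointing at different installations.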

Always installing vscode plugin in docker container doesn't work

I am using vscode with a docker container. I have the following entry in my user settings.json.
"remote.containers.defaultExtensions": [
"ms-python.python",
"ms-azuretools.vscode-docker",
"ryanluker.vscode-coverage-gutters"
]
But when I build or rebuild the container, these plugins don't get installed automatically inside the container.
Am I doing something wrong?
Edit:
Here is what my Dockerfile looks like:
FROM ubuntu:bionic
RUN apt-get update
RUN apt-get install -y python3.6 python3-pip
RUN apt-get install -y git libgl1-mesa-dev
# Currently not using requirements.txt to improve caching
#COPY requirements.txt /home/projects/my_project/
#WORKDIR /home/projects/my_project/
#RUN pip3 install -r requirements.txt
RUN pip3 install torch pandas PyYAML==5.1.2 autowrap Cython==0.29.14
RUN pip3 install numpy==1.17.3 open3d-python==0.7.0.0 pytest==5.2.4 pptk
RUN pip3 install scipy==1.3.1 natsort matplotlib lxml opencv-python==3.2.0.8
RUN pip3 install Pillow scikit-learn testfixtures
RUN pip3 install pip-licenses pylint pytest-cov
RUN pip3 install autopep8
COPY . /home/projects/my_project/
This might be an old question, but to whomever it might concern, here is one solution. I encountered this problem too: in particular, the Python extension from VS Code would not install itself inside my Docker container. In order to get it to install the Python extension (and, in my case, everything else), you have to pin the extension version, like:
"extensions": [
"ms-azuretools.vscode-docker",
"ms-python.python#2020.9.114305",
"ms-python.vscode-pylance"
]
If you want to see this in action you can clone my repository. Simply open the repo in VS Code, install the Remote - Containers extension, and it should start the docker container all by itself.
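For context, an "extensions" list like the one above lives in .devcontainer/devcontainer.json; a minimal sketch (the name and dockerFile path are assumptions):

```json
{
    "name": "my_project",
    "dockerFile": "../Dockerfile",
    "extensions": [
        "ms-azuretools.vscode-docker",
        "ms-python.python#2020.9.114305",
        "ms-python.vscode-pylance"
    ]
}
```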

How to run SSL_library_init() from Python 3.7 docker image

Up until recently I've been using the openssl library within the python:3.6.6-jessie docker image and things worked as intended.
I'm using very basic Dockerfile configuration to install all necessary dependencies:
FROM python:3.6.6-jessie
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
RUN apt-get -qq update
RUN apt-get install openssl
RUN apt-get upgrade -y openssl
ADD requirements.txt /code/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
And I access and initialize the library itself with these two lines:
openssl = cdll.LoadLibrary("libssl.so")
openssl.SSL_library_init()
Things were working great with this approach.
This week I was upgrading Python and the libraries, and as a result I switched to a newer docker image:
FROM python:3.7.5
...
This immediately caused openssl to stop working with this exception:
AttributeError: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: undefined symbol: SSL_library_init
From this error I understand that libssl no longer provides the SSL_library_init method (or so it seems), which is rather weird, because the initializer name in the openssl documentation is the same.
I also tried to resolve this using the -stretch and -buster distributions, but the issue remains.
What is the correct approach to run SSL_library_init in these newer distributions? Maybe some additional Dockerfile configuration is required?
I think you need to install libssl1.0-dev
RUN apt-get install -y libssl1.0-dev
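The underlying cause is that OpenSSL 1.1.0 removed the exported SSL_library_init symbol; its replacement is OPENSSL_init_ssl, and explicit initialization became optional. If you would rather keep the newer libssl than pin the old one, a version-tolerant sketch of the loading code (assuming a Linux libssl is present) is:

```python
from ctypes import cdll
from ctypes.util import find_library

# find_library resolves e.g. "libssl.so.1.1" or "libssl.so.3" via ldconfig
libssl_path = find_library("ssl")
libssl = cdll.LoadLibrary(libssl_path or "libssl.so")

if hasattr(libssl, "SSL_library_init"):
    libssl.SSL_library_init()      # OpenSSL 1.0.x exports the old initializer
else:
    # OpenSSL >= 1.1.0: the initializer was renamed; (0, NULL) uses defaults
    libssl.OPENSSL_init_ssl(0, None)
```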
