Until recently I had been using the OpenSSL library from within the python:3.6.6-jessie Docker image, and things worked as intended.
I'm using a very basic Dockerfile to install all the necessary dependencies:
FROM python:3.6.6-jessie
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
RUN apt-get -qq update
RUN apt-get install openssl
RUN apt-get upgrade -y openssl
ADD requirements.txt /code/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
I access and initialize the library itself with these two lines:
openssl = cdll.LoadLibrary("libssl.so")
openssl.SSL_library_init()
Things were working great with this approach.
This week I upgraded Python and the libraries, and as a result I switched to a newer Docker image:
FROM python:3.7.5
...
This immediately caused OpenSSL to stop working with this exception:
AttributeError: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: undefined symbol: SSL_library_init
From this error I understand that libssl no longer provides the SSL_library_init function (or so it seems), which is a rather strange issue because the initializer has the same name in the OpenSSL documentation.
I also tried the -stretch and -buster variants of the image, but the issue remains.
What is the correct approach to running SSL_library_init on these newer distributions? Is some additional Dockerfile configuration required?
I think you need to install libssl1.0-dev, which provides the 1.0-series library where SSL_library_init still exists:
RUN apt-get install -y libssl1.0-dev
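Alternatively, if you want to keep the newer image with OpenSSL 1.1: SSL_library_init was removed in 1.1.0 and replaced by OPENSSL_init_ssl (initialization otherwise happens automatically). A minimal ctypes sketch of that fallback, assuming the initializer call is all you need:
import ctypes
import ctypes.util

# Resolve libssl; the exact soname differs between the jessie and buster based images.
libssl_path = ctypes.util.find_library("ssl") or "libssl.so.1.1"
openssl = ctypes.CDLL(libssl_path)

if hasattr(openssl, "SSL_library_init"):
    # OpenSSL 1.0.x still exports the old explicit initializer.
    openssl.SSL_library_init()
else:
    # OpenSSL 1.1.0+ removed SSL_library_init; OPENSSL_init_ssl is its replacement.
    openssl.OPENSSL_init_ssl.argtypes = [ctypes.c_uint64, ctypes.c_void_p]
    openssl.OPENSSL_init_ssl(0, None)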
I am relatively new to TensorFlow, so I have been trying to run simple applications locally, and everything was going well.
At some point I wanted to Dockerize my application. Building the Docker image produced no errors; however, when I tried to run my application, I received the following error:
AttributeError: module 'tensorflow' has no attribute 'gfile'. Did you mean: 'fill'?
After googling the problem, I understood that it is caused by API differences between TF1 and TF2.
One explanation of the problem can be found here.
Locally, I am using TF2 (specifically 2.9.1) inside a virtual environment.
When Dockerizing, I confirmed from inside the Docker container that my TF version is the same.
I also tried running the container in interactive mode, creating a virtual environment, and installing all the dependencies manually, exactly as I did locally, but still with no success.
My Dockerfile is as follows:
FROM python:3-slim
# ENV VIRTUAL_ENV=/opt/venv
# RUN python3 -m venv $VIRTUAL_ENV
# ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /objectDetector
RUN apt-get update
RUN apt-get install -y protobuf-compiler
RUN apt-get install ffmpeg libsm6 libxext6 -y
RUN python3 -m pip install --upgrade pip
RUN pip3 install tensorflow==2.9.1
RUN pip3 install tensorflow-object-detection-api
RUN pip3 install opencv-python
RUN pip3 install opencv-contrib-python
COPY detect_objects.py .
COPY detector.py .
COPY helloWorld.py .
ADD data /objectDetector/data/
ADD models /objectDetector/models/
So my question is: how can I run an application using TensorFlow 2 from a Docker container?
Am I missing something here?
Thanks in advance for any help or explanation.
I believe that in TensorFlow 2.0:
tf.gfile was replaced by tf.io.gfile
Can you try this?
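Roughly, a small sketch (the file name is only a placeholder; the alias at the end is a common stopgap for third-party code, such as the object detection utilities, that still references the TF1 name):
import tensorflow as tf

# TF2 style: tf.io.gfile replaces the old tf.gfile module.
with tf.io.gfile.GFile("labels.pbtxt", "r") as f:  # placeholder file name
    contents = f.read()

# Stopgap for libraries that still reference the TF1 name:
tf.gfile = tf.io.gfile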
Have a nice day,
Gabriel
I deployed a Django application to AWS Elastic Beanstalk (aws-eb) and there were no errors during that process. Health is green and OK, but the page gives an Internal Server Error, so I checked the logs and saw the error below.
... web: from .cv2 import
... web: ImportError: libGL.so.1: cannot open shared object file: No such file or directory
OpenCV should be installed while requirements.txt is processed during deployment, because it includes opencv-python==4.5.5.64.
So I am not quite sure what the above error is pointing at.
And this is how I import it in helpers.py:
import requests
import cv2
libGL.so is provided by the package libgl1; pip3 install opencv-python is not sufficient here.
Connect to the AWS instance via SSH and run:
apt-get update && apt-get install libgl1
Or, even better, consider using Docker containers for the project and adding the installation commands to the Dockerfile.
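A minimal sketch of such a Dockerfile, assuming a Debian-based Python base image (libglib2.0-0 is another system library that opencv-python commonly needs):
FROM python:3.9-slim

# System libraries the opencv-python wheel links against
RUN apt-get update && \
    apt-get install -y --no-install-recommends libgl1 libglib2.0-0 && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .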
Also, as https://stackoverflow.com/a/66473309/12416058 suggests, the package python3-opencv includes all the system dependencies of OpenCV, so installing it may prevent further errors.
To install python3-opencv:
apt-get update && apt-get install -y python3-opencv
pip install -r requirements.txt
To install it in a Dockerfile:
RUN apt-get update && apt-get install -y python3-opencv
RUN pip install -r requirements.txt
I am using VS Code with a Docker container. I have the following entry in my user settings.json:
"remote.containers.defaultExtensions": [
"ms-python.python",
"ms-azuretools.vscode-docker",
"ryanluker.vscode-coverage-gutters"
]
But when I build or rebuild the container, these extensions don't get installed automatically inside the container.
Am I doing something wrong?
Edit:
Here is what my Dockerfile looks like:
FROM ubuntu:bionic
RUN apt-get update
RUN apt-get install -y python3.6 python3-pip
RUN apt-get install -y git libgl1-mesa-dev
# Currently not using requirements.txt to improve caching
#COPY requirements.txt /home/projects/my_project/
#WORKDIR /home/projects/my_project/
#RUN pip3 install -r requirements.txt
RUN pip3 install torch pandas PyYAML==5.1.2 autowrap Cython==0.29.14
RUN pip3 install numpy==1.17.3 open3d-python==0.7.0.0 pytest==5.2.4 pptk
RUN pip3 install scipy==1.3.1 natsort matplotlib lxml opencv-python==3.2.0.8
RUN pip3 install Pillow scikit-learn testfixtures
RUN pip3 install pip-licenses pylint pytest-cov
RUN pip3 install autopep8
COPY . /home/projects/my_project/
This might be an old question, but to whomever it may concern, here is one solution. I ran into the same problem: in particular, the Python extension from VS Code would not install itself inside my Docker container. To get it to install the Python extension (and, in my case, everything else), you have to specify the version of the Python extension, like:
"extensions": [
"ms-azuretools.vscode-docker",
"ms-python.python#2020.9.114305",
"ms-python.vscode-pylance"
]
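For reference, that extensions array belongs in .devcontainer/devcontainer.json; a minimal sketch (the name and the Dockerfile path are placeholders) could look like:
{
    "name": "my-project",
    "build": { "dockerfile": "../Dockerfile" },
    "extensions": [
        "ms-azuretools.vscode-docker",
        "ms-python.python#2020.9.114305",
        "ms-python.vscode-pylance"
    ]
}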
If you want to see this in action, you can clone my repository. Simply open the repo in VS Code, install the Remote - Containers extension, and it should start the Docker container all by itself.
I am trying to create a Docker image with Ubuntu 16.04 as the base. I want to install a few Python packages like pandas, Flask, etc. I have kept all the packages in requirements.txt. But when I try to build the image, I get:
Could not find a version that satisfies the requirement requests (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for requests (from -r requirements.txt (line 1))
Basically, I have not pinned any versions in requirements.txt; I expect it to take the latest available compatible version of each package. But I get the same issue for every package.
My Dockerfile is as follows:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev build-essential cmake pkg-config libx11-dev libatlas-base-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /testing/requirements.txt
WORKDIR /testing
RUN pip3 install -r requirements.txt
and requirements.txt is:
pandas
requests
PyMySQL
Flask
Flask-Cors
Pillow
face-recognition
Flask-SocketIO
Where am I going wrong? Can anybody help?
I too ran into the same situation. I observed that pip was trying to reach the network from within Docker, but the build behaved as if it were running standalone without network access, so it could not locate the packages. In this type of situation, either
No matching distribution found
or sometimes
Retrying ...
errors may occur.
I used the --network option in the docker build command, as shown below, to overcome this error; it makes the build use the host network to download the required packages.
docker build --network=host -t tracker:latest .
Try using this:
RUN python3.6 -m pip install --upgrade pip \
&& python3.6 -m pip install -r requirements.txt
Using it this way, you are specifying the Python version whose pip should search for and install those packages.
Change it to python3.7 if you wish to use version 3.7.
I suggest using the official python image instead. As a result, your Dockerfile will now become:
FROM python:3
WORKDIR /testing
COPY ./requirements.txt /testing/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
... etc ...
Now re: Angular/Node. You have two options from here: 1) Install Angular/Node on the Python image; or 2) Use Docker's multi-stage build feature so you build the Angular and Python-specific images before merging them together. Option 2 is recommended but it would take some work. It would probably look like this:
FROM node:8 as node
# Angular-specific build
FROM python:3 as python
# Python-specific build
# Then copy your data from the Angular image to the Python one:
COPY --from=node /usr/src/app/dist/angular-docker /usr/src/app
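A more complete sketch of option 2, assuming the same dist/angular-docker output path as above (the npm commands and the directory layout depend on your Angular project):
# Stage 1: build the Angular front end
FROM node:8 as node
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: the Python image that actually runs
FROM python:3 as python
WORKDIR /testing
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Pull the built front end out of the first stage
COPY --from=node /usr/src/app/dist/angular-docker /usr/src/app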
Installing grpcio-reflection with pip takes a really long time.
It is strange because the package is only 8 KB on PyPI, yet downloading it takes more than a minute, while other packages that are megabytes in size download really fast.
UPDATE:
It was not the download; there is a lot of compilation going on. It seems that the feature is still in alpha, so the package is not precompiled like the standard grpcio.
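One quick way to confirm this (just a sanity check, not part of the original report) is to ask pip to fetch only that package and look at what it downloads; a .tar.gz sdist means it will be compiled locally, a .whl wheel will not:
pip download grpcio-reflection --no-deps -d /tmp/refl
ls /tmp/refl   # a grpcio-reflection-*.tar.gz here means a source build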
UPDATE2: Repro steps
I have just opened an issue here: https://github.com/grpc/grpc/issues/12992 and I am copying the repro steps here for completeness.
It seems that installation of the grpcio-reflection package stalls depending on which other packages are on the same command line.
This can easily be reproduced with these two different Dockerfiles:
Dockerfile.fast - Container creation time ~1m 23s
#Download base ubuntu image
FROM ubuntu:16.04
RUN apt-get update && \
apt-get -y install ca-certificates curl
# Prepare pip
RUN apt-get -y install python-pip
RUN pip install -U pip
RUN pip install grpcio grpcio-tools
RUN pip install grpcio-reflection # Two lines is FAST
Dockerfile.slow - Container creation time 5m 20s
#Download base ubuntu image
FROM ubuntu:16.04
RUN apt-get update && \
apt-get -y install ca-certificates curl
# Prepare pip
RUN apt-get -y install python-pip
RUN pip install -U pip
RUN pip install grpcio grpcio-tools grpcio-reflection # Single line is SLOW
Timing the container builds:
time docker build --rm --no-cache -f Dockerfile.fast -t repro_reflbug_fast:latest .
......
real 1m22.295s
user 0m0.060s
sys 0m0.040s
time docker build --rm --no-cache -f Dockerfile.slow -t repro_reflbug_slow:latest .
.....
real 6m28.290s
user 0m0.052s
sys 0m0.052s
.....
I haven't had time to investigate yet, but the second case blocks for a long time while the first one doesn't.
It turns out that this issue was accepted as valid in the corresponding GitHub repo. It is now being discussed here:
https://github.com/grpc/grpc/issues/12992