How can I install NumPy and TensorFlow inside a Docker image?
I'm trying to create an image of this simple Flask app:
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify

print(tf.__version__)

with open('../assets/model/v1/model_architecture_V1.json', 'r') as f:
    model_json = f.read()

model = tf.keras.models.model_from_json(model_json)
model.load_weights("../assets/model/v1/model_weight_V1.h5")

app = Flask(__name__)

@app.route("/api/v1", methods=["GET"])
def getPrediction():
    prediction = model.predict()  # note: predict() still needs input data here
    return jsonify({"Fe": 3123})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=4000, debug=True)
This is my Dockerfile:
FROM alpine:3.10
RUN apk add --no-cache python3-dev \
    && pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
CMD ["python3","src/app.py"]
And this is my requirements.txt:
Flask==1.1.2
numpy==1.18.1
tensorflow==2.0.0
When I build the image, Docker throws an error saying that tensorflow and numpy can't be found.
The issue here seems to be missing system libraries needed to install the packages from .whl files.
When creating Docker images for Python that include heavy libraries like TensorFlow, I would suggest using the official Debian-based images.
Please see the Dockerfile below, using Debian Buster:
FROM python:3.7.5-buster
RUN echo \
    && apt-get update \
    && apt-get --yes install apt-file \
    && apt-file update

RUN echo \
    && apt-get --yes install build-essential
ARG USER=nobody
RUN usermod -aG sudo $USER
RUN pip3 install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip3 --no-cache-dir install -r requirements.txt
USER $USER
# Using 4000 here as you used 4000 in the code. Flask has a default of 5000
EXPOSE 4000
ENTRYPOINT ["python"]
CMD ["app/app.py"]
I used the commands below to build and run the Docker image, and got the result at http://0.0.0.0:4000/api/v1:
docker build -t tfdocker:v1 .
docker run -p 4000:4000 -t tfdocker:v1
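For example, a quick smoke test from the host (a sketch; it assumes the handler runs without errors):
curl http://localhost:4000/api/v1
# expected, per the handler above: {"Fe": 3123}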
For reference:
This was my directory structure:
├── Dockerfile
├── app
│   └── app.py
└── requirements.txt
1 directory, 3 files
The contents of requirements.txt were:
Flask==1.1.2
numpy==1.18.4
tensorflow==2.2.0
Hope this helps!
Related
I am trying to host a simple Dash app on a Raspberry Pi in a Docker environment. The app runs as expected locally, but when building for ARMv6 and pulling the image, I get the following error (shortened for brevity):
ImportError: Unable to import required dependencies:
numpy:
[...]
Original error was: No module named 'numpy.core._multiarray_umath'
I build the app for ARMv6 and push it to my DockerHub repository using the following command:
docker buildx build --platform linux/arm/v6 -t <username>/<repo-name>:<tag> --push .
The image is then downloaded into the Docker environment on the Raspberry Pi using docker-compose pull and started using docker-compose up.
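One way to narrow this down (a sketch, reusing the placeholder image tag from the buildx command above) is to reproduce the numpy import directly inside the pulled image; if it fails there too, the broken piece is the installed wheel rather than the application code:
docker run --rm <username>/<repo-name>:<tag> python -c "import platform; print(platform.machine()); import numpy; print(numpy.__version__)"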
My Dockerfile contains the following:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
# Problems with brotli from piwheels mean it needs to be installed from another index.
RUN pip3 install --extra-index-url https://www.piwheels.org/simple --no-cache-dir -r requirements.txt \
    && pip3 uninstall -y brotli \
    && pip3 install --no-cache-dir brotli
COPY . .
CMD [ "python", "app.py" ]
requirements.txt:
numpy==1.21.4
pandas==1.3.5
dash==2.0.0
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-bootstrap-components==1.0.2
dash-table==5.0.0
plotly==5.3.1
mysql-connector-python==8.0.27
dash-bootstrap-templates==1.0.4
I am trying to install the RabbitMQ (pika) driver in my Python container; in a local deployment there is no problem.
FROM ubuntu:20.04
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN apt-get update && apt-get -y install gcc python3.7 python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python","index.py"]
This is my requirements.txt file:
requests
telethon
Flask
flask-mongoengine
Flask_JWT_Extended
Flask_Bcrypt
flask-restful
flask-cors
jsonschema
werkzeug
pandas
xlrd
Kanpai
pika
Flask-APScheduler
The docker build steps complete with no errors and all the dependencies install, but when I try to run my container it crashes with this error:
no module named 'pika'
Installing python3.7 will not work here: the pip3 command still installs into Python 3.8, and your CMD will also start Python 3.8. I suggest you use the python:3.7 base image.
So try this:
FROM python:3.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN apt-get update && apt-get -y install gcc
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["python","index.py"]
I have a Python script that requires OpenCV's haarcascade.xml file to perform facial recognition. I successfully built and pushed a Docker image to Kubernetes (Google Cloud Platform) with the following Dockerfile:
FROM python:3.7
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/*
RUN sudo apt-get update && sudo apt-get install -y apt-utils build-essential cmake && sudo apt-get install -y libgtk-3-dev && sudo apt-get install -y libboost-all-dev && sudo apt-get install -y default-libmysqlclient-dev
RUN pip install setuptools mysqlclient cmake Flask gunicorn pybase64 google-cloud-vision google-cloud-storage protobuf nltk fuzzywuzzy PyPDF2 numpy python-csv google-cloud-language pandas SQLAlchemy PyMySQL pytz Unidecode torch tensorflow==1.15 transformers imutils scikit-learn scikit-image scipy==1.4.1 opencv-python text2num sklearn
RUN pip install dlib
COPY app.py ./app.py
COPY haarcascade_frontalface_default.xml ./haarcascade_frontalface_default.xml
CMD ["python", "./app.py"]
In the Python notebook I reference the haarcascade.xml file without a path:
def face_recog(image):
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    img = cv2.imread(image)
The pod runs smoothly, with no errors and no restarts, but this part of the code does not work because the haarcascade file is "missing".
I know I have two options:
Put the full path of haarcascade.xml in the notebook (which I don't know, given it runs inside a Docker image); one way around this is sketched below.
Fix the COPY haarcascade.xml line in the Dockerfile so that the Python script finds it.
Any help is appreciated.
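For what it's worth, here is a sketch of the first option. It relies on cv2.data.haarcascades, the directory where the opencv-python wheel ships its bundled cascade files, so the XML would not even need to be copied into the image:
import cv2

# Build an absolute path that works regardless of the container's working directory.
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)
Alternatively, since the Dockerfile above sets WORKDIR $APP_HOME and copies the XML there, the bare relative name should resolve as long as the process starts in /app and never changes directory.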
I'm getting the following error when executing
docker-compose up --build
web_1 | Traceback (most recent call last):
web_1 | File "glm-plotter.py", line 4, in <module>
web_1 | from flask import Flask, render_template, request, session
web_1 | ModuleNotFoundError: No module named 'flask'
glm-plotter_web_1 exited with code 1
I tried changing "Flask" to "flask" in requirements.txt.
Dockerfile
FROM continuumio/miniconda3
RUN apt-get update && apt-get install -y python3
RUN apt-get install -y python3-pip
RUN apt-get install -y build-essential
COPY requirements.txt /
RUN pip3 install --trusted-host pypi.python.org -r /requirements.txt
ADD ./glm-plotter /code
WORKDIR /code
RUN ls .
CMD ["python3", "glm-plotter.py"]
docker-compose.yml
version: "3"
services:
web:
volumes:
- ~/.aws:/root/.aws
build: .
ports:
- "5000:5000"
requirements.txt
click==6.6
Flask==0.11.1
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
numpy==1.11.1
pandas==0.18.1
python-dateutil==2.5.3
pytz==2016.4
six==1.10.0
Werkzeug==0.11.10
glm-plotter.py
from flask import Flask, render_template, request, session
import os, json
import GLMparser
...
I created a Dockerfile and it builds fine. You might want to adapt it to your needs and add the last few lines from your Dockerfile above:
FROM library/python:3.6-stretch
COPY requirements.txt /
RUN pip install -r /requirements.txt
Please note that in the library/python images no explicit version suffix is needed for the python or pip commands, since only one version is installed.
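Combined with the tail of the question's Dockerfile, the whole file might look like this (a sketch):
FROM library/python:3.6-stretch
COPY requirements.txt /
RUN pip install -r /requirements.txt
ADD ./glm-plotter /code
WORKDIR /code
CMD ["python", "glm-plotter.py"]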
If you use the miniconda image, you have to create a new environment and activate it before installing the packages, and also run the program inside that environment in your Dockerfile. Something like:
FROM continuumio/miniconda3
RUN apt-get update && apt-get install -y python3
RUN apt-get install -y python3-pip
RUN apt-get install -y build-essential
COPY requirements.txt /
RUN ["conda", "create", "-n", "myenv", "python=3.4"]
RUN /bin/bash -c "source activate myenv && pip install --trusted-host pypi.python.org -r /requirements.txt"
ADD ./glm-plotter /code
WORKDIR /code
RUN ls .
CMD /bin/bash -c "source activate myenv && python glm-plotter.py"
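As an aside, newer conda releases (4.6 and later) also provide conda run, which avoids the source activate dance; a sketch, assuming the conda in continuumio/miniconda3 is recent enough:
RUN conda run -n myenv pip install --trusted-host pypi.python.org -r /requirements.txt
CMD ["conda", "run", "-n", "myenv", "python", "glm-plotter.py"]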
I am having lots of problems creating a Docker container for Plotly using python:3.6-alpine. Plotly also uses Pandas and NumPy. When I run my Dockerfile below, the "RUN venv/bin/pip install -r requirements.txt" step fails. Does anyone have recommendations? Am I missing requirements?
FROM python:3.6-alpine
RUN adduser -D visualdata
RUN pip install --upgrade pip
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn
#RUN venv/bin/pip install install python3-pymysql
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]
If you look at the official Python Docker image repository, there is a Dockerfile example that illustrates the pip step:
RUN pip install --no-cache-dir -r requirements.txt
You should be able to use pip directly instead of venv/bin/pip.
You do not really need a virtualenv in a Docker container if you are only running one application inside it; the container already provides its own isolated environment.
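Applying that advice, a simplified sketch of the Dockerfile (it also switches to a Debian-based tag, since Alpine's musl libc cannot use the prebuilt manylinux wheels for NumPy/Pandas, as discussed earlier in this thread; python:3.6-slim is an assumption, use whatever tag you prefer):
FROM python:3.6-slim
RUN adduser --disabled-password --gecos '' visualdata
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt gunicorn
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]
If boot.sh references venv/bin, those lines would need updating to call gunicorn and flask directly.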