This question already has answers here: Activate python virtualenv in Dockerfile (6 answers). Closed 2 years ago.
I am trying to set up a Python virtual environment in a Docker image as part of docker build.
The terminal output looks fine when I run docker build ., but when I log into the container, no packages are installed in my virtual environment.
# Dockerfile
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.8 get-pip.py
RUN pip install virtualenv
RUN virtualenv venv
RUN /home/ubuntu/venv/bin/pip install -r auto/requirements.txt
...
# Docker build command
docker build --no-cache -t auto-server:1.0 .
# Terminal output
Step 27/27 : RUN /home/ubuntu/venv/bin/pip install -r auto/requirements.txt
---> Running in d27dbb9a4c97
Collecting asgiref==3.2.10
Downloading asgiref-3.2.10-py3-none-any.whl (19 kB)
Collecting beautifulsoup4==4.9.1
Downloading beautifulsoup4-4.9.1-py3-none-any.whl (115 kB)
Collecting Django==3.1.1
Downloading Django-3.1.1-py3-none-any.whl (7.8 MB)
...
Successfully installed Django-3.1.1 asgiref-3.2.10 beautifulsoup4-4.9.1 fake-useragent-0.1.11 joblib-0.16.0 numpy-1.19.2 pandas-1.1.2 python-dateutil-2.8.1 pytz-2020.1 scikit-learn-0.23.2 scipy-1.5.2 six-1.15.0 sklearn-0.0 soupsieve-2.0.1 sqlparse-0.3.1 threadpoolctl-2.1.0
Here is what I get when I list packages in my virtual environment:
$ docker exec -ti auto-server bash
root@9c1f914d1b7b:/home/ubuntu# source venv/bin/activate
(venv) root@9c1f914d1b7b:/home/ubuntu# pip list
Package Version
---------- -------
pip 20.2.2
setuptools 49.6.0
wheel 0.35.1
WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.
You should consider upgrading via the '/home/ubuntu/venv/bin/python -m pip install --upgrade pip' command.
Is this the right way to do it? How can I make sure the packages are actually installed?
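A quick way to confirm where the packages actually land is to inspect the environment during the build itself, for example (assuming the venv really lives at /home/ubuntu/venv):
RUN /home/ubuntu/venv/bin/pip list
RUN ls /home/ubuntu/venv/lib/python3.8/site-packages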
In the end, having a virtual environment is not mandatory in a container development environment, so I just set up the image's Python environment directly:
# Dockerfile
...
RUN python3.8 get-pip.py
RUN pip install -r auto/requirements.txt
This is not exactly what I was looking for, but it does the job.
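For reference, the pattern from the linked duplicate keeps the virtual environment in the image by creating it at a fixed absolute path and putting its bin directory first on PATH, so later RUN steps and the running container both pick it up. A minimal sketch, assuming a python:3.8-slim base and a requirements.txt in the build context:
# Dockerfile
FROM python:3.8-slim
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt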
I use a simple Dockerfile for dev mode:
# pull official base image
FROM python:3.6-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
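Since the dependencies go straight into the image's global site-packages here, there is nothing to activate afterwards. To build it and open a shell in the result (the image tag is arbitrary):
docker build -t app-dev .
docker run --rm -it app-dev sh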
This question already has answers here: WARNING: Running pip as the 'root' user (4 answers). Closed 10 months ago.
I have this Dockerfile:
FROM python:3.8-slim
WORKDIR /app
COPY . .
RUN apt-get update
RUN apt-get install -y python3 python3-pip python3-venv
RUN pip freeze > requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python3", "main.py"]
Everything works fine until this line:
RUN pip install --no-cache-dir -r requirements.txt
After running docker run --rm -it name bash and then pip install -r requirements.txt inside the container, I see this warning:
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting
behaviour with the system package manager. It is recommended to use a virtual environment
instead: https://pip.pypa.io/warnings/venv
I found a suggestion (which didn't work for me) that this can be resolved just by creating a new user, but that doesn't seem like an optimal solution. How can I fix this?
In my case the problem was the version of the images. With this Dockerfile I was able to fix it:
FROM python:3.9.3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
PS: I don't know for sure whether this was the cause, but this image has the same Python version that I have on my computer, which could have an impact on the dependencies.
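If you would rather keep the original base image and address the warning itself, the two usual routes are installing inside a virtual environment (which is what the warning suggests) or explicitly telling pip that installing as root is intentional. A sketch, with the caveat that PIP_ROOT_USER_ACTION is only recognised by pip 22.1 and later:
# Option 1: install inside a venv so pip no longer touches the system interpreter
FROM python:3.8-slim
WORKDIR /app
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
# Option 2: keep installing as root but silence the warning (pip >= 22.1 only)
# ENV PIP_ROOT_USER_ACTION=ignore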
I have a Flask application with the following requirements.txt
chalice
matplotlib
sklearn
numpy
scipy
pandas
flask
flask_restful
and the following Dockerfile:
FROM python:3.6.1-alpine
WORKDIR /project
ADD . /project
RUN pip install -r requirements.txt
CMD ["python","app.py"]
Running the command docker image build -t clf_test .
generates the following error:
You are using pip version 9.0.1, however version 20.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
It seems that matplotlib can't be installed for some reason.
Running pip install -r requirements.txt locally doesn't produce any errors.
matplotlib must be built from source, and compiling it requires a number of supporting libraries as well as a functioning C compiler. You can figure out what these are and install them so that it builds properly...
...or you can just base your Dockerfile on the non-alpine python:3.6.1 image and then apt-get install python3-matplotlib before installing your other requirements. E.g., this builds without errors:
FROM python:3.6.1
WORKDIR /project
ADD . /project
RUN apt update; apt-get -y install python3-matplotlib
RUN pip install -r requirements.txt
CMD ["python","app.py"]
I ran into a libxml dependency issue when creating a Docker container with Python, installing the dependency libraries from an Ubuntu image:
# pull official base image
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install g++ gcc libxml2 libxslt-dev -y
# install dependencies
FROM python:3.8.0-alpine
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
I get this compilation error:
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
You are installing those packages in ubuntu, but not alpine. In the builder pattern you would need to copy over the files from the builder layer into the runtime layer. However, ubuntu != alpine, so compiled binaries will not work.
You will need to leverage the apk installer to add those packages to the alpine layer:
...
RUN apk update && apk add g++ gcc libxml2 libxslt-dev
RUN python -m pip install --upgrade pip
...
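Put together as a single-stage file, that might look like the sketch below. Note that building lxml-style packages from source generally needs the -dev headers (e.g. libxml2-dev), not just the runtime libxml2:
# Dockerfile (single stage, sketch)
FROM python:3.8.0-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add g++ gcc libxml2-dev libxslt-dev
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /usr/src/app/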
Currently I am creating a virtual environment in the first stage.
I run pip install -r requirements.txt, which installs the executables into the /venv/bin directory.
In the second stage I copy the /venv/bin directory, but when running the Python app I get a "module not found" error, i.e. I need to run pip install -r requirements.txt again to run the app.
The application runs on Python 2.7 and some of the dependencies require a compiler to build. Those dependencies fail with the Alpine image's compiler and only work with the Ubuntu compiler or the official python:2.7 image (which in turn uses Debian).
Am I missing some command in the second stage that would let me use the copied dependencies instead of installing them again?
FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --upgrade pip
RUN python3 -m venv /venv
COPY ./requirements.txt /project/requirements/
RUN /venv/bin/pip install -r /project/requirements/requirements.txt
COPY . /venv/bin
FROM python:2.7-slim AS release
COPY --from=build /venv /venv
WORKDIR /venv/bin
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
#RUN pip install -r requirements.txt //
RUN cp settings.py.sample settings.py
CMD ["/venv/bin/python3", "-m", "main.py"]
I am trying to avoid running pip install -r requirements.txt in the second stage to reduce the image size, which is not happening currently.
Only copying the bin directory isn't enough; packages are installed in lib/pythonX.X/site-packages, for example, and headers go under include. I'd just copy the whole venv directory. You can also run pip with --no-cache-dir to avoid saving the wheel archives.
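A sketch of what that could look like, assuming requirements.txt and main.py sit at the project root, and using virtualenv rather than python3 -m venv since the base interpreter is 2.7:
FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --no-cache-dir virtualenv && virtualenv /venv
COPY requirements.txt /project/requirements/requirements.txt
RUN /venv/bin/pip install --no-cache-dir -r /project/requirements/requirements.txt
FROM python:2.7-slim AS release
# copy the whole environment, not just its bin/ directory
COPY --from=build /venv /venv
WORKDIR /app
COPY . /app
CMD ["/venv/bin/python", "main.py"]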
Insert this before everything else:
FROM yourimage:tag AS build
I'm using centos/python-36-centos7 as the base image of my application. In the Dockerfile, after RUN pip install --upgrade pip, pip successfully upgrades from 9.0.1 to 18.0. At the next step, RUN pip install --no-cache-dir -r requirements.txt, Docker keeps throwing this error:
/bin/sh: /opt/app-root/bin/pip: /opt/app-root/bin/python3: bad interpreter: No such file or directory
The command '/bin/sh -c pip install --no-cache-dir -r requirements.txt' returned a non-zero code: 126
Operating system: CentOS 7.2 64-bit
Docker version: 18.06.0-ce, build 0ffa825
Complete Dockerfile:
FROM centos/python-36-centos7
MAINTAINER SamYu,sam_miaoyu@foxmail.com
USER root
ENV TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
COPY . /faceDetectBaseImg
COPY ./pip.conf /etc/pip.conf
WORKDIR /faceDetectBaseImg
RUN yum install -y epel-release
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
RUN rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
RUN rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
RUN yum install -y ffmpeg
RUN yum -y install libXrender
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
pip.conf:
[global]
trusted-host = mirrors.aliyun.com
index-url = https://mirrors.aliyun.com/pypi/simple
UPDATE:
The problem was fixed by removing pip install --upgrade pip and staying on pip 9.0.1. I think it has something to do with pip 18.0 vs. the CentOS 7 Docker image; I would still like to know whether there is a fix under pip 18.0.
The problem was then fixed completely by pulling the CentOS 7 image and building Python from source. As a reminder, don't use the latest version of centos/python-36-centos7 (as of June 2018).
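Since the failure is the pip wrapper script's shebang pointing at a missing /opt/app-root/bin/python3, another thing worth trying (an untested suggestion for this particular image) is to invoke pip through the interpreter so the wrapper script is never executed:
RUN python -m pip install --upgrade pip
RUN python -m pip install --no-cache-dir -r requirements.txt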