I am having a lot of problems creating a Docker container based on python:3.6-alpine for Plotly. Plotly also depends on Pandas and NumPy. When I build my Dockerfile below, the "RUN venv/bin/pip install -r requirements.txt" step fails. Does anyone have recommendations? Am I missing some requirements?
FROM python:3.6-alpine
RUN adduser -D visualdata
RUN pip install --upgrade pip
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn
#RUN venv/bin/pip install install python3-pymysql
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]
If you look at the official repository for the Python Docker image, there is a Dockerfile example that illustrates the pip step:
RUN pip install --no-cache-dir -r requirements.txt
You should be able to use pip directly instead of venv/bin/pip.
You do not really need a virtualenv in a Docker container if you are only running one application inside it; the container already provides its own isolated environment.
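For illustration, here is a minimal, untested sketch of the same Dockerfile with the virtualenv dropped; it assumes the project layout from the question stays the same:
FROM python:3.6-alpine
# NumPy/Pandas publish no musl wheels, so alpine may also need build tools:
# RUN apk add --no-cache build-base
RUN adduser -D visualdata
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip
# install straight into the image's Python instead of venv/bin/pip
RUN pip install --no-cache-dir -r requirements.txt gunicorn
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]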
I want to create a Docker image (Docker version 20.10.20) that installs the Python libraries from a requirement.txt file containing 50 libraries. How can I proceed without running into root user permission issues? Here is the file:
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
WORKDIR /Destop/DS
# COPY requirement.txt ./
# RUN pip install -r requirement.txt
# it contains only pandas==1.5.1
COPY script2.py ./
CMD ["python3", "./script2.py"]
It failed at the requirement.txt step.
The error: it takes a lot of time while creating the image, because it asks for root permission.
For me the only problem in your Dockerfile is the line RUN apt install python -y, which errors with Package 'python' has no installation candidate.
That is expected, since python refers to version 2.x of Python, which is deprecated and no longer present in the default Ubuntu repositories.
Changing your Dockerfile to use Python version 3.x worked fine for me.
FROM ubuntu:latest
RUN apt update
RUN apt install python3 python3-pip -y
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip3 install -r requirement.txt
COPY script2.py ./
CMD ["python3", "./script2.py"]
To test, I used this requirement.txt:
pandas==1.5.1
and this script2.py:
import pandas as pd
print(pd.__version__)
With this, building the Docker image and running a container from it executed successfully:
docker build -t myimage .
docker run --rm myimage
I am trying to create a Docker container for a FastAPI application.
This application is going to use a private pip package hosted on GitHub.
During local development, I used the following command to install the dependency:
pip install git+https://<ACCESS_TOKEN>:x-oauth-basic@github.com/username/projectname
I tried the same approach inside the Dockerfile, but without success:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
ARG ACCESS_TOKEN=default_value
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN echo "pip install git+https://${ACCESS_TOKEN}:x-oauth-basic#github.com/username/projectname"
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . /code
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]
docker build --build-arg ACCESS_TOKEN=access_token_value .
The image builds without errors, and during the build process I can see that the token is passed correctly.
However, after running the container with docker run <containerid> I get the following error:
ModuleNotFoundError: No module named 'projectname'
Has anyone tried such a thing before?
Is this the correct approach?
If I am not mistaken, you can run your pip command without echo:
RUN pip install git+https://${ACCESS_TOKEN}:x-oauth-basic@github.com/username/projectname
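Note that RUN echo "pip install ..." only prints the command string during the build; it never invokes pip, which is why the module is missing at runtime. A minimal, untested sketch of the corrected Dockerfile, keeping the ARG setup and placeholder names from the question:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
ARG ACCESS_TOKEN=default_value
RUN /usr/local/bin/python -m pip install --upgrade pip
# run the install directly instead of echoing it
RUN pip install git+https://${ACCESS_TOKEN}:x-oauth-basic@github.com/username/projectname
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . /code
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]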
I have this Dockerfile:
FROM python:3.8-slim
WORKDIR /app
COPY . .
RUN apt-get update
RUN apt-get install -y python3 python3-pip python3-venv
RUN pip freeze > requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python3", "main.py"]
Everything works fine until this line:
RUN pip install --no-cache-dir -r requirements.txt
After running docker run --rm -it name bash and then pip install -r requirements.txt inside the container, I found this error:
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting
behaviour with the system package manager. It is recommended to use a virtual environment
instead: https://pip.pypa.io/warnings/venv
Here I found a solution (which didn't work for me) suggesting that this can be resolved just by creating a new user, but that doesn't seem like an optimal solution. How can I fix this?
In this case the problem was the image version. Using this Dockerfile, I was able to fix it:
FROM python:3.9.3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
PS: I don't really know if this was the cause, but this image has the same Python version that I have on my computer, which could have an impact on dependencies.
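As for the warning itself, one common pattern is to create an unprivileged user and install with pip install --user, so pip no longer runs as root. A rough, untested sketch (the user name appuser is arbitrary; it assumes the Debian-based python image):
FROM python:3.9.3
RUN useradd --create-home appuser
WORKDIR /app
COPY requirements.txt .
USER appuser
# as a non-root user, pip installs into ~/.local and the warning disappears
RUN pip install --user -r requirements.txt
# make console scripts installed under ~/.local/bin reachable
ENV PATH="/home/appuser/.local/bin:${PATH}"
COPY . .
CMD ["python3", "main.py"]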
I rewrote a sync Python lib to be async. How do I integrate it into my project?
I did the following:
cloned it from GitHub and rewrote it
built the lib using python3 setup.py bdist_wheel --universal and got a .whl file
How can I integrate it into my project?
Currently I have the following Dockerfile:
FROM python:3.6
MAINTAINER ...
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . $APP_DIR
EXPOSE 8080
CMD python app.py
How do I copy the .whl file into the container and install it using pip3 install {...}.whl?
First, add WORKDIR /app before COPY requirements.txt to specify your app's working directory inside the container. Then, if you have xxx.whl in the same folder as requirements.txt, copy it with COPY xxx.whl /app and install it with RUN pip install xxx.whl,
like this:
FROM python:3.6
MAINTAINER ...
# specify workdir
WORKDIR /app
COPY requirements.txt /app
# copy xxx.whl from host to container
COPY xxx.whl /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# install xxx.whl
RUN pip install xxx.whl
COPY . /app
EXPOSE 8080
CMD python app.py
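Assuming xxx.whl and requirements.txt sit next to the Dockerfile, the build itself is unchanged (the tag myapp here is arbitrary):
docker build -t myapp .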
I have a requirements.txt file containing, amongst others:
Flask-RQ==0.2
-e git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo
When I try to build a Docker container using Docker Compose, it downloads both packages and installs them both, but when I do a pip freeze there is no sign of the -e package. When I try to run the app, it looks as if this package hasn't been installed. Here's the relevant output from the build:
Collecting Flask-RQ==0.2 (from -r requirements.txt (line 3))
Downloading Flask-RQ-0.2.tar.gz
Obtaining repo from git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo (from -r requirements.txt (line 4))
Cloning https://token:x-oauth-basic@github.com/user/repo.git to ./src/repo
And here's my Dockerfile:
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
I find this situation very strange and would appreciate any help.
I ran into a similar issue, and one possible way that the problem can appear is from:
WORKDIR /usr/src/app
being set before pip install. pip will create the src/ directory (where the package is installed) inside of the WORKDIR. Now all of this shouldn't be an issue since your app files, when copied over, should not overwrite the src/ directory.
However, you might be mounting a volume to /usr/src/app. When you do that, you'll overwrite the /usr/src/app/src directory and then your package will not be found.
So one fix is to move WORKDIR after the pip install. So your Dockerfile will look like:
FROM python:2.7
RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt
COPY . /usr/src/app
WORKDIR /usr/src/app
This fixed it for me. Hopefully it'll work for you.
@mikexstudios is correct: this happens because pip stores the package source in /usr/src/app/src, but you're mounting a local directory over top of it, meaning Python can't find the package source.
Rather than changing the position of WORKDIR, I solved it by changing the pip command to:
pip install -r requirements.txt --src /usr/local/src
Either approach should work.
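For completeness, a sketch of the second approach in the context of the original Dockerfile (untested; the only change is where pip keeps the editable-install sources):
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
# keep -e package sources in /usr/local/src, so a volume mounted at
# /usr/src/app cannot hide them
RUN pip install -r requirements.txt --src /usr/local/src
COPY . /usr/src/app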
If you are receiving a similar error when installing a git repo from a requirements file in a Dockerized container, you may have forgotten to install git.
Here is the error I received:
Downloading/unpacking CMRESHandler from
git+git://github.com/zigius/python-elasticsearch-logger.git (from -r
/home/ubuntu/requirements.txt (line 5))
Cloning git://github.com/zigius/python-elasticsearch-logger.git to
/tmp/pip_build_root/CMRESHandler
Cleaning up...
Cannot find command 'git'
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c useradd ubuntu -b /home && echo
"ubuntu ALL = NOPASSWD: ALL" >> /etc/sudoers &&
chown -R ubuntu:ubuntu /home/ubuntu && pip install -r /home/ubuntu/requirements.txt returned a non-zero code: 1
Here is an example Dockerfile that installs git and then installs all requirements:
FROM python:3.5-slim
RUN apt-get update && apt-get install -y --no-install-recommends git
ADD . /code
WORKDIR /code
RUN pip install --upgrade pip setuptools && pip install -r requirements.txt
Now you can use git packages in your requirements file in a Dockerized environment.
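For example, with git available in the image, a requirements line like the one from the log above resolves normally (the egg name CMRESHandler is taken from that log):
git+git://github.com/zigius/python-elasticsearch-logger.git#egg=CMRESHandler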