I want to create a Docker image (Docker version: 20.10.20) that contains the Python libraries from a requirement.txt file listing 50 libraries. How can I proceed without running into root user permission issues? Here is the file:
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
WORKDIR /Destop/DS
# COPY requirement.txt ./
# RUN pip install -r requirement.txt
# it contains only pandas==1.5.1
COPY script2.py ./
CMD ["python3", "./script2.py"]
It failed at the requirement.txt command.
Error: the build takes a lot of time while creating the image, because it asks for root permission.
For me the only problem in your Dockerfile is the line RUN apt install python -y, which errors with Package 'python' has no installation candidate.
That is expected, since python refers to Python 2.x, which is deprecated and no longer present in the default Ubuntu repositories.
Changing your Dockerfile to use Python 3.x worked fine for me.
FROM ubuntu:latest
RUN apt update
RUN apt install python3 python3-pip -y
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip3 install -r requirement.txt
COPY script2.py ./
CMD ["python3", "./script2.py"]
To test, I used this requirement.txt:
pandas==1.5.1
and this script2.py:
import pandas as pd
print(pd.__version__)
With this, building the Docker image and running a container from it executed successfully.
docker build -t myimage .
docker run --rm myimage
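The question also mentions root permissions. If the goal is for the container itself not to run as root, here is a minimal sketch: install everything as root first, then switch users (the appuser name is my choice, not from the question):

FROM ubuntu:latest
RUN apt update && apt install -y python3 python3-pip
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip3 install -r requirement.txt
# Create an unprivileged user and switch to it; root is only needed for the installs above
RUN useradd --create-home appuser
USER appuser
COPY script2.py ./
CMD ["python3", "./script2.py"]

Everything after the USER line, including the container's main process, then runs as appuser instead of root.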
Related
Currently I am creating a virtual environment in the first stage.
I run pip install -r requirements.txt, which installs the executables in the /venv/bin directory.
In the second stage I copy the /venv/bin directory, but on running the Python app I get a module-not-found error, i.e. I need to run pip install -r requirements.txt again to run the app.
The application runs on Python 2.7, and some of the dependencies require a compiler to build. Those dependencies also fail with the Alpine images' compiler and only work with the Ubuntu compiler or the official python:2.7 image (which in turn uses Debian).
Am I missing some command in the second stage that would let me use the copied dependencies instead of installing them again?
FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --upgrade pip
RUN python3 -m venv /venv
COPY ./requirements.txt /project/requirements/
RUN /venv/bin/pip install -r /project/requirements/requirements.txt
COPY . /venv/bin
FROM python:2.7-slim AS release
COPY --from=build /venv /venv
WORKDIR /venv/bin
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
# RUN pip install -r requirements.txt
RUN cp settings.py.sample settings.py
CMD ["/venv/bin/python3", "-m", "main.py"]
I am trying to avoid running pip install -r requirements.txt in the second stage to reduce the image size, which is not happening currently.
Only copying the bin dir isn't enough; for example, packages are installed in lib/pythonX.X/site-packages and headers under include. I'd just copy the whole venv directory. You can also run pip with --no-cache-dir to avoid saving the wheel archives.
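Roughly, the whole multi-stage file could look like the sketch below (untested; I've kept the app code in /app rather than inside the venv, and /venv/bin/python main.py is my guess at the intended entrypoint, since python3 and -m main.py won't work with a 2.7 venv):

FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
# Python 2.7 has no venv module, so use virtualenv
RUN pip install --no-cache-dir virtualenv && virtualenv /venv
COPY requirements.txt /project/requirements/
RUN /venv/bin/pip install --no-cache-dir -r /project/requirements/requirements.txt

FROM python:2.7-slim AS release
# Copy the entire venv: bin, lib/python2.7/site-packages, include
COPY --from=build /venv /venv
WORKDIR /app
COPY . /app
RUN cp settings.py.sample settings.py
# No compiler and no second pip install needed in this stage
CMD ["/venv/bin/python", "main.py"]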
I found a base Firefox standalone image, and I am trying to run a script using Selenium with geckodriver inside a Docker container. I've tried to install the requirements from the Dockerfile, but I get ModuleNotFoundError: No module named 'selenium'.
This is my Dockerfile:
FROM selenium/node-firefox:3.141.59-iron
# Set buffered environment variable
ENV PYTHONUNBUFFERED 1
# Set the working directory to /app
USER root
RUN mkdir /app
WORKDIR /app
EXPOSE 80
# Install packages needed for crontab and selenium
RUN apt-get update && apt-get install -y sudo libpulse0 pulseaudio software-properties-common libappindicator1 fonts-liberation python-pip virtualenv
RUN apt-get install binutils libproj-dev gdal-bin cron nano -y
# RUN virtualenv -p /usr/bin/python3.6 venv36
# RUN . venv36/bin/activate
# Install any needed packages specified in requirements.txt
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
ENTRYPOINT ["/bin/bash"]
I am expecting my script to run. I'm not sure why it is using Python 2.7 in the interactive shell; I thought the Selenium Docker image came with 3.6 and Selenium already installed.
Your container comes with both python (Python 2) and python3; python just defaults to the 2.7 instance. Note that something like RUN alias python=python3 won't stick: an alias defined in one RUN command does not persist into later layers or into the running container. A symlink does, so you can point python at python3 with
RUN ln -sf /usr/bin/python3 /usr/bin/python
in your Dockerfile.
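Alternatively, leave python alone and be explicit about Python 3 everywhere. A sketch of the relevant lines (assuming python3 and python3-pip are available in this Ubuntu-based image):

RUN apt-get update && apt-get install -y python3 python3-pip
ADD requirements.txt /app/
# pip3 installs selenium for the Python 3 interpreter, not the default 2.7 one
RUN pip3 install -r requirements.txt
CMD ["python3", "your_script.py"]

Here your_script.py stands in for whatever script the container should run.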
I am having lots of problems creating a Docker container using python:3.6-alpine for Plotly. Plotly also uses Pandas and NumPy. When I build my Dockerfile below, the RUN venv/bin/pip install -r requirements.txt step fails. Does anyone have recommendations for this; am I missing requirements?
FROM python:3.6-alpine
RUN adduser -D visualdata
RUN pip install --upgrade pip
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn
#RUN venv/bin/pip install install python3-pymysql
COPY app app
COPY migrations migrations
COPY visualdata.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP visualdata.py
RUN chown -R visualdata:visualdata ./
USER visualdata
EXPOSE 8000
ENTRYPOINT ["./boot.sh"]
If you look at the official repository for the Python Docker image, there is a Dockerfile example that illustrates the pip step:
RUN pip install --no-cache-dir -r requirements.txt
You should be able to use pip directly instead of venv/bin/pip. You do not really need a virtualenv in a Docker container if you are only running one application inside it; the container already provides its own isolated environment.
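Putting that together, a sketch of the Dockerfile without the venv (untested; note that on Alpine, Pandas and NumPy often have no prebuilt wheels for musl and compile from source, so a compiler may be needed):

FROM python:3.6-alpine
# Build tools for packages that compile C extensions on Alpine (musl libc)
RUN apk add --no-cache build-base
RUN adduser -D visualdata
WORKDIR /home/visualdata
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt gunicorn
# ...the rest of the original Dockerfile (COPY app, USER visualdata, etc.) is unchanged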
Hello, I'm having difficulty writing the Dockerfile to run a Python spider script that is inside my project directory.
The script file is in scrapy_estate/tutorial/tutorial/spiders/. I thought of using the COPY command, but CMD ["python3", "real_estate_spider.py"] still can't find the real_estate_spider.py file.
Here is my Dockerfile
FROM ubuntu:18.04
FROM python:3.6-onbuild
RUN apt-get update && apt-get upgrade -y && apt-get install python-pip -y
RUN pip install --upgrade pip
RUN pip install scrapy
WORKDIR /usr/local/bin
COPY scrapy_estate/tutorial/tutorial/spiders/real_estate_spider.py
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 80
CMD ["python3","real_estate_spider.py"]
Hope someone can help me :)
It looks like you are forgetting the ./ destination in the COPY line for your script file.
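That is, COPY takes a source and a destination; with WORKDIR /usr/local/bin already set, the fixed line would be:

COPY scrapy_estate/tutorial/tutorial/spiders/real_estate_spider.py ./

which puts real_estate_spider.py where CMD ["python3", "real_estate_spider.py"] expects it.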
We are trying to create a Docker container for a Python application. The Dockerfile installs the dependencies using pip install and looks like this:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y git wget python3-pip
RUN mkdir /app
COPY . /app
RUN pip3 install asn1crypto
RUN pip3 install cffi==1.10.0
RUN pip3 install click==6.7
RUN pip3 install conda==4.3.16
RUN pip3 install Flask==0.12.2
RUN pip3 install Flask-SSLify==0.1.5
RUN pip3 install flask-restful==0.3.6
WORKDIR /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
The Docker image gets created successfully; the issue is the rebuild time.
If nothing is changed in the Dockerfile above, the rebuild is fast because every layer comes from the cache.
If a line in the Dockerfile after the pip installs is changed, the Docker daemon still runs all the pip install commands, downloading all the packages even though it does not install them.
Is there a way to optimize the rebuild?
Thx
For the moment, below is what I would do with the Dockerfile for optimization:
FROM ubuntu:latest
RUN apt-get update -y && apt-get install -y \
git \
wget \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
Reduce the number of layers by combining multiple commands into a single one, especially when they are interdependent. This helps reduce the image size.
Always put the COPY of the source code near the end, since a routine source code change invalidates the caching of every later layer.
Use a single requirements.txt file for installation through pip, and keep its installation a separate, earlier step, so that a normal source code change does not force package installation on every build (see the BuildKit sketch below).
Always clean up intermediate artifacts that are not required in the final image.
Ref- https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
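One more option on recent Docker versions (BuildKit): a cache mount keeps pip's download cache across rebuilds even when the install layer itself is invalidated. A hedged sketch, assuming BuildKit is enabled (DOCKER_BUILDKIT=1):

# syntax=docker/dockerfile:1
FROM ubuntu:latest
RUN apt-get update -y && apt-get install -y git wget python3-pip && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY ./requirements.txt .
# The mounted cache persists between builds, so packages are not re-downloaded
RUN --mount=type=cache,target=/root/.cache/pip pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]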