I'm trying to install several R packages in a Python Docker image. I have this small Dockerfile:
# Python 3.7.5
FROM python:3.7.5
ENV PYTHONUNBUFFERED 1
# Install R 3.6
# The CRAN repo is unsigned until its key is imported, so install dirmngr and
# import the key before adding the repo and running the final apt-get update
RUN apt-get update && apt-get install -y dirmngr
RUN apt-key adv --keyserver keys.gnupg.net --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF'
RUN echo 'deb http://cran.rstudio.com/bin/linux/debian buster-cran35/' >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y r-base
# Install R dependencies
RUN R -e "install.packages('BiocManager', dependencies=TRUE, repos='http://cran.rstudio.com/')"
It doesn't throw any error. But when I run docker container exec -it <my container> bash and do:
Rscript -e 'installed.packages()' | grep BiocManager
There aren't any results. I don't know if this applies, but during the build it prints:
The downloaded source packages are in
'/tmp/Rtmp7jBLWQ/downloaded_packages'
Maybe the problem is that it's installing the packages into a temp folder. Is there any way to install R packages without basing the image on the r-base image and using install2.r?
What a shame... I had a docker-compose.yml with a build: . clause and I forgot to run docker-compose build. That's why the changes didn't apply. (The "downloaded source packages are in /tmp/..." message is normal, by the way: that temp folder only holds the downloaded tarballs; the installed packages themselves go into R's library path.)
However, I ended up using the r-base image to install the dependencies in an easier way. My final Dockerfile is:
FROM r-base:3.6.2
# Install R dependencies
RUN install2.r --error BiocManager
# Install Python 3.7
RUN apt-get update && apt-get install -y python3.7 python3-pip
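The lesson from my mistake applies either way: when the image is built through docker-compose, rebuild it explicitly before testing changes, e.g.:
docker-compose build
# or rebuild and start in one step:
docker-compose up --build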
Related
I want to create a Docker image (Docker version: 20.10.20) that contains the Python libraries from a requirement.txt file listing 50 libraries. How can I proceed without running into root user permission issues? Here is the file:
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
WORKDIR /Destop/DS
# COPY requirement.txt ./
# RUN pip install -r requirement.txt
# it contains only pandas==1.5.1
COPY script2.py ./
CMD ["python3", "./script2.py"]
It failed at the requirement.txt command.
*error: it takes a lot of time while creating the image, because it asks for root permission.
For me the only problem in your Dockerfile is the line RUN apt install python -y. It errors with Package 'python' has no installation candidate.
That is expected, since python refers to Python 2.x, which is deprecated and no longer present in the default Ubuntu repositories.
Changing your Dockerfile to use Python 3.x worked fine for me:
FROM ubuntu:latest
RUN apt update
RUN apt install python3 python3-pip -y
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip3 install -r requirement.txt
COPY script2.py ./
CMD ["python3", "./script2.py"]
To test I used requirement.txt
pandas==1.5.1
and script2.py
import pandas as pd
print(pd.__version__)
With this, building the Docker image and running a container from it executed successfully:
docker build -t myimage .
docker run --rm myimage
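Regarding the root-permission part of the question: Docker builds run as root inside the build container, which is normally what you want for apt and pip at build time. If you also want the container process itself to run as a non-root user, a minimal sketch (the appuser name is just an illustration) would be:
FROM ubuntu:latest
RUN apt update && apt install python3 python3-pip -y
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip3 install -r requirement.txt
# create an unprivileged user and switch to it for runtime
RUN useradd --create-home appuser
USER appuser
COPY script2.py ./
CMD ["python3", "./script2.py"]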
I have a Python script that uses DigitalOcean tools (doctl and kubectl) that I want to containerize. This means my container will need python, doctl, and kubectl installed. The trouble is, I can't figure out how to install both Python and the DigitalOcean tools in the Dockerfile.
I can install python using the base image "python:3" and I can also install the DigitalOcean tools using the base image "alpine/doctl". However, the rule is you can only use one base image in a dockerfile.
So I can include the python base image and install the DigitalOcean tools another way:
FROM python:3
RUN <somehow install doctl and kubectl>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Or I can include the alpine/doctl base image and install python3 another way.
FROM alpine/doctl
RUN <somehow install python>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Unfortunately, I'm not sure how I would do this. Any help in how I can get all these tools installed would be great!
just add this, along with anything else you want to apt-get install:
RUN apt-get update && apt-get install -y \
    python3.6 \
    python3-pip
in Alpine it should be something like:
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && \
    python3 -m ensurepip && \
    pip3 install --no-cache --upgrade pip setuptools
This Dockerfile worked for me:
FROM alpine/doctl
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
This answer comes from https://stackoverflow.com/a/62555259/7479816 (I don't have enough street cred to comment).
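The question also asks for kubectl, which neither snippet above installs. A sketch for Alpine, following the install method documented on kubernetes.io (kubectl is a static binary, so it runs on musl; verify the URL against the current docs):
RUN apk add --no-cache curl \
    && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
    && install -m 0755 kubectl /usr/local/bin/kubectl \
    && rm kubectl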
You can try a multi-stage build as shown below.
Also check your COPY statement: you need to give the destination where you want the script.py file to be copied as the second parameter; "." copies it into the current working directory (the root directory here, since no WORKDIR is set).
FROM alpine/doctl AS doctl
FROM python:3.6-slim-buster
ENV PYTHONUNBUFFERED 1
# copy the doctl binary out of the first stage; without this the first FROM contributes nothing
# (the source path is an assumption; verify where the alpine/doctl image stores the binary)
COPY --from=doctl /usr/local/bin/doctl /usr/local/bin/doctl
RUN pip install firebase-admin
COPY script.py .
CMD ["python", "script.py"]
I found a base Firefox standalone image, and I am trying to run a script using Selenium with geckodriver inside a Docker container. I've tried to install the requirements from the Dockerfile, but I get ModuleNotFoundError: No module named 'selenium'.
This is my Dockerfile:
FROM selenium/node-firefox:3.141.59-iron
# Set buffered environment variable
ENV PYTHONUNBUFFERED 1
# Set the working directory to /app
USER root
RUN mkdir /app
WORKDIR /app
EXPOSE 80
# Install packages needed for crontab and selenium
RUN apt-get update && apt-get install -y sudo libpulse0 pulseaudio software-properties-common libappindicator1 fonts-liberation python-pip virtualenv
RUN apt-get install binutils libproj-dev gdal-bin cron nano -y
# RUN virtualenv -p /usr/bin/python3.6 venv36
# RUN . venv36/bin/activate
# Install any needed packages specified in requirements.txt
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
ENTRYPOINT ["/bin/bash"]
I am expecting my script to run. I'm not sure why it is using Python 2.7 in the interactive shell; I thought the Selenium Docker image came with 3.6 and Selenium already installed.
Your container comes with both python (Python 2) and python3; python just defaults to the 2.7 instance. A shell alias (RUN alias python=python3) won't survive past the single RUN step it's defined in, so change the default with a symlink instead:
RUN ln -sf /usr/bin/python3 /usr/local/bin/python
in your Dockerfile.
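Also note that your Dockerfile installs python-pip, which is the Python 2 pip, so pip install -r requirements.txt puts selenium into the Python 2 site-packages. A sketch of the fix, switching those lines to the Python 3 tooling:
RUN apt-get update && apt-get install -y python3-pip
ADD requirements.txt /app/
RUN pip3 install -r requirements.txt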
We are trying to create a Docker container for a Python application. The Dockerfile installs dependencies using "pip install". The Dockerfile looks like:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y git wget python3-pip
RUN mkdir /app
COPY . /app
RUN pip3 install asn1crypto
RUN pip3 install cffi==1.10.0
RUN pip3 install click==6.7
RUN pip3 install conda==4.3.16
RUN pip3 install Flask==0.12.2
RUN pip3 install Flask-SSLify==0.1.5
RUN pip3 install flask-restful==0.3.6
WORKDIR /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
The Docker image gets created successfully; the issue is the rebuild time.
If nothing is changed, the rebuild is quick because every layer comes from the cache. But as soon as anything in the source tree changes, the Docker daemon re-runs all of the pip install commands, downloading every package again (though not reinstalling them), since the COPY . /app step above them is invalidated and that invalidates every later layer.
Is there a way to optimize the rebuild?
Thx
Below is what I would do with the Dockerfile to optimize it:
FROM ubuntu:latest
RUN apt-get update -y && apt-get install -y \
git \
wget \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT ["python3"]
CMD [ "X.py", "/app/Y.yml" ]
Reduce the number of layers by combining commands into a single RUN, especially when they are interdependent. This also helps reduce the image size.
Put the COPY of your source code as late as possible, since a routine source code change invalidates the cache of every layer that follows it.
Use a single requirements.txt file for installation through pip, in its own step, so that a normal source code change doesn't force package installation on every build.
Always clean up intermediate artifacts that are not required in the final image.
Ref- https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
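If your Docker version supports BuildKit, a cache mount can additionally keep pip's download cache between builds, so even when the requirements layer is rebuilt the packages are not downloaded again. A sketch (the syntax directive must be the very first line of the Dockerfile):
# syntax=docker/dockerfile:1
RUN --mount=type=cache,target=/root/.cache/pip \
    pip3 install -r requirements.txt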
I've found this tutorial, which walks you through creating a base bare-metal CentOS image. However, I want to install some additional yum packages, and download and build Python 2.7.
I had this in a dockerfile and already working like:
# Set the base image to CentOS
FROM centos:7
# File Author / Maintainer
MAINTAINER Sam Mohamed
# Update installed packages
RUN yum -y update
# Build dependencies (gcc-c++ replaces Debian's g++/build-essential; wget is used below)
RUN yum install -y zlib-devel openssl-devel sqlite-devel bzip2-devel xz-devel gcc gcc-c++ make wget
# Install Python 2.7.9
RUN curl -o /root/Python-2.7.9.tar.xz https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tar.xz
RUN tar -xf /root/Python-2.7.9.tar.xz -C /root
RUN cd /root/Python-2.7.9 && ./configure --prefix=/usr/local && make && make altinstall
# Copy the application folder inside the container
COPY . /opt/iws_project
# Download Setuptools and install pip and virtualenv
RUN wget https://bootstrap.pypa.io/ez_setup.py -O - | /usr/local/bin/python2.7
RUN /usr/local/bin/easy_install-2.7 pip
RUN /usr/local/bin/pip2.7 install virtualenv
# Create virtualenv and install requirements:
RUN /usr/local/bin/virtualenv /opt/iws_project/venv && . /opt/iws_project/venv/bin/activate && pip install -r /opt/iws_project/requirements.txt
How can I convert the above into a base image?
You are probably better off building the given Dockerfile and using the resulting image as the base for future images. This is much easier to maintain and doesn't really cost anything in terms of resource use.
But if you really want to create a single-layer "base image", the steps are as follows:
Install everything you want into some directory (docker-centos-65/ in the linked tutorial).
You can modify the febootstrap command from the tutorial you linked to install additional yum packages by specifying more -i flags.
You can perform any other custom installs (e.g. Python) manually, just make sure everything ends up in the same root directory
Create a tar archive of the directory where everything is installed, and pipe this to the docker import command:
tar c -C docker-centos-65/ . | docker import - my-base-image
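Either way, downstream Dockerfiles can then build on the result, e.g. (my-base-image is the tag from the docker import command above):
FROM my-base-image
# layer project-specific setup on top of the shared base
COPY . /opt/iws_project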