Unable to install R 4.1.3 in fastapi Docker image - python

I'm using a Dockerfile that takes tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim-2021-06-09 as the base image, installs the required Linux packages, and also installs r-recommended and r-base. The Dockerfile below used to work fine, but when I tried to update to tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim-2023-01-09 as the base image, I was unable to install r-recommended and r-base at version 4.1.2-1~bustercran.0.
Dockerfile:
# Download IspR from IspR project pipeline and extract the folder and rename it as r-scripts. Place the r-scripts directory in backend root directory.
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim-2023-01-09
COPY key_gnupg.gpg /app/key_gnupg.gpg
COPY packages.txt /app/packages.txt
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until
RUN apt-get update && apt-get install -y gnupg2=2.2.27-2+deb11u2 && \
echo "deb http://cloud.r-project.org/bin/linux/debian buster-cran40/" >> /etc/apt/sources.list.d/cran.list && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B8F25A8A73EACF41 && \
apt-get update
RUN apt-get install --no-install-recommends -y $(cat /app/packages.txt) && \
rm -rf /var/lib/apt/lists/* && \
apt-get purge --auto-remove && \
apt-get clean
COPY . /app
WORKDIR /app/r-scripts
RUN R -e "install.packages('remotes')"
RUN renv_version=`cat renv.lock | grep -A3 "renv" | grep -e "Version" | cut -d ':' -f2 | sed "s/\"//g" | sed "s/\,//g"|awk '{$1=$1};1'` && \
R -e "remotes::install_github('rstudio/renv#${renv_version}')" && \
rm -rf /app
CMD ["/bin/bash"]
packages.txt
git=1:2.20.1-2+deb10u3
pkg-config=0.29-6
liblapack-dev=3.8.0-2
gfortran=4:8.3.0-1
libxml2=2.9.4+dfsg1-7+deb10u3
libxml2-dev=2.9.4+dfsg1-7+deb10u3
libssl-dev=1.1.1n-0+deb10u1
libcurl4-openssl-dev=7.64.0-4+deb10u2
libnlopt-dev=2.4.2+dfsg-8+b1
libpcre2-8-0=10.32-5
build-essential=12.6
r-recommended=4.1.2-1~bustercran.0
r-base=4.1.2-1~bustercran.0
curl=7.64.0-4+deb10u2
postgresql=11+200+deb10u4
libpq-dev=11.14-0+deb10u1
libblas-dev=3.8.0-2
libgomp1=8.3.0-6
zlib1g-dev=1:1.2.11.dfsg-1
zlib1g=1:1.2.11.dfsg-1
Error message:
E: Version '4.1.2-1~bustercran.0' for 'r-base-core' was not found
How can I install version 4.1.2 of r-base using this Dockerfile?
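One thing worth checking: the 2023 python3.8-slim base images are built on Debian bullseye rather than buster, so the buster-cran40 repository no longer matches the distribution and the pinned ~bustercran.0 versions are not offered there. Below is a minimal, untested sketch of the change: point apt at the bullseye CRAN repository (keeping the existing apt-key step) and inspect which r-base builds that repo actually provides before pinning them in packages.txt; the version suffix would then likely look like ~bullseyecran.0.
# Sketch only: switch the CRAN apt source from buster to bullseye and list the
# versions the repo really offers before editing packages.txt.
RUN echo "deb http://cloud.r-project.org/bin/linux/debian bullseye-cran40/" > /etc/apt/sources.list.d/cran.list && \
apt-get update && \
apt-cache policy r-base r-recommended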

Related

How to create Dockerfile with R + Anaconda3 + non-root User

I need to create a Dockerfile that emulates a normal workspace.
We have a virtual machine where we train models.
We use R and Python3.
I want to automate some of the processes without changing the codebase,
e.g. ~ must point to /home/<some user>.
The biggest problem is Anaconda3 in Docker, because every RUN is a standalone login.
Basis for my answer: https://github.com/xychelsea/anaconda3-docker/blob/main/Dockerfile
I've created my own mini R package installer:
install_r_packages.sh
#!/bin/bash
# Install pinned R package versions listed in r-requirements.txt (one package=version per line).
input="r-requirements.txt"
Rscript -e "install.packages('remotes')"
IFS='='
while IFS= read -r line; do
    # Split "package=version" (the global IFS='=' applies to this read).
    read -r package version <<<"$line"
    package=$(echo "$package" | sed 's/ *$//g')
    version=$(echo "$version" | sed 's/ *$//g')
    # Skip comment and blank lines.
    if ! [[ ($package =~ ^#.*) || (-z $package) ]]; then
        Rscript -e "remotes::install_version('$package', version = '$version')"
    fi
done <"$input"
r-requirements.txt
# packages for rmarkdown
htmltools=0.5.2
jsonlite=1.7.2
...
rmarkdown=2.11
# more packages
...
Dockerfile
FROM debian:bullseye
RUN apt-get update
# install R
RUN apt-get install -y r-base r-base-dev libatlas3-base r-recommended libssl-dev openssl \
libcurl4-openssl-dev libfontconfig1-dev libxml2-dev xml2 pandoc lua5.3 clang
ENV ARROW_S3=ON \
LIBARROW_MINIMAL=false \
LIBARROW_BINARY=true \
RSTUDIO_PANDOC=/usr/lib/rstudio-server/bin/pandoc \
TZ=Etc/UTC
COPY r-requirements.txt .
COPY scripts/install_r_packages.sh scripts/install_r_packages.sh
RUN bash scripts/install_r_packages.sh
# create user
ENV REPORT_USER="reporter"
ENV PROJECT_HOME=/home/${REPORT_USER}/<project>
RUN useradd -ms /bin/bash ${REPORT_USER} \
&& mkdir /data \
&& mkdir /opt/mlflow \
&& chown -R ${REPORT_USER}:${REPORT_USER} /data \
&& chown -R ${REPORT_USER}:${REPORT_USER} /opt/mlflow
# copy project files
WORKDIR ${PROJECT_HOME}
COPY src src
... bla bla bla ...
COPY requirements.txt .
RUN chown -R ${REPORT_USER}:${REPORT_USER} ${PROJECT_HOME}
# Install python Anaconda env
ENV ANACONDA_PATH="/opt/anaconda3"
ENV PATH=${ANACONDA_PATH}/bin:${PATH}
ENV ANACONDA_INSTALLER=Anaconda3-2021.11-Linux-x86_64.sh
RUN mkdir ${ANACONDA_PATH} \
&& chown -R ${REPORT_USER}:${REPORT_USER} ${ANACONDA_PATH}
RUN apt-get install -y wget
USER ${REPORT_USER}
RUN wget https://repo.anaconda.com/archive/${ANACONDA_INSTALLER} \
&& /bin/bash ${ANACONDA_INSTALLER} -b -u -p ${ANACONDA_PATH} \
&& chown -R ${REPORT_USER} ${ANACONDA_PATH} \
&& rm -rvf ~/${ANACONDA_INSTALLER} \
&& echo ". ${ANACONDA_PATH}/etc/profile.d/conda.sh" >> ~/.bashrc \
&& echo "conda activate base" >> ~/.bashrc
RUN pip3 install --upgrade pip \
&& pip3 install -r requirements.txt \
&& pip3 install awscli
# run training and report
ENV PYTHONPATH=/home/${REPORT_USER}/<project> \
MLFLOW_TRACKING_URI=... \
MLFLOW_EXPERIMENT_NAME=...
CMD dvc config core.no_scm true \
&& dvc repro
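For completeness, here is one possible build-and-run invocation for the image above; the image tag and bind mount are placeholders of mine, not part of the original answer:
# Hypothetical usage: build the workspace image and open a shell as the non-root user
docker build -t r-anaconda-workspace .
docker run --rm -it -v "$(pwd)/data:/data" r-anaconda-workspace bash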

Azure functions using python docker file

I have followed the tutorial for Azure Functions using Python.
Everything went smoothly.
For the next step I need to add a C-compiled dependency.
I just added the C compiler plus the dependency build steps.
I have edited the Dockerfile and it now looks like this:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
FROM julia:1.3
RUN apt-get update && apt-get install -y gcc g++ && rm -rf /var/lib/apt/lists/*
FROM python:3.7
RUN pip install numpy
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
When I build this Dockerfile it looks good,
but when I run it, it just opens up a GCC prompt.
What am I doing wrong?
Thanks
I found an issue with your multi-stage FROM statements. Also, you needed to add apt-get install make.
The following works:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
# Adding "apt-get install make" here
RUN apt-get update && apt-get install make && apt-get install -y gcc g++ && rm -rf /var/lib/apt/lists/*
RUN pip install numpy
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
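For context on the fix: each FROM line starts a new build stage, and only the last stage ends up in the final image, so in the original Dockerfile the Azure Functions base and everything installed before the final FROM were simply discarded. Here is a small illustrative sketch of how multi-stage builds are normally written; the stage name and copied path are made up for the example:
# builder stage: used at build time, discarded from the final image
FROM python:3.7 AS builder
RUN pip install numpy
# ... build steps that produce /build/output ...

# final stage: only this one becomes the image that runs
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
COPY --from=builder /build/output /home/site/wwwroot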

The command returned a non-zero code: 2 (docker)

Build failure with a non-zero code: 2
The Dockerfile is provided below:
FROM ubuntu:bionic
RUN \
apt-get update \
&& apt-get install -y -q curl gnupg \
&& curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add - \
&& echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list \
&& apt-get update
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y --allow-unauthenticated -q \
python3-pip \
python3-sawtooth-sdk \
python3-sawtooth-rest-api \
python3-sawtooth-cli \
cron-apt \
curl
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
&& apt-get install -y nodejs
RUN pip3 install \
pylint \
pycodestyle \
grpcio-tools==1.29.0 \
nose2 \
bcrypt \
pycrypto \
rethinkdb \
sanic \
swagger-ui-py \
itsdangerous
WORKDIR /project/sawtooth-marketplace
COPY sawbuck_app/package.json /project/sawtooth-marketplace/sawbuck_app/
RUN cd sawbuck_app/ && npm install
ENV PATH $PATH:/project/sawtooth-marketplace/bin
# Note that the context must be set to the project's root directory
COPY . .
RUN market-protogen
The following is the log recorded from the server. I don't know why the build is failing; can anybody guide me, please?
Service 'market-shell' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y -q curl gnupg
&& curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add -
&& echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe'>> /etc/apt/sources.list
&& apt-get update' returned a non-zero code: 2
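The step that exits with a non-zero code is the curl | apt-key add pipeline. One likely cause, offered as a guess: the sks-keyservers.net pool has been retired, so the key lookup returns no OpenPGP data and apt-key add fails. A sketch of an alternative, assuming the same key ID is also published on keyserver.ubuntu.com:
RUN apt-get update \
&& apt-get install -y -q curl gnupg \
&& apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8AA7AF1F1091A5FD \
&& echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list \
&& apt-get update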

Syslog-ng with Python 3.7 in Docker

I am using the following Dockerfile to create an image with Python 3.7.4 and Syslog-ng:
FROM python:3.7.4
RUN apt-get update -qq && apt-get install -y \
wget \
gnupg2
RUN wget -qO - https://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/Debian_9.0/Release.key | apt-key add -
RUN echo 'deb http://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/Debian_9.0 ./' | tee --append /etc/apt/sources.list.d/syslog-ng-obs.list
RUN apt-get update -qq && apt-get install -y \
syslog-ng
COPY ./out.log /out.log
COPY ./syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
RUN find /usr/lib/ -name 'libjvm.so*' | xargs dirname | tee --append /etc/ld.so.conf.d/openjdk-libjvm.conf
RUN ldconfig
EXPOSE 514/udp
EXPOSE 601/tcp
EXPOSE 6514/tcp
ENTRYPOINT ["/usr/sbin/syslog-ng", "-F"]
However, I want to use Python 3.7.4 in my syslog-ng.conf, but syslog-ng is using Python 2.7.
How can I change to Python 3?
Edit: Solution by MrAnno
Compile and configure with python3:
RUN cd /syslog && \
./configure --with-python=3 --enable-ssl --enable-systemd --enable-debug && \
make && make install
RUN ldconfig
Currently (v3.22.1), all syslog-ng packages in the home:/laszlo_budai:/syslog-ng repository are compiled with Python 2. That can't be changed; you have to recompile syslog-ng from source with the --with-python=3 configure flag.
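Note that the configure snippet above assumes the syslog-ng sources have already been unpacked into /syslog. A rough sketch of the preceding steps; the release URL, version, and build-dependency list here are illustrative assumptions, not taken from the answer:
# Illustrative only: fetch and unpack a syslog-ng source release into /syslog
RUN apt-get update && apt-get install -y build-essential wget && \
wget -qO /tmp/syslog-ng.tar.gz https://github.com/syslog-ng/syslog-ng/releases/download/syslog-ng-3.22.1/syslog-ng-3.22.1.tar.gz && \
mkdir /syslog && \
tar -xzf /tmp/syslog-ng.tar.gz -C /syslog --strip-components=1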

Create Docker container with Node v4.4.7 and Python3

I'm trying to create a Docker image that has Python3 and Node v4.4.7 so that I can use it as a container for my project, which needs both Python and that version of Node.
# Pull base image.
FROM python:3-onbuild
CMD [ "python", "./hello.py" ]
# Install Node.js
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
When I first tried it, it complained about not having a Python script to run, so I added a basic Python file, hello.py,
that just has this:
print "Hello, Python!"
Then it complained about not having a requirements.txt file, so I added an empty requirements.txt.
Now when I run docker build -t isaacweathersnet/sampledockerimage . it fails during the Node install with
node-v4.4.0/benchmark/arrays/zero-int.js
File "./configure", line 446
'''
^
SyntaxError: Missing parentheses in call to 'print'
The command '/bin/sh -c cd /tmp && wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && tar xvzf node-v4.4.7.tar.gz && rm -f node-v4.4.7.tar.gz && cd node-v* && ./configure && CXX="g++ -Wno-unused-local-typedefs" make && CXX="g++ -Wno-unused-local-typedefs" make install && cd /tmp && rm -rf /tmp/node-v* && npm install -g npm && print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc' returned a non-zero code: 1
I found a solution on GitHub that had Python and Node. No luck with Python 3+, but it worked well with 2.7:
https://github.com/nsdont/python-node/blob/master/Dockerfile
FROM python:2.7
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
echo -e '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
There are nodejs-python and python-nodejs (which is built on top of nodejs-python). They are worth a look.
python-nodejs provides Node 10.x, npm 6.x, yarn stable, Python latest, pip latest and pipenv latest. The versions should be adjustable to your needs. Use the Dockerfile as a basis and adjust the RUN section
RUN \
echo "deb https://deb.nodesource.com/node_10.x stretch main" > /etc/apt/sources.list.d/nodesource.list && \
wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
wget -qO- https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
apt-get update && \
apt-get install -yqq nodejs yarn && \
pip install -U pip && pip install pipenv && \
npm i -g npm@^6 && \
rm -rf /var/lib/apt/lists/*
to the Node version you need. The yarn part (yarn is a dependency-management alternative to npm) can be removed if you don't need it.
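As a usage sketch, a project Dockerfile can start from such a combined image and only add its own dependencies. The image name below (nikolaik/python-nodejs) is the Docker Hub build of that repository; treat the name and tag as assumptions and verify them before use:
# Hypothetical usage of a combined Python + Node base image
FROM nikolaik/python-nodejs:latest
WORKDIR /app
COPY requirements.txt package.json ./
RUN pip install -r requirements.txt && npm install
COPY . .
CMD ["bash"]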
