Installing Python 3.11.1 on a docker container - python

I want to use debian:bullseye as a base image and then install a specific Python version - i.e. 3.11.1. At the moment I am just learning docker and linux.
From what I understand I can either:
Download and compile sources
Install binaries (using apt-get)
Use a Python base image
I have come across countless questions on here and articles online. Do I use deadsnakes? What version do I need? Are there any official python distributions (who is deadsnakes anyway)?
But ultimately I want to know the best means of getting Python on there. I don't want to use a Python base image - I am curious about the steps involved. Compiling sources - I am far from having that level of know-how - is one for another day.
Currently I am rolling with the following:
FROM debian:bullseye
RUN apt update && apt upgrade -y
RUN apt install software-properties-common -y
RUN add-apt-repository "ppa:deadsnakes/ppa"
RUN apt install python3.11
This fails with:
#8 1.546 E: Unable to locate package python3.11
#8 1.546 E: Couldn't find any package by glob 'python3.11'
Ultimately - it's not about the error - it's just about finding a good way of getting a specific Python version onto my container.

If you want to install Python 3.11 on Debian Bullseye, you have to compile it from source, following these steps (inside the Dockerfile):
sudo apt update
sudo apt install -y build-essential zlib1g-dev libssl-dev libffi-dev software-properties-common wget
wget https://www.python.org/ftp/python/3.11.1/Python-3.11.1.tar.xz
sudo tar -xf Python-3.11.1.tar.xz
cd Python-3.11.1
sudo ./configure --enable-optimizations
sudo make altinstall
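Assuming the build succeeds, make altinstall puts the new interpreter into /usr/local/bin without replacing the system python3, so a quick sanity check would be:
python3.11 --version        # should report Python 3.11.1
python3.11 -m pip --version # altinstall also installs pip via ensurepip by default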
Another option (the easiest) would be to use the official Python Docker image, in your case:
FROM python:3.11-bullseye
All the available versions are listed on Docker Hub.
Another option that could be interesting in your case is python:3.11-slim-bullseye, an image that does not include the common packages of the default tag and contains only the minimal packages needed to run Python.
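For reference, the whole Dockerfile can then shrink to something like this (the requests install is only a placeholder for your own dependencies):
FROM python:3.11-bullseye
RUN pip install --no-cache-dir requests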

Based on @tomasborella's answer, to do this in Docker:
Dockerfile
FROM debian:bullseye
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install build-essential \
zlib1g-dev \
libncurses5-dev \
libgdbm-dev \
libnss3-dev \
libssl-dev \
libreadline-dev \
libffi-dev \
libsqlite3-dev \
libbz2-dev \
wget \
&& export DEBIAN_FRONTEND=noninteractive \
&& apt-get purge -y imagemagick imagemagick-6-common
RUN cd /usr/src \
&& wget https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tgz \
&& tar -xzf Python-3.11.0.tgz \
&& cd Python-3.11.0 \
&& ./configure --enable-optimizations \
&& make altinstall
RUN update-alternatives --install /usr/bin/python python /usr/local/bin/python3.11 1
update-alternatives updates the symlinks so that you can run python instead of typing python3.11 every time.
It takes a while to compile those sources!
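To verify the result, a build and a couple of throwaway runs would look roughly like this (the image tag debian-py311 is arbitrary):
docker build -t debian-py311 .
docker run --rm debian-py311 python --version       # Python 3.11.0, via the update-alternatives link
docker run --rm debian-py311 python3.11 -m pip --version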

Related

Python Version in Docker

I am using Windows on my local machine and I use Spyder 5.1.5 in Anaconda. Within Spyder, the Python version is 3.9.7 and the IPython version is 7.29.0.
When I ran the code below on my local machine, I never ran into the following problem.
Problem:
I installed the same version of Python in Docker (Python 3.9.7 from here).
This is what the dataframe df_ts looks like:
                                0
event123  2019-04-01 09:30:00.635
event000  2019-04-01 09:32:56.417

df_ts.dtypes
0    datetime64[ns]
dtype: object
When I tried to run the line below within Docker
df_ts.idxmin(axis=0).values[0]
I got the error below. I expect it to return the index of the minimum here. Note that I never get any error if I run it on my local machine rather than in Docker.
I am starting to wonder if the Python 3.9.7 I installed in Docker is the same as the Python 3.9.7 in Spyder.
TypeError: reduction operation 'argmin' not allowed for this dtype
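In case it helps, the frame can be rebuilt from the output above like this (column name 0 and the two timestamps are copied from the printout; the TypeError only shows up on some pandas versions):
import pandas as pd

df_ts = pd.DataFrame(
    {0: pd.to_datetime(["2019-04-01 09:30:00.635", "2019-04-01 09:32:56.417"])},
    index=["event123", "event000"],
)
print(df_ts.dtypes)                    # 0    datetime64[ns]
print(df_ts.idxmin(axis=0).values[0])  # expected 'event123'; raises the TypeError above in the container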
This is what my Dockerfile looks like:
FROM centos:latest
RUN dnf --disablerepo '*' --enablerepo=extras swap centos-linux-repos centos-stream-repos -y && \
dnf distro-sync -y
RUN yum -y install epel-release && \
yum -y update && \
yum groupinstall "Development Tools" -y && \
yum install openssl-devel libffi-devel bzip2-devel -y
RUN yum install wget -y && \
wget https://www.python.org/ftp/python/3.9.7/Python-3.9.7.tgz && \
tar xvf Python-3.9.7.tgz && \
cd Python-3.9*/ && \
./configure --enable-optimizations && \
make altinstall
RUN ln -s /usr/local/bin/python3.9 /usr/local/bin/python && \
ln -s /usr/local/bin/pip3.9 /usr/local/bin/pip3
ARG USER=centos
ARG V_ENV=boto3venv
ARG VOLUME=/home/${USER}/app-src

How to install PIP & PYMSSQL in Docker

I have a Python program which is to be executed in Azure Kubernetes.
Below is my Dockerfile - I have Python installed:
#Ubuntu Base image with openjdk8 with TomEE
FROM demo.azurecr.io/ubuntu/tomee/openjdk8:8.0.x
RUN apt-get update && apt-get install -y telnet && apt-get install -y ksh && apt-get install -y python2.7.x && apt-get -y clean && rm -rf /var/lib/apt/lists/*
However, I don't know how to install pip and related dependent libraries (e.g. pymssql)?

The best option is installing Miniconda in the Docker image. I always use it when I need Python in a Docker image that doesn't ship with Python or pip.
Here is the part that installs Miniconda in my simple Docker image:
FROM debian
RUN apt-get update && apt-get install -y curl wget
RUN rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
# put Miniconda on the PATH (the -b install above defaults to /root/miniconda3)
ENV PATH="/root/miniconda3/bin:${PATH}"
RUN conda --version
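Once Miniconda is on the PATH, installing pymssql (or anything else) is just one more instruction; a minimal sketch, using the pip that ships with Miniconda:
RUN pip install pymssql
Alternatively, conda install -y -c conda-forge pymssql should work as well.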

Use GPU on python docker image

I'm using a python:3.7.4-slim-buster docker image and I can't change it.
I'm wondering how to use my nvidia gpus on it.
I usually used a tensorflow/tensorflow:1.14.0-gpu-py3 image, and with a simple --runtime=nvidia in the docker run command everything worked fine, but now I have this constraint.
I think no shortcut exists for this type of image, so I followed this guide https://towardsdatascience.com/how-to-properly-use-the-gpu-within-a-docker-container-4c699c78c6d1, building the Dockerfile it proposes:
FROM python:3.7.4-slim-buster
RUN apt-get update && apt-get install -y build-essential
RUN apt-get --purge remove -y nvidia*
# Get the install files you used to install CUDA and the NVIDIA drivers on your host
ADD ./Downloads/nvidia_installers /tmp/nvidia
# Install the driver
RUN /tmp/nvidia/NVIDIA-Linux-x86_64-331.62.run -s -N --no-kernel-module
# For some reason the driver installer left temp files during the docker build (I don't have an
# explanation why), and the CUDA installer will fail if they are still there, so we delete them
RUN rm -rf /tmp/selfgz7
# CUDA driver installer
RUN /tmp/nvidia/cuda-linux64-rel-6.0.37-18176142.run -noprompt
# CUDA samples; comment out if you don't want them
RUN /tmp/nvidia/cuda-samples-linux-6.0.37-18176142.run -noprompt -cudaprefix=/usr/local/cuda-6.0
# Add the CUDA library into your path
RUN export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
# Update the ld.so.conf.d directory
RUN touch /etc/ld.so.conf.d/cuda.conf
# Delete installer files
RUN rm -rf /temp/*
But it raises an error:
ADD failed: stat /var/lib/docker/tmp/docker-builder080208872/Downloads/nvidia_installers: no such file or directory
What can I change to easily let the docker image see my gpus?

The TensorFlow image is split into several 'partial' Dockerfiles. One of them contains all the dependencies TensorFlow needs to operate on GPU. Using it, you can easily create a custom image; you only need to change the default python to whatever version you need. This seems to me a much easier job than bringing NVIDIA's stack into a Debian image (which, AFAIK, is not officially supported for CUDA and/or cuDNN).
Here's the Dockerfile:
# TensorFlow image base written by TensorFlow authors.
# Source: https://github.com/tensorflow/tensorflow/blob/v2.3.0/tensorflow/tools/dockerfiles/partials/ubuntu/nvidia.partial.Dockerfile
# -------------------------------------------------------------------------
ARG ARCH=
ARG CUDA=10.1
# UBUNTU_VERSION is set in a separate TF partial Dockerfile; 18.04 matches the TF 2.3
# GPU images (assumption - adjust if you target another release)
ARG UBUNTU_VERSION=18.04
FROM nvidia/cuda${ARCH:+-$ARCH}:${CUDA}-base-ubuntu${UBUNTU_VERSION} as base
# ARCH and CUDA are specified again because the FROM directive resets ARGs
# (but their default value is retained if set previously)
ARG ARCH
ARG CUDA
ARG CUDNN=7.6.4.38-1
ARG CUDNN_MAJOR_VERSION=7
ARG LIB_DIR_PREFIX=x86_64
ARG LIBNVINFER=6.0.1-1
ARG LIBNVINFER_MAJOR_VERSION=6
# Needed for string substitution
SHELL ["/bin/bash", "-c"]
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-${CUDA/./-} \
# There appears to be a regression in libcublas10=10.2.2.89-1 which
# prevents cublas from initializing in TF. See
# https://github.com/tensorflow/tensorflow/issues/9489#issuecomment-562394257
libcublas10=10.2.1.243-1 \
cuda-nvrtc-${CUDA/./-} \
cuda-cufft-${CUDA/./-} \
cuda-curand-${CUDA/./-} \
cuda-cusolver-${CUDA/./-} \
cuda-cusparse-${CUDA/./-} \
curl \
libcudnn7=${CUDNN}+cuda${CUDA} \
libfreetype6-dev \
libhdf5-serial-dev \
libzmq3-dev \
pkg-config \
software-properties-common \
unzip
# Install TensorRT if not building for PowerPC
RUN [[ "${ARCH}" = "ppc64le" ]] || { apt-get update && \
apt-get install -y --no-install-recommends libnvinfer${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
libnvinfer-plugin${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; }
# For CUDA profiling, TensorFlow requires CUPTI.
ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Link the libcuda stub to the location where tensorflow is searching for it and reconfigure
# dynamic linker run-time bindings
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 \
&& echo "/usr/local/cuda/lib64/stubs" > /etc/ld.so.conf.d/z-cuda-stubs.conf \
&& ldconfig
# -------------------------------------------------------------------------
#
# Custom part
FROM base
ARG PYTHON_VERSION=3.7
RUN apt-get update && apt-get install -y --no-install-recommends --no-install-suggests \
python${PYTHON_VERSION} \
python3-pip \
python${PYTHON_VERSION}-dev \
# Change default python
&& cd /usr/bin \
&& ln -sf python${PYTHON_VERSION} python3 \
&& ln -sf python${PYTHON_VERSION}m python3m \
&& ln -sf python${PYTHON_VERSION}-config python3-config \
&& ln -sf python${PYTHON_VERSION}m-config python3m-config \
&& ln -sf python3 /usr/bin/python \
# Update pip and add common packages
&& python -m pip install --upgrade pip \
&& python -m pip install --upgrade \
setuptools \
wheel \
six \
# Cleanup
&& apt-get clean \
&& rm -rf $HOME/.cache/pip
You can take it from here: change the Python version to the one you need (and which is available in the Ubuntu repositories), add packages, code, etc.
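Building and running it would then look something like this (the tag is a placeholder, and --gpus all assumes the NVIDIA Container Toolkit is installed on the host - it replaces the old --runtime=nvidia flag):
docker build --build-arg PYTHON_VERSION=3.7 -t custom-gpu-python .
docker run --rm --gpus all custom-gpu-python nvidia-smi
docker run --rm --gpus all custom-gpu-python python --version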

How to: Upgrade to Python 3.8.5 on ArchLinux / Raspbian / Volumio / Raspberry

I've faced the problem that my code needs at least Python 3.5... therefore I upgraded to Python 3.5.2.
Unfortunately, support for Python 3.5.x has ended, and pip 21.0 will drop support for it in a few months...
So I needed to upgrade again.
You can find the whole code behind it here.
As soon as I started to attempt the upgrade/update, I noticed:
There are no guides on the web for installing Python 3.8.5 on a Raspberry Pi / ArchLinux / Raspbian.
If you do the usual steps you mess up SSL -> no web interface, no SSH, no Git, no pip install...
So here you are: if you follow the steps, you should end up with a running Python 3.8.5 (alt-)installation.
Please note: in my installation steps I used the standard folder /home/USER/ -> please change this to your username! (For Volumio this would be: /home/volumio)
sudo apt-get update
sudo apt-get install -y build-essential libffi-dev libc6-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev
cd
mkdir /home/USER/src
cd /home/USER/src && mkdir openssl && cd openssl
wget https://www.openssl.org/source/openssl-1.1.1b.tar.gz
tar xvf openssl-1.1.1b.tar.gz && cd openssl-1.1.1b
./config --prefix=/home/USER/src/openssl-1.1.1b --openssldir=/home/USER/src/openssl-1.1.1b && make && sudo make install
cd
echo "/home/USER/src/openssl-1.1.1b/lib" >> /etc/ld.so.conf.d
sudo ldconfig
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/home/USER/src/openssl-1.1.1b/lib
cd /home/USER/src && mkdir python && cd python
wget https://www.python.org/ftp/python/3.8.5/Python-3.8.5.tar.xz
tar xf Python-3.8.5.tar.xz
cd Python-3.8.5
sudo nano /home/USER/src/python/Python-3.8.5/Modules/Setup
Here, please un-comment lines 210-213 and change line 210 to:
SSL=/home/USER/src/openssl-1.1.1b
_ssl _ssl.c \
-DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
-L$(SSL)/lib -lssl -lcrypto
Save and exit with: ctrl+x, y, enter
./configure --prefix=/home/USER/src/Python-3.8.5 --with-openssl=/home/USER/src/openssl-1.1.1b && make -j4 && sudo make altinstall
export PATH=/home/USER/src/Python-3.8.5/bin:$PATH
export LD_LIBRARY_PATH=/home/USER/src/Python-3.8.5/bin
sudo /home/USER/src/Python-3.8.5/bin/pip3.8 install -U pip
sudo /home/USER/src/Python-3.8.5/bin/pip3.8 install -U setuptools
sudo /home/USER/src/Python-3.8.5/bin/pip3.8 install --upgrade setuptools pip wheel
Now you are ready to go!
To use PIP3(.8) type:
sudo /home/USER/src/Python-3.8.5/bin/pip3.8 YOURCOMMAND --YOUROPTIONS
To use Python3(.8) type:
sudo /home/USER/src/Python-3.8.5/bin/python3.8 YOURCOMMAND --YOUROPTIONS
The idea behind it: we install OpenSSL 1.1.1b (needed by Python 3.8.5) into an alternative directory, so that the standard OpenSSL stays functional. After that, we alt-install Python 3.8.5 and tell it during the installation process to use our custom OpenSSL installation.
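A quick way to check that the alt-installed Python really picked up the custom OpenSSL (paths follow the /home/USER layout used above):
sudo /home/USER/src/Python-3.8.5/bin/python3.8 -c "import ssl; print(ssl.OPENSSL_VERSION)"
If the _ssl module failed to build, the import itself fails; otherwise you should see an OpenSSL 1.1.1 version string.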
My Solution may not be the best, but it is functional.
If you have ideas how to make it better / simpler, please make a comment.
Cheers!

How to install graph-tool for Anaconda Python 3.5 on linux-64?

I'm trying to install graph-tool for Anaconda Python 3.5 on Ubuntu 14.04 (x64), but it turns out that's a real trick.
I tried this approach, but ran into the problem:
The following specifications were found to be in conflict:
- graph-tool
Use "conda info <package>" to see the dependencies for each package.
Digging through the dependencies led to a dead-end at gobject-introspection
So I tried another approach:
I installed Boost with conda, then tried to ./configure, make, and make install graph-tool... which only got as far as ./configure:
===========================
Using python version: 3.5.2
===========================
checking for boostlib >= 1.54.0... yes
checking whether the Boost::Python library is available... yes
checking whether boost_python is the correct library... no
checking whether boost_python-py27 is the correct library... no
checking whether boost_python-py27 is the correct library... (cached) no
checking whether boost_python-py27 is the correct library... (cached) no
checking whether boost_python-py35 is the correct library... yes
checking whether the Boost::IOStreams library is available... yes
configure: error: Could not link against boost_python-py35 !
I know this is something about environment variables for the ./configure command and conda installing libboost in Anaconda's unusual location; I just don't know what to do, and my Google-fu is failing me. So this is another dead end.
Can anyone who's had to install graph-tool recently on linux-64 give me a walkthrough? It's a fresh VM running in VMware Workstation 10.0.7.

For those that run into similar issues, try changing the order of conda channels first with:
$ conda config --add channels ostrokach
$ conda config --add channels defaults
$ conda config --add channels conda-forge
then:
$ conda install graph-tool
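If the install goes through, a one-line import check (run inside the environment you installed into) confirms it:
$ python -c "import graph_tool; print(graph_tool.__version__)"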
Installing graph-tool 2.26 for Anaconda Python 3.5, Ubuntu 14.04.
Note: as of this writing, the ostrokach channel's conda install of graph-tool was only at version 2.18.
Here's the Dockerfile I use to install graph-tool 2.26. There's likely a cleaner way, but so far this is the only thing I've managed to cobble together that actually works.
NOTE: If you're unfamiliar with Dockerfiles and you'd just like to do the install from the terminal, ignore the first line (starting with FROM), ignore every occurrence of the word RUN, and what you're left with is a series of commands to execute in a terminal.
FROM [your 14.04 base image]
RUN conda upgrade -y conda
RUN conda upgrade -y matplotlib
RUN \
add-apt-repository -y ppa:ubuntu-toolchain-r/test && \
apt-get update -y && \
apt-get install -y gcc-5 g++-5 && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 60 --slave /usr/bin/g++ g++ /usr/bin/g++-5
RUN wget https://github.com/CGAL/cgal/archive/releases/CGAL-4.10.2.tar.gz && \
tar xzf CGAL-4.10.2.tar.gz && \
cd cgal-releases-CGAL-4.10.2/ && \
cmake . && \
make && \
make install
RUN cd /tmp && \
# note: master branch of repo appears relatively stable, has not been updated since 2016
git clone https://github.com/sparsehash/sparsehash.git && \
cd sparsehash && \
./configure && \
make && \
make install
RUN apt-get update
RUN apt-get install -y build-essential g++ python-dev autotools-dev libicu-dev build-essential libbz2-dev libboost-all-dev
RUN apt-get install -y autogen autoconf libtool shtool
# install boost
RUN cd /tmp && \
wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz && \
tar xzvf boost_1_66_0.tar.gz && \
cd boost_1_66_0 && \
sudo ./bootstrap.sh --prefix=/usr/local && \
sudo ./b2 && \
sudo ./b2 install
# install newer cairo
RUN cd /tmp && \
wget https://cairographics.org/releases/cairo-1.14.12.tar.xz && \
tar xf cairo-1.14.12.tar.xz && \
cd cairo-1.14.12 && \
./configure && \
make && \
sudo make install
RUN cd /tmp && \
wget https://download.gnome.org/sources/libsigc++/2.99/libsigc++-2.99.10.tar.xz && \
tar xf libsigc++-2.99.10.tar.xz && \
cd libsigc++-2.99.10 && \
./configure && \
make && \
sudo make install && \
sudo cp ./sigc++config.h /usr/local/include/sigc++-3.0/sigc++config.h
RUN cd /tmp && \
wget https://www.cairographics.org/releases/cairomm-1.15.5.tar.gz && \
tar xf cairomm-1.15.5.tar.gz && \
cd cairomm-1.15.5 && \
./configure && \
make && \
sudo make install && \
sudo cp ./cairommconfig.h /usr/local/include/cairomm-1.16/cairomm/cairommconfig.h
RUN conda install -y -c conda-forge boost pycairo
RUN conda install -y -c numba numba=0.36.2
RUN conda install -y -c libboost py-boost && \
conda update -y cffi dbus expat pycairo pandas scipy numpy harfbuzz setuptools boost
RUN apt-get install -y apt-file dbus libdbus-1-dev && \
apt-file update
RUN apt-get install -y graphviz
RUN conda install -y -c conda-forge python-graphviz
RUN sudo apt-get install -y valgrind
RUN apt-get install -y libcgal-dev libcairomm-1.0 libcairomm-1.0-dev libcairo2-dev python-cairo-dev
RUN conda install -y -c conda-forge pygobject
RUN conda install -y -c ostrokach gtk
RUN cd /tmp && \
wget https://git.skewed.de/count0/graph-tool/repository/release-2.26/archive.tar.bz2 && \
bunzip2 archive.tar.bz2 && \
tar -xf archive.tar && \
cd graph-tool-release-2.26-b89e6b4e8c5dba675997d6f245b301292a5f3c59 && \
# Fix problematic parts of the graph-tool configure.ac file
sed -i 's/PKG_INSTALLDIR/#PKG_INSTALLDIR/' ./configure.ac && \
sed -i 's/AM_PATH_PYTHON(\[2\.7\])/AM_PATH_PYTHON(\[3\.5\])/' ./configure.ac && \
sed -i 's/\${PYTHON}/\/usr\/local\/anaconda3\/bin\/python/' ./configure.ac && \
sed -i '$a ACLOCAL_AMFLAGS = -I m4' ./Makefile.am && \
sudo ./autogen.sh && \
sudo ./configure CPPFLAGS="-I/usr/local/include -I/usr/local/anaconda3/pkgs/pycairo-1.15.4-py35h1b9232e_1/include -I/usr/local/include/cairo -I/usr/local/include/sigc++-3.0 -I/usr/include/freetype2" \
LDFLAGS="-L/usr/local/include -L/usr/local/lib/cairo -L/usr/local/include/sigc++-3.0 -L/usr/include/freetype2" \
PYTHON="/usr/local/anaconda3/bin/python" \
PYTHON_VERSION=3.5 && \
sudo make && \
sudo make install
Warning: making graph-tool might take a couple of hours and require >7 GB of RAM.
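When the build finally finishes, the same kind of import check as above, but against the Anaconda interpreter the configure flags point at, tells you whether graph-tool actually landed in that interpreter's site-packages:
RUN /usr/local/anaconda3/bin/python -c "import graph_tool; print(graph_tool.__version__)"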
