Install Python.Net in Docker images (Linux containers) - python

I'm attempting to create a simple Python script that calls a .Net function through Python.Net (http://pythonnet.github.io/).
I'm trying to configure a Docker image containing Python, .Net and, of course, Python.Net, ideally at their latest releases.
I have attempted this in many ways, starting from several base images (Ubuntu, Debian, Alpine, ...).
I managed to install Python (I tried several flavours and versions) and .Net (or Mono), but every time I get stuck when installing Python.Net.
I get errors like those discussed in Pip Pythonnet option --single-version-externally-managed not recognized or in Error Installing Python.net on Python 3.7 (the solutions suggested in those posts didn't work for me).
As an example, here's a (failing!) Dockerfile:
FROM python:slim-buster
# Install .Net
RUN apt update
RUN apt install wget -y
RUN wget https://dot.net/v1/dotnet-install.sh
RUN chmod 777 ./dotnet-install.sh
RUN ./dotnet-install.sh -c 5.0
#install Python.Net
RUN pip install pythonnet
Another attempt is:
FROM continuumio/anaconda
# Install .Net
RUN apt update
RUN apt install wget -y
RUN wget https://dot.net/v1/dotnet-install.sh
RUN chmod 777 ./dotnet-install.sh
RUN ./dotnet-install.sh -c 5.0
#install Python.Net
# See: https://stackoverflow.com/a/61948638/1288109
RUN conda create -n myenv python=3.7
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN conda install -c conda-forge pythonnet
ENV PATH=/root/.dotnet/:${PATH}
The Dockerfile above builds correctly, but when I run the container with /bin/bash and issue:
conda activate myenv
python myscript.py
I get an error like:
Traceback (most recent call last):
  File "factorial.py", line 1, in <module>
    import clr
ImportError: System.TypeInitializationException: The type initializer for 'Sys' threw an exception. ---> System.DllNotFoundException: /opt/conda/envs/myenv/lib/../lib/libmono-native.so assembly:<unknown assembly> type:<unknown type> member:(null)
(Solutions found around the Internet, like https://github.com/pythonnet/pythonnet/issues/1034#issuecomment-626373989, didn't work, nor did attempts to use different versions of Python and .Net.)
Note that the same error also occurs when .Net is replaced with Mono.
How can I build a Docker image that can run a simple Python script making use of Python.Net?
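For reference, the newer Python.Net 3.x line no longer binds to a runtime at install time: it resolves the runtime when imported, via clr_loader, so .NET (Core) can be selected explicitly instead of Mono. Below is a minimal sketch, assuming pythonnet >= 3.0 installed via pip and DOTNET_ROOT pointing at the directory created by dotnet-install.sh (/root/.dotnet by default when running as root); depending on the clr_loader version, load("coreclr") may additionally need a runtime_config argument:
import os

# Assumption: dotnet-install.sh placed the runtime under /root/.dotnet.
os.environ.setdefault("DOTNET_ROOT", "/root/.dotnet")

from pythonnet import load
load("coreclr")  # explicitly select the .NET (Core) runtime

import clr  # must be imported only after load()
from System import Math
print(Math.Sqrt(2.0))  # smoke test: calls into the .NET base class library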

Related

How to run an application based on TensorFlow 2 in a Docker container?

I am relatively new to TensorFlow, so I have been trying to run simple applications locally, and everything was going well.
At some point I wanted to Dockerize my application. Building the Docker image went with no errors; however, when I tried to run my application, I received the following error:
AttributeError: module 'tensorflow' has no attribute 'gfile'. Did you mean: 'fill'?
After googling the problem, I understood that it is caused by version differences between TF1 and TF2.
One explanation of the problem can be found here.
Locally, I am using TF2 (specifically 2.9.1), inside a virtual environment.
When dockerizing, I also confirmed from inside the docker container that my TF version is the same.
I also tried to run the container in interactive mode, create a virtual environment, and install all dependencies manually, exactly the same way I did locally, but still with no success.
My Dockerfile is as follows:
FROM python:3-slim
# ENV VIRTUAL_ENV=/opt/venv
# RUN python3 -m venv $VIRTUAL_ENV
# ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /objectDetector
RUN apt-get update
RUN apt-get install -y protobuf-compiler
RUN apt-get install ffmpeg libsm6 libxext6 -y
RUN python3 -m pip install --upgrade pip
RUN pip3 install tensorflow==2.9.1
RUN pip3 install tensorflow-object-detection-api
RUN pip3 install opencv-python
RUN pip3 install opencv-contrib-python
COPY detect_objects.py .
COPY detector.py .
COPY helloWorld.py .
ADD data /objectDetector/data/
ADD models /objectDetector/models/
So my question is: how can I run an application using TensorFlow 2 from a Docker container?
Am I missing something here?
Thanks in advance for any help or explanation.
I believe that in TensorFlow 2.0, tf.gfile was replaced by tf.io.gfile.
Can you try this?
Have a nice day,
Gabriel
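If the tf.gfile reference comes from a dependency you can't easily edit (the object-detection API is a common culprit), a stopgap sometimes suggested is to restore the old alias before the legacy code runs. A minimal sketch, assuming the patch executes before the failing import:
import tensorflow as tf

# TF1-era code expects tf.gfile; in TF2 the same functionality lives at tf.io.gfile.
# Re-create the old alias so the legacy code keeps working.
if not hasattr(tf, "gfile"):
    tf.gfile = tf.io.gfile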

Python script cannot find installed libraries inside container?

I use the following Dockerfile, which builds fine into an image:
FROM python:3.8-slim-buster
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONPATH="$PYTHONPATH:/app"
WORKDIR /app
COPY ./app .
RUN apt-get update
RUN apt-get install -y python3-pymssql python3-requests
CMD ["python3", "main.py"]
main.py:
from pymssql import connect, InterfaceError, OperationalError

try:
    conn = connect(host, username, password, database)
except (InterfaceError, OperationalError) as e:
    raise
conn.close()
For some reason, the libraries installed by python3-pymssql are not present, even though I see them getting installed during the docker build process. When I do a docker run, I get the following error:
Traceback (most recent call last):
  File "main.py", line 1, in <module>
    from pymssql import connect, InterfaceError, OperationalError
ModuleNotFoundError: No module named 'pymssql'
I presume I could use a pip3 install, but I prefer to take advantage of pre-built apt packages. Can you please let me know what I am missing?
Thank you for your help.
You have two copies of Python installed, 3.8 and 3.7:
root@367f37546ae7:/# python3 --version
Python 3.8.12
root@367f37546ae7:/# python3.7 --version
Python 3.7.3
The image you're basing this on, python:3.8-slim-buster, is based on Debian Buster, which uses Python 3.7 in its packages. So if you install an apt package, that's going to install Python 3.7 and install the package for Python 3.7. Then, you launch Python 3.8, which lacks the pymssql dependency.
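A quick way to confirm this kind of mismatch from inside the container is to ask the running interpreter where it lives and where it looks for packages; a small probe along these lines:
import sys

print(sys.executable)  # /usr/local/bin/python3 -> the 3.8 build from the base image
print(sys.version)
# apt installs modules for the distro interpreter under
# /usr/lib/python3/dist-packages, a path this interpreter never searches:
print(sys.path)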
You can:
Use a non-Python image, and install Python using apt. This way, you're guaranteed that the version of Python installed will be compatible with the Python dependencies from apt. For example, you could base your image on FROM debian:buster.
Use a Python image, and install your dependencies with pip.
You can install pymssql with pip only. The following Dockerfile builds and runs fine:
FROM python:3.8-slim-buster
ENV PYTHONDONTWRITEBYTECODE 1
RUN pip install pymssql
WORKDIR /app
COPY ./app .
CMD ["python3", "main.py"]

Cannot use the Python that I manually installed in my Singularity container. Why?

I have been creating new Singularity containers with Ubuntu 16.04 every 1-2 months and have been using them to run my Python script on our cluster (CentOS 7). However, since a couple of days ago, after making a new Singularity container with a new OS (Ubuntu 18.04 instead of 16.04) and installing the latest Python 3.7 as follows, I cannot run my Python script on our cluster anymore, as I get Python import errors.
For your reference, here's how I installed Python 3.7:
apt-get -y install python3.7 \
python3.7-dev
# Then I install pip and some packages as follows:
cd /
wget https://bootstrap.pypa.io/get-pip.py
python3.7 get-pip.py
rm get-pip.py
python3.7 -m pip install -U pip
python3.7 -m pip install --upgrade pip
python3.7 -m pip install numpy
And here's how I use my Singularity container to run my script:
singularity exec --nv -B /om:/om /mySingImgUbuntu18.sif python3.7 main.py
where main.py simply does import numpy.
Running my script as above gives me Python import errors for the packages that I have installed in the new Python 3.7, as if Singularity were using another installation of Python 3.7 (which does not exist, as 18.04's default Python version is 3.6.9, located in /usr/bin/python3). Here's the error I get:
ModuleNotFoundError: No module named 'numpy'
When I use either exec or shell and do which python3.7, I get the following (to be clear, when using exec I do singularity exec --nv -B /om:/om /mySingImgUbuntu18.sif which python3.7):
/usr/bin/python3.7
This is the correct path, but my script does not run in exec mode and throws Python import errors. If I shell into the container, run either /usr/bin/python3.7 or python3.7, go into interactive mode, and then do the import, everything works fine. Doing /usr/bin/python3.7 -c 'import numpy' also works. So now I am confused about why this happens in exec mode.
Additionally, printing sys.path in main.py and running my script via exec returns the following, which shows the correct paths:
['/om/user/arsalans/Occluded-object-detector', '/usr/lib/python37.zip', '/usr/lib/python3.7', '/usr/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/dist-packages', '/usr/local/lib/python3.7/dist-packages/bayesian_optimization-0.6.0-py3.7.egg', '/usr/local/lib/python3.7/dist-packages/torchvision-0.6.0a0+6c2cda6-py3.7-linux-x86_64.egg', '/usr/lib/python3/dist-packages']
Doing ls /usr/local/lib/python3.7/dist-packages confirms that the printed paths are all correct.
Can someone tell me what I am doing wrong and what I should do to fix this?
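For anyone chasing a similar exec/shell discrepancy, a tiny probe script (a hypothetical probe.py) run through both singularity exec and singularity shell can show which interpreter and which module file each mode actually resolves:
import importlib.util
import sys

print(sys.executable)  # which python3.7 this mode actually launched
spec = importlib.util.find_spec("numpy")
print(spec.origin if spec else "numpy not found on sys.path")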
Here's some more info on the Singularity and OS versions that I'm using:
Singularity version: 3.5.0, installed from source
cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Update
The issue got resolved magically after building my container a second time!

Docker and Conda: Differences when building the same container on Mac and on Ubuntu

I'm using Docker to build a Python container, with the intention of having a reproducible environment on several machines: a bunch of development MacBooks and several AWS EC2 servers.
The container is based on continuumio/miniconda3, i.e. the Dockerfile starts with
FROM continuumio/miniconda3
A few days ago, on Ubuntu, the conda install and conda upgrade commands in the Dockerfile complained about a newer Conda version (4.4.11) being available:
==> WARNING: A newer version of conda exists. <==
  current version: 4.4.10
  latest version: 4.4.11

Please update conda by running

    $ conda update -n base conda
If I ignore this, package installations quit with an error:
Downloading and Extracting Packages
The command '/bin/sh -c conda install -y pandas=0.22.0 matplotlib
scikit-learn=0.19.1 pathos lazy openpyxl pytables dill pydro psycopg2
sqlalchemy pyarrow arrow-cpp parquet-cpp scipy tensorflow keras
xgboost' returned a non-zero code: 1
When I add this conda update... to the Dockerfile, things work again.
What's really annoying, however, is that the update that makes things run on Ubuntu does not work with Docker for Mac. I get the following error:
CondaEnvironmentNotFoundError: Could not find environment: base .
You can list all discoverable environments with `conda info --envs`.
Note that I get this error when I docker build the same Dockerfile that works on the Ubuntu machine, which kind of defeats the whole point of using Docker in the first place. On the Mac, the old version of the file (without conda update -n base conda) still runs fine and installs all packages.
Docker / Conda experts, any ideas?
Edit: Here's the full Dockerfile (the one that works on Ubuntu):
# Use an official Python runtime as a parent image
FROM continuumio/miniconda3
WORKDIR /app/dev/predictive.analytics
RUN apt-get update; \
apt-get install -y gcc tmux htop
RUN conda update -y -n base conda
RUN conda config --add channels babbel; \
conda config --add channels conda-forge;
RUN conda install -y pandas=0.22.0 matplotlib scikit-learn=0.19.1 pathos lazy openpyxl pytables dill pydro psycopg2 sqlalchemy pyarrow arrow-cpp parquet-cpp scipy tensorflow keras xgboost
RUN pip install recordclass sultan
RUN conda upgrade -y python
ENV DATA_DIR /host/data
ENV PYTHONPATH /host/predictive.analytics/python
ENV PATH="/host/predictive.analytics:${PATH}"
Perhaps you're using an outdated miniconda image on one of the build machines; try doing docker build --pull --no-cache.
Docker doesn't necessarily pull the latest image from the registry, so unless you pass --pull, it is possible that some of your machines are starting the build from an outdated base image.

How to create a Python 3.5 virtual environment with Python 2.7?

My system is running CentOS 6. I do not have admin access, so sudo is not available. I have Python 2.7.3 available, along with pip and virtualenv. I was hoping that I could use these to set up a new virtual environment in which to install & run Python 3.5 or above.
I tried the method described here:
Using Python 3 in virtualenv
But got this error:
$ virtualenv -p python3 venv
The path python3 (from --python=python3) does not exist
My system also has a Python 3.4 module installed, so I tried that; however, virtualenv does not seem to work there:
$ module load python/3.4.3
$ virtualenv -p python3 venv
-bash: virtualenv: command not found
This appears to make sense, since virtualenv is only installed for Python 2.7:
$ module unload python
$ module load python/2.7
$ which virtualenv
/local/apps/python/2.7.3/bin/virtualenv
So the next logical step would seem to be installing virtualenv for my Python 3... but this does not work either:
$ pip3 install virtualenv
Traceback (most recent call last):
  File "/local/apps/python/3.4.3/bin/pip3", line 7, in <module>
    from pip import main
ImportError: cannot import name 'main'
also
$ pip3 install --user virtualenv
Traceback (most recent call last):
  File "/local/apps/python/3.4.3/bin/pip3", line 7, in <module>
    from pip import main
ImportError: cannot import name 'main'
I started googling this new error message but did not see anything that seemed relevant to this situation. Any ideas? Even if I could get virtualenv installed for my Python 3.4 module, would I still be unable to upgrade it to Python 3.5+?
To round things out, I also tried to compile my own Python 3.6 from source, but that does not work either:
Python-3.6.0$ make install
if test "no-framework" = "no-framework" ; then \
/usr/bin/install -c python /usr/local/bin/python3.6m; \
else \
/usr/bin/install -c -s Mac/pythonw /usr/local/bin/python3.6m; \
fi
/usr/bin/install: cannot create regular file `/usr/local/bin/python3.6m': Permission denied
make: *** [altbininstall] Error 1
more background info:
$ which pip3
/local/apps/python/3.4.3/bin/pip3
$ which python
/local/apps/python/3.4.3/bin/python
You can download Miniconda or Anaconda. It does not require superuser privileges because it installs in your home directory. After you install it, you can create new environments like this:
conda create -n py35 python=3.5
Then you can switch to the new environment:
source activate py35
Try this for Windows.
virtualenv -p C:\Python35\python.exe django_concurrent_env
cd django_concurrent_env
.\Scripts\activate
deactivate
Try out the following commands:
pip3 install virtualenv
pip3 install virtualenvwrapper
mkdir ~/.virtualenvs
export WORKON_HOME=~/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh
source ~/.bash_profile
which python3
Now copy the python3 path printed by the last command and use it in the following command:
mkvirtualenv --python=/path/printed/by/last/command myenv
I'm assuming pip3 is already installed. If not, install it before running these commands.
Source: https://docs.coala.io/en/latest/Help/MAC_Hints.html#create-virtual-environments-with-pyvenv
(I do have sudo access on my machine. I've not tried the commands without it. Please post if any issues come up.)
Since you already have virtualenv installed, you might only need to update the files and then run the command mkvirtualenv with proper arguments.
