I want to run a setup.py that has several dependency links defined, i.e.:
dependency_links=[
"git+ssh://git#my.gitlab/myproject1.git#v1.4.0#egg=myproject1-1.4.0",
"git+ssh://git#my.gitlab/myproject2.git#v0.2.0#egg=myproject2-0.2.0",
]
My question is whether there is a way to pass this list via the command line when running the pip install command, for example:
RUN pip install numpy==1.10.1 \
&& pip wheel --find-links="wheelhouse/" --wheel-dir="wheelhouse/" uwsgi \
&& pip wheel --find-links="wheelhouse/" --wheel-dir="wheelhouse/" --process-dependency-links gitlablink1, gitlablink2 .
So I'm trying to install FEniCS from the instructions here. I did the
pip3 install fenics-ffc --upgrade
inside my virtualenv and it worked but when I try to import dolfin I get a ModuleNotFound error. I'm not sure how to get dolfin installed. I did
pip install pybind11
to get pybind11 installed then copied the code for dolfin installation into my cmd
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)")
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr
mkdir dolfin/build && cd dolfin/build && cmake .. && make install && cd ../..
mkdir mshr/build && cd mshr/build && cmake .. && make install && cd ../..
cd dolfin/python && pip3 install . && cd ../..
cd mshr/python && pip3 install . && cd ../..
but it just spat out dozens of errors like:
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)") 'FENICS_VERSION' is not recognized as an internal or external command, operable program or batch file.
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin Cloning into 'dolfin'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr Cloning into 'mshr'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
There were lots more errors after that too. Am I not supposed to paste the dolfin code into cmd? I don't know much about this stuff, so I'm unsure how to get the dolfin module. I've previously only used pip to get my packages, but that doesn't work for dolfin as it doesn't appear to be on PyPI.
Do you have cmake? It says in the docs you need it. Also, it says to do this to install pybind11, not pip install pybind11:
For building optional Python interface of DOLFIN and mshr, pybind11 is needed since version 2018.1.0. To install it:
wget -nc --quiet https://github.com/pybind/pybind11/archive/v${PYBIND11_VERSION}.tar.gz
tar -xf v${PYBIND11_VERSION}.tar.gz && cd pybind11-${PYBIND11_VERSION}
mkdir build && cd build && cmake -DPYBIND11_TEST=off .. && make install
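Note that the snippet above assumes the PYBIND11_VERSION variable is already set in the shell; the version number below is only an illustration, not a recommendation:

```shell
# Illustrative only: pick the pybind11 release you actually need.
PYBIND11_VERSION=2.2.4
# This is the URL the wget line above will end up fetching:
echo "https://github.com/pybind/pybind11/archive/v${PYBIND11_VERSION}.tar.gz"
```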
Also what is your os?
So here is how you can install fenics 2019.1 using conda (miniconda):
Install Conda:
First go to https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html
and follow the installation instructions.
Create a conda environment for fenics:
Open a terminal and type:
conda create -n fenics
To activate the created environment "fenics", type:
conda activate fenics
If you want the fenics environment to be activated automatically every time you open a new terminal, then open your .bashrc file (should be under /home/username/.bashrc) and add the line "source activate fenics" below the ">>> conda initialize >>>" block.
Install fenics:
Type all these commands:
conda install -c conda-forge h5py=*=*mpich*
conda install -c conda-forge fenics
pip install meshio
pip install matplotlib
pip install --upgrade gmsh
conda install -c conda-forge paraview
pip install scipy
The second command will take a while. I added a few nice-to-have programs like gmsh and paraview, which will help you create meshes and view your solutions.
Some background: I'm new to docker images and containers and to writing a Dockerfile. I currently have a Dockerfile which installs all the dependencies that I want through the pip install command, so it was very simple to build and deploy images.
But I now have a new requirement to use the Dateinfer module, and that cannot be installed through the pip install command.
The repo has to be cloned first and then installed, and I'm having difficulty achieving this through a Dockerfile. The workaround I've been following for now is to run the container, install it manually in the directory with all the other dependencies, and commit the changes with dateinfer installed. But this is a very tedious and time-consuming process, and I want to achieve the same by just mentioning it in the Dockerfile along with all my other dependencies.
This is what my Dockerfile looks like:
FROM ubuntu:20.04
RUN apt update
RUN apt upgrade -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
RUN apt-get install -y libenchant1c2a
RUN apt install git -y
RUN pip3 install argparse
RUN pip3 install boto3
RUN pip3 install numpy==1.19.1
RUN pip3 install scipy
RUN pip3 install pandas
RUN pip3 install scikit-learn
RUN pip3 install matplotlib
RUN pip3 install plotly
RUN pip3 install kaleido
RUN pip3 install fpdf
RUN pip3 install regex
RUN pip3 install pyenchant
RUN pip3 install openpyxl
ADD core.py /
ENTRYPOINT [ "/usr/bin/python3.8", "/core.py”]
So when I try to install Dateinfer like this:
RUN git clone https://github.com/nedap/dateinfer.git
RUN cd dateinfer
RUN pip3 install .
It throws the following error :
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c pip3 install .' returned a non-zero code: 1
How do I solve this?
Each RUN directive in a Dockerfile runs in its own subshell. If you write something like this:
RUN cd dateinfer
That is a no-op: it starts a new shell, changes directory, and then the shell exits. When the next RUN command executes, you're back in the / directory.
The easiest way of resolving this is to include your commands in a single RUN statement:
RUN git clone https://github.com/nedap/dateinfer.git && \
cd dateinfer && \
pip3 install .
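Alternatively, Docker's WORKDIR instruction does persist across RUN steps, unlike cd. A sketch of the same install using it (assuming the build's working directory is / as in the Dockerfile above):

```dockerfile
RUN git clone https://github.com/nedap/dateinfer.git
# WORKDIR changes the working directory for every following instruction,
# so the pip3 install below runs inside the cloned repo
WORKDIR /dateinfer
RUN pip3 install .
```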
In fact, you would benefit from doing this with your other pip install commands as well; rather than a bunch of individual RUN
commands, consider instead:
RUN pip3 install \
argparse \
boto3 \
numpy==1.19.1 \
scipy \
pandas \
scikit-learn \
matplotlib \
plotly \
kaleido \
fpdf \
regex \
pyenchant \
openpyxl
That will generally be faster because pip only needs to resolve
dependencies once.
Rather than specifying all the packages individually on the command
line, you could also put them into a requirements.txt file, and then
use pip install -r requirements.txt.
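A minimal sketch of that approach (file name and location are just conventions):

```dockerfile
# requirements.txt sits next to the Dockerfile and lists one package per
# line, pinned where needed (e.g. numpy==1.19.1)
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
```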
There is a Python project in which I have dependencies defined with the help of a requirement.txt file. One of the dependencies is gmpy2. When I run the docker build -t myimage . command, it gives me the following error at the step when setup.py install gets executed:
In file included from src/gmpy2.c:426:0:
src/gmpy.h:252:20: fatal error: mpfr.h: No such file or directory
include "mpfr.h"
Similarly, the other two errors are:
In file included from appscript_3x/ext/ae.c:14:0:
appscript_3x/ext/ae.h:26:27: fatal error: Carbon/Carbon.h: No such file
or directory
#include <Carbon/Carbon.h>
In file included from src/buffer.cpp:12:0:
src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory
#include <sql.h>
Now the question is how I can define or install these internal dependencies required for a successful build of the image. As per my understanding, gmpy2 is written in C and depends on three other C libraries: GMP, MPFR, and MPC, and the build is unable to find them.
Following is my docker-file:
FROM python:3
COPY . .
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
Install the extra dependencies with apt install libgmp-dev libmpfr-dev libmpc-dev and then RUN pip install -r requirement.txt.
I think it will work and you will be able to install all the dependencies and build the docker image.
FROM python:3
COPY . .
RUN apt-get update -qq && \
apt-get install -y --no-install-recommends \
libmpc-dev \
libgmp-dev \
libmpfr-dev
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
If apt doesn't work, you can use a different Linux distribution as the base image.
You will need to modify your Dockerfile to install the additional C libraries using apt-get install. (The default Python 3 image is based on a Debian image).
sudo apt-get install libgmp3-dev
sudo apt-get install libmpfr-dev
It looks like you can install the dependencies for pyodbc using
sudo apt-get install unixodbc-dev
However, I'm really unsure about the requirement for Carbon.h as that's an OS X specific header file. You may have an OS X specific dependency in your requirements file that won't work on a Linux based image.
Django struggles to find Pillow and I'm not quite sure why.
Environment
Linux Alpine based Docker image, Django 2.2. Here are the relevant parts of:
The Dockerfile
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev jpeg-dev zlib-dev \
&& apk add --no-cache mariadb-dev mariadb-client
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
RUN pip install mysqlclient
COPY ./Pipfile /usr/src/cms/Pipfile
RUN pipenv install --skip-lock --system --dev
RUN apk del build-deps
The Pipfile
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
django = "==2.2"
markdown = "==3.1.1"
pillow = "==5.0.0"
[requires]
python_version = "3.6"
The issue
When I run python manage.py runserver 0.0.0.0:8000 from the container, I get the following error:
django.core.management.base.SystemCheckError: SystemCheckError: System check identified some issues:
ERRORS:
website.Photo.photo: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
which is weird because pip install Pillow gives me
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/site-packages (5.4.1)
About Pillow's conflict with PIL
While having a look at /usr/local/lib/python3.7/site-packages, I noticed that I had both PIL and Pillow. Is this:
1. the source of a conflict? Pillow's documentation is quite specific about the need to uninstall PIL;
2. or Pillow's use of the very name PIL to maintain compatibility, as suggested in this discussion?
From the facts that i) pip uninstall PIL says PIL is not installed, ii) print(PIL.PILLOW_VERSION) gives 5.0.0 in Python's shell, and iii) Django uses from PIL import Image (source), I would go for hypothesis 2. So if Pillow is installed in the container, why doesn't Django find it?
Current path
>>> from PIL import Image
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 58, in <module>
from . import _imaging as core
ImportError: Error loading shared library libjpeg.so.8: No such file or directory (needed by /usr/local/lib/python3.7/site-packages/PIL/_imaging.cpython-37m-x86_64-linux-gnu.so)
I added jpeg-dev to the Dockerfile but, somehow, it does not seem to be enough. Still digging. Thanks for any clue
Turns out, jpeg-dev (required by the compilation) was not enough to satisfy all dependencies during execution. Adding libjpeg solved the issue. Updated Dockerfile
# install mysqlclient
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev jpeg-dev zlib-dev \
&& apk add --no-cache mariadb-dev mariadb-client
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
RUN pip install mysqlclient
RUN apk add libjpeg   # <-- et voila
COPY ./Pipfile /usr/src/cms/Pipfile
RUN pipenv install --skip-lock --system --dev
RUN apk del build-deps
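A quick, generic way to confirm a fix like this from inside the container is to exercise Pillow's native extension directly, since saving a JPEG goes through libjpeg (this is a sketch for verification, not part of the original answer):

```python
from io import BytesIO

from PIL import Image

# Saving a JPEG exercises Pillow's compiled _imaging module and libjpeg;
# if the shared library were still missing, the import would fail with
# the same ImportError as in the question.
buf = BytesIO()
Image.new("RGB", (8, 8), "red").save(buf, format="JPEG")
print(buf.getbuffer().nbytes > 0)
```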
In my case, as I am using Docker, I had to add
RUN python -m pip install Pillow
into my Dockerfile, before RUN pip install -r ./requirements/prod.txt.
Simply running python -m pip install Pillow in the terminal
wouldn't work.
I have a Python package configured like this:
# setup.py
from setuptools import setup
setup(
name='python-package-test',
version='0.0.1',
packages=['python-package-test'],
dependency_links=[
# This repo actually exists
'git+https://github.com/nhooey/tendo.git@increase-version-0.2.9#egg=tendo',
],
install_requires=[
'tendo',
],
)
When I install this package from setup.py:
$ virtualenv --python=python3 .venv && \
source .venv/bin/activate && \
python setup.py install
$ pip freeze | grep tendo
tendo==0.2.9 # Note that this is the *correct* version
It installs the correct version of tendo.
However, when I upload this package in a Git repository and install it with pip:
# The GitHub link doesn't exist as it's private
# and it's different from the repo mentioned above
virtualenv --python=python3 .venv && \
source .venv/bin/activate && \
pip install git+ssh://git@github.com/nhooey/package.git
$ pip freeze | grep tendo
tendo==0.2.8 # Note that this is the *wrong* version
It installs the wrong version of tendo.
Why is the setup.py installation behaving differently from pip + git?
You must use the --process-dependency-links option when installing with Pip, since Pip no longer processes this automatically.
pip install --process-dependency-links 'git+ssh://git@github.com/nhooey/package.git'
You'd think pip would print a warning, or that the updated version of setuptools would ignore dependency_links as well.
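For what it's worth, newer versions of pip removed --process-dependency-links entirely, and the replacement is a PEP 508 direct URL reference in install_requires (supported since pip 18.1). A sketch against the same setup.py, with dependency_links dropped:

```python
# setup.py, sketched with a PEP 508 direct reference instead of
# dependency_links (assumes pip >= 18.1 and a recent setuptools)
from setuptools import setup

setup(
    name='python-package-test',
    version='0.0.1',
    packages=['python-package-test'],
    install_requires=[
        'tendo @ git+https://github.com/nhooey/tendo.git@increase-version-0.2.9',
    ],
)
```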