After building a conda package and installing it into a new, empty environment, my package cannot be imported because it is placed into the python3.8/site-packages directory, whereas the environment's python executable and all of the package's dependencies are under python3.7.
Starting from an empty env.:
conda create -n myenv
conda install --use-local mypackage
The resulting install ends up with the following:
~/miniconda3/envs/myenv/lib/python3.8/site-packages
|-mypackage/
|-mypackage-0.0.0-py3.8.egg-info/
~/miniconda3/envs/myenv/lib/python3.7/site-packages
|- all of the dependencies...
The resulting conda env also ends up with its python version set to 3.7. So obviously, when I open a python console and attempt to import my package, it fails. The perplexing thing is that I do have an import test in my meta.yml that imports my package, and it seems to pass during the conda build process.
If I pin the python version in my meta.yml to python=3.7 instead of python>=3.7, it works: my package ends up installed in python3.7/site-packages with everything else and imports fine.
The relevant build requirements from my meta.yml:
requirements:
  build:
    - setuptools
    - nodejs>=14.5.0
    - mkdocs>=1.1.2
    - mkdocs-material>=5.4.0
    - mkdocs-material-extensions>=1.0
  host:
    - python
  run:
    - python>=3.7
    - rabbitmq-server>=3.7.16
    - pika>=1.1.0
    - pyzmq>=19.0.1
    - pyyaml>=5.3.1
    - numpy>=1.18.5
    - sqlalchemy>=1.3.18
    - sqlite>=3.28.0
    - netifaces>=0.10.9
    - psutil>=5.7.0
    - uvloop>=0.14.0
    - numexpr>=2.7.1
    - fastapi>=0.59.0
    - uvicorn>=0.11.3

test:
  imports:
    - mypackage
The relevant line from my conda recipe build.sh:
$PYTHON setup.py install
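For reference, here is a sketch of what I assume would keep the build-time and install-time Pythons aligned without hard-pinning (untested; marking the package noarch: python assumes mypackage is pure Python, so site-packages is no longer tied to a Python minor version):

build:
  noarch: python  # one pure-Python build that installs into any compatible env

requirements:
  host:
    - python >=3.7  # matches the run constraint, so build and install Pythons agree
  run:
    - python >=3.7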
I'm trying out OpenCV with Python bindings for which I'm using the following YML file:
name: opencv-python-sandbox
channels:
  - menpo
  - conda-forge
  - defaults
dependencies:
  - jupyter=1.0.0
  - jupyterlab=0.34.9
  - keras=2.9.0
  - matplotlib=3.5.2
  - numpy=1.23.1
  - opencv-python==4.6.0.66
  - pandas=1.4.3
  - python=3.8.0
  - scikit-learn=1.1.1
  - scipy=1.8.1
  - tensorboard=2.9.1
  - tensorflow=2.9.1
When I ran it, it threw some errors saying that it is not able to resolve OpenCV and TensorFlow:
(ml-sandbox) joesan@joesan-InfinityBook-S-14-v5:~/Projects/Private/ml-projects/ml-sandbox/opencv-python-sandbox$ conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- tensorflow=2.9.1
- opencv-python==4.6.0.66
How do I fix this? Do I need to add pip to my environment.yml and then manually install OpenCV via pip after activating the conda environment?
Not sure why this has not been answered by anyone else, as this seems to be a very common problem. Nevertheless, I was able to solve it by adding pip as a dependency in my environment.yml and using pip to install OpenCV and any other libraries that won't resolve with conda.
My environment.yml looks like this:
name: ml-sandbox
channels:
  - menpo
  - conda-forge
  - defaults
dependencies:
  - jupyter=1.0.0
  - jupyterlab=0.34.9
  - keras=2.9.0
  - matplotlib=3.5.2
  - pandas=1.4.3
  - python=3.8.0
  - pip=22.1.2
  - scikit-learn=1.1.1
  - scipy=1.8.1
  - tensorboard=2.9.1
  - pip:
      - numpy==1.23.1
      - opencv-contrib-python==4.6.0.66
You have fixed it yourself by moving the requirements to the pip section, which results in an installation from PyPI. I just wanted to add an explanation of why your original attempt did not work, plus suggestions in case you want to stick strictly to conda. Note that for both tensorflow and opencv, the packages provided on conda-forge are not maintained by the respective developers, so they often lag behind in versions.
The Python bindings for OpenCV are called py-opencv on conda-forge and have different version strings, so you would need to put py-opencv==4.6.0 in your yml.
tensorflow on conda-forge only goes up to 2.8.1, so when strictly sticking to conda you would need to downgrade the version.
You can always check the available versions of a package by running conda search -c <channel> <package-name> from your terminal.
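For example, to check the two packages from this thread against conda-forge:

conda search -c conda-forge py-opencv
conda search -c conda-forge tensorflow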
According to the documentation, if I use
conda env export > file.yml
I am able to share the environment with others. For better cross-platform compatibility, the recommended approach is:
conda env export --from-history > file.yml
listing only the packages explicitly requested (and not their associated dependencies).
That is what I did: I created a requirements yml file with the second command. Here it is:
name: torch
channels:
  - defaults
dependencies:
  - python=3.8
  - humanize
  - nltk
  - pandas
  - lxml
  - numpy
  - bs4
  - fire
  - neptune-client
  - tqdm
  - pyyaml
  - torchaudio
  - pytorch
  - cudatoolkit=11.3
  - torchvision
Among those packages, some were installed from the channel conda-forge: the channel information seems to be lost in the yaml file.
Indeed, if I try to use that file to clone the environment (same machine):
conda env create -n torch2 --file=file.yml
I get an error for the packages installed from conda-forge (I explicitly installed only neptune-client and fire from conda-forge):
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- torchaudio
- neptune-client
- fire
However, it seems that channels should be included in the yml. For example, on this GitHub issue page I read:
Currently, conda env export does include channels information.
a comment that closed the issue.
What am I missing?
NOTE: pytorch was installed with
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
from the official web page.
The docs are wrong, or at best misleading, when they communicate that conda env export --from-history exports the channels that packages were installed from, or even any channels at all. This is not the behavior you get, nor is it what I get myself:
$ conda env export | head -n 8
name: smithy
channels:
  - bioconda
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=1_gnu
$ conda env export --from-history | head -n 8
name: smithy
channels:
  - defaults
dependencies:
  - mamba
  - constructor
  - cookiecutter
  - conda-build
Note that conda env export does include channel information, but in a highly pinned way that is almost guaranteed not to work across platforms, so that is not going to work for your use case either. I'm not sure if this is a bug or an oversight, but it is clearly not producing the result you want.
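As a possible middle ground, conda env export also accepts a --no-builds flag, which keeps the package versions and the channels section but drops the platform-specific build strings that most often break re-creation on another machine:

conda env export --no-builds > file.yml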
Now to offer an (opinionated) recommendation on how to proceed: your best bet is to semi-manually curate an environment YAML file yourself and use that as a single source of truth. It looks like you can use your name: torch ... file as a starting point, adding in the channels and maybe some other details as you go. Don't forget that you can tie an individual package to a channel with the channel::package syntax, e.g.:
name: torch
channels:
  - defaults
  - conda-forge
  - pytorch
dependencies:
  - python=3.8
  <SNIP>
  - pytorch::torchaudio
  - pytorch::pytorch
  - pytorch::cudatoolkit=11.3
  - pytorch::torchvision
I have a meta.yaml recipe for conda to build a package (we will call it mypackage).
I want this package to use a local (tar.bz2) file in its requirements section (build & run) (we will call it locapackagedep).
Here is an example of what I would like to do:
requirements:
  build:
    - setuptools
    - wheel
    - nodejs=16
    - yarn
    - jupyterlab
    - /my/path/to/locapackagedep
    - ipympl
  host:
    - python {{ python }}
  run:
    - python=3.8
    - jupyterlab
    - locapackagedep
I can't find any documentation on this...
I believe one needs to specify the package by name, and use the -c flag to indicate a local path that contains the build.
Something like:
requirements:
  build:
    - setuptools
    - wheel
    - nodejs=16
    - yarn
    - jupyterlab
    - locapackagedep
    - ipympl
  host:
    - python {{ python }}
  run:
    - python=3.8
    - jupyterlab
    - locapackagedep
and
conda build -c file://my/path/to .
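Note that for the -c file://... approach to work, the local path generally has to look like a conda channel, i.e. contain a platform subdirectory plus repodata. A rough sketch, with hypothetical paths and a hypothetical package file name:

mkdir -p /my/path/to/linux-64
cp locapackagedep-0.1.0-0.tar.bz2 /my/path/to/linux-64/  # the previously built artifact
conda index /my/path/to  # generates the repodata.json conda needs to resolve the package
conda build -c file://my/path/to .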
Good night! I have a .yml file with this structure:
name: web
channels:
  - defaults
dependencies:
  - zope.event=4.4=py37_0
  - zope.interface=5.1.0=py37haf1e3a3_0
  - zstd=1.4.5=h41d2c2f_0
  - pip:
      - asgiref==3.2.10
      - cloudpickle==1.3.0
that is actually much bigger than this. When I run conda env create --file ambiente.yml I get Solving environment: failed ResolvePackageNotFound: and a list of all missing dependencies. How can I install all dependencies at once?
I am facing a strange problem which I could track down to the Python logging package. Let me briefly explain what I want to do: the goal is to create HTML reports with Python. I am using the R rmarkdown package, which runs Python code through reticulate using a local virtualenv.
The Problem:
As soon as I install the Python logging package, rmarkdown runs into a problem when loading matplotlib. I have written a small test script to reproduce the issue.
My system:
Ubuntu 18.04 bionic
Python 2.7.15rc1
Test "1" script (without logging):
Create a new virtualenv (venv).
Use venv/bin/pip to install matplotlib.
Run reticulate::import (at the end via rmarkdown::render).
Test "2" script (with logging):
Create a new virtualenv (venv).
In addition to the first test: install logging via venv/bin/pip.
Use venv/bin/pip to install matplotlib.
Run reticulate::import (at the end via rmarkdown::render).
The modules installed (virtualenv):
backports.functools-lru-cache 1.5
cycler 0.10.0
kiwisolver 1.0.1
logging 0.4.9.6 <- only for "test 2"
matplotlib 2.2.3
numpy 1.15.1
pip 18.0
pkg-resources 0.0.0
pyparsing 2.2.0
python-dateutil 2.7.3
pytz 2018.5
setuptools 40.2.0
six 1.11.0
subprocess32 3.5.2
wheel 0.31.1
The system site-packages have the same module versions.
Results:
All tests from test 1 (without logging) work nicely.
The tests from test 2 (with logging) fail when the virtualenv is used and rmarkdown::render is called (see below); when using the system Python installation (not the virtualenv) they work fine as well.
There seems to be something strange with reticulate when logging is installed in a virtualenv.
The full output of the test script (see below), including the error:
----------- no logging package installed ------------
Module(matplotlib)
Module(matplotlib)
--------- with logging package installed ------------
Error in py_module_import(module, convert = convert) :
AttributeError: 'module' object has no attribute 'NullHandler'
Detailed traceback:
File "/home/retos/Downloads/venvtest/venv/lib/python2.7/site-packages/matplotlib/__init__.py", line 168, in <module>
_log.addHandler(logging.NullHandler())
Calls: <Anonymous> -> py_module_import -> .Call
Execution halted
Module(matplotlib)
The Module(matplotlib) output is the success message from loading the module via reticulate::import. As one can see, the only failing test is the one where the virtualenv is used with the logging Python module installed.
Does anyone have an idea what could cause these problems? I spent quite some time identifying the source of the error, but I am kind of lost now ...
Test script to reproduce the error:
Here is a small bash/shell script to reproduce my tests.
#!/bin/bash
# New virtual environment and install matplotlib
echo " ----------- no logging package installed ------------"
if [ -d venv ] ; then rm -rf venv ; fi
virtualenv venv > /dev/null 2>&1
venv/bin/pip install matplotlib > /dev/null
# Import matplotlib via reticulate, first with the venv python, then with the system python
Rscript -e "reticulate::use_python('venv/bin/python'); reticulate::import('matplotlib')"
Rscript -e "reticulate::import('matplotlib')"
# New virtual environment and install logging and matplotlib
echo " --------- with logging package installed ------------"
if [ -d venv ] ; then rm -rf venv ; fi
virtualenv venv > /dev/null
venv/bin/pip install logging > /dev/null
venv/bin/pip install matplotlib > /dev/null
# Import matplotlib via reticulate, first with the venv python, then with the system python
Rscript -e "reticulate::use_python('venv/bin/python'); reticulate::import('matplotlib')"
Rscript -e "reticulate::import('matplotlib')"
I first thought it was related to the problem "ImportError: cannot import name cbook", but the solution there did not work.
Many thanks in advance!
The logging module became part of the Python standard library in version 2.3, so you must not install it from PyPI. The PyPI logging package is an ancient snapshot that shadows the standard-library module and predates additions such as NullHandler (new in Python 2.7), which is exactly why matplotlib's logging.NullHandler() call fails. Remove it ASAP:
pip uninstall logging
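Afterwards, a quick sanity check (using the venv from the question) should show logging resolving to the standard library again:

venv/bin/pip uninstall -y logging
venv/bin/python -c "import logging; print(logging.__file__)"
# expected: a path inside the standard library, e.g. .../lib/python2.7/logging/__init__.py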