I think it doesn't make a difference here but I'm using Python 2.7.
So the general part of my question is the following: I use a separate virtualenv for each of my projects. I don't have administrator access and I don't want to mess with system-installed packages anyway. Naturally, I want to use wheels to speed up package upgrades and installations across the virtualenvs. How can I build a wheel whose dependencies are only met within a specific virtualenv?
Specifically, issuing
pip wheel -w $WHEELHOUSE scipy
fails with
Building wheels for collected packages: scipy
Running setup.py bdist_wheel for scipy
Destination directory: /home/moritz/.pip/wheelhouse
Complete output from command /home/moritz/.virtualenvs/base/bin/python -c "import setuptools;__file__='/home/moritz/.virtualenvs/base/build/scipy/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /home/moritz/.pip/wheelhouse:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/moritz/.virtualenvs/base/build/scipy/setup.py", line 237, in <module>
setup_package()
File "/home/moritz/.virtualenvs/base/build/scipy/setup.py", line 225, in setup_package
from numpy.distutils.core import setup
ImportError: No module named numpy.distutils.core
----------------------------------------
Failed building wheel for scipy
Failed to build scipy
Cleaning up...
because numpy is not globally installed. Building the wheel does work when a virtualenv with numpy installed is active, but it seems like a terrible idea to have the wheel depend on a specific virtualenv's version of numpy.
pandas, which also depends on numpy, appears to bundle its own numpy components, but I'm not sure that's the best solution.
I could install numpy with --user and use that to build the scipy wheel. Are there better options?
Problem description
You have a Python package (like scipy) that depends on other packages (like numpy), but its setup.py does not declare that requirement/dependency.
Building a wheel for such a package will succeed if the current environment provides the package(s) it needs.
If the required packages are not available, building the wheel will fail.
Note: The ideal solution would be to fix the broken setup.py by declaring the required packages there. This is mostly not feasible, so we have to work around it.
Solution: Install required packages first
The procedure (for installing scipy, which requires numpy) has two steps:
build the wheels
use the wheels to install the package you need
Populate the wheelhouse with the wheels you need
This has to be done only once and can then be reused many times.
Configure pip so that installation from wheels is allowed and the wheelhouse directory is set up, overlapping with download-cache and find-links, as in the following example of pip.conf:
[global]
download-cache = /home/javl/.pip/cache
find-links = /home/javl/.pip/packages
[install]
use-wheel = yes
[wheel]
wheel-dir = /home/javl/.pip/packages
install all system libraries required by the packages that have to be compiled
build a wheel for the required package (numpy):
$ pip wheel numpy
set up a virtualenv (needed only once), activate it, and install numpy into it:
$ pip install numpy
As the wheel is already built, this will be quick.
build a wheel for scipy (while still inside the virtualenv):
$ pip wheel scipy
By now your wheelhouse is populated with the wheels you need.
You can remove the temporary virtualenv; it is not needed any more.
Installing into fresh virtualenv
I am assuming you have created a fresh virtualenv, activated it, and wish to install scipy there.
Installing scipy directly from the new scipy wheel would still fail on the missing numpy. We overcome this by installing numpy first:
$ pip install numpy
And then finish with scipy
$ pip install scipy
I guess this could be done in one call (but I did not test it):
$ pip install numpy scipy
Repeatedly installing a proven version of scipy
It is likely that at some point a new release of scipy or numpy will appear, and pip will attempt to install the latest version, for which there is no wheel in your wheelhouse.
If you can live with the versions you have used so far, you should create a requirements.txt stating the numpy and scipy versions you want and install from that.
This ensures the needed packages are present before they are actually used.
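For example, a requirements.txt pinning the proven versions (the version numbers below are placeholders; use whatever is actually in your wheelhouse):

```
numpy==1.8.1
scipy==0.14.0
```

Installing with pip install -r requirements.txt then resolves both packages from the wheelhouse instead of reaching for a newer release.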
Related
I have a project which needs to depend on the latest commit of pysam, because I'm working in Python 3.11.
This means building the package from source, so I do the following:
poetry add git+https://github.com/pysam-developers/pysam
However, I get an error which I think boils down to poetry not including cython in the build environment:
Unable to determine package info for path: /Users/agreen/Library/Caches/pypoetry/virtualenvs/rnacentral-pipeline-GU-1IkEM-py3.11/src/pysam
Fallback egg_info generation failed.
Command ['/var/folders/sg/3858brmd79z4rz781g0q__940000gp/T/tmpw8auvhsm/.venv/bin/python', 'setup.py', 'egg_info'] errored with the following return code 1, and output:
# pysam: no cython available - using pre-compiled C
Traceback (most recent call last):
File "/Users/agreen/Library/Caches/pypoetry/virtualenvs/rnacentral-pipeline-GU-1IkEM-py3.11/src/pysam/setup.py", line 345, in <module>
raise ValueError(
ValueError: no cython installed, but can not find pysam/libchtslib.c. Make sure that cython is installed when building from the repository
Cython is definitely installed: it's in the pyproject.toml, and I can call it from the poetry shell or import it in a Python started in the poetry virtualenv. However, if I use the Python from the command poetry is running, then indeed cython is not available.
I think I'm missing some configuration of the build, or some extra option to poetry add. The documentation isn't particularly clear about this use of cython - as far as I can tell it's all about using cython in the package I'm writing, which is not quite what I want.
Cython is a build dependency of pysam, but apparently pysam does not have a pyproject.toml and thus does not declare its build dependencies (Cython and maybe others). So this is a dead end.
If I were you I would build a wheel of pysam myself and tell Poetry to use this wheel until pysam releases wheels for Python 3.11 on PyPI themselves. Or I would use Python 3.10.
It seems like it is being worked on: https://github.com/pysam-developers/pysam/pull/1168
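A sketch of that wheel-building workaround, assuming you build in an environment where Cython is importable. The clone location and the dist/ directory are arbitrary choices, and the resulting wheel filename will depend on the commit you build:

```shell
# build a pysam wheel with cython available at build time
python -m pip install cython
git clone https://github.com/pysam-developers/pysam
python -m pip wheel --no-deps ./pysam -w dist/

# then tell Poetry to use the local wheel instead of the git source
poetry add ./dist/pysam-*.whl
```

Poetry accepts a path to a local wheel file, so the project depends on your prebuilt artifact until upstream publishes 3.11 wheels.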
I tried importing
from sktime.transformers.series_as_features.rocket import Rocket
When I run this, I encounter this error:
File "C:\Users\Success\AppData\Local\Temp/ipykernel_8440/2082396040.py", line 1, in <module>
runfile('C:/Users/Success/Desktop/untitled8.py', wdir='C:/Users/Success/Desktop')
File "C:\Users\Success\anaconda3\lib\site-packages\debugpy\_vendored\pydevd\_pydev_bundle\pydev_umd.py", line 167, in runfile
execfile(filename, namespace)
File "C:\Users\Success\anaconda3\lib\site-packages\debugpy\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
exec(compile(contents + "\n", file, 'exec'), glob, loc)
File "C:/Users/Success/Desktop/untitled8.py", line 11, in <module>
from sktime.transformers.series_as_features.rocket import Rocket
ModuleNotFoundError: No module named 'sktime.transformers'
I ran into a similar problem. They have moved the actual location of the module in the library. Trying this path should fix the issue you have:
from sktime.transformations.panel.rocket import Rocket
The package was not installed.
There are various ways one can install sktime (see the official documentation). I will leave two options below:
Using PyPI
Using conda
Option 1: Using PyPI
For that, access the prompt for the environment that you are working on, and run
pip install sktime
To install sktime with maximum dependencies, including soft dependencies, install with the all_extras modifier:
pip install sktime[all_extras]
Option 2: Using conda
For that, access the prompt for the environment that you are working on, and run
conda install -c conda-forge sktime
To install sktime with maximum dependencies, including soft dependencies, install with the all-extras recipe:
conda install -c conda-forge sktime-all-extras
At the time of writing, this last one (source):
does not include dependencies catch-22, pmdarima, and tbats. As these
packages are not available on conda-forge, they must be installed via
pip if desired. Contributions to remedy this situation are
appreciated.
Notes:
In my case, when installing via Option 1, I was getting the error:
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Running the following in the environment's prompt solved the problem:
pip install --ignore-installed llvmlite
If the previous one didn't work, other alternatives, such as
pip install llvmlite --ignore-installed
or
pip install llvmlite
might work.
If the above doesn't work for Option 2, one can also use pip from the Anaconda Prompt, so the following may help:
pip install sktime
Note, however, that pip doesn't manage dependencies the same way conda does and can, potentially, damage one's installation.
If one is using pip on conda as per the previous note, and one gets
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\Users\johndoe\anaconda3\envs\Test\Lib\site-packages\~umpy\core\_multiarray_tests.cp310-win_amd64.pyd' Consider using the --user option or check the permissions.
Doing the following should solve the issue:
pip install sktime --user
I am trying to use tox to test a graphics package I am working on. One of its dependencies is pycairo, so when I set up my tox.ini file, I specify it under deps like so:
[testenv]
deps =
pycairo
...(some other packages)
and while my tests work fine on Windows, when I try testing the package on macOS, the run always fails with the following error when tox tries to pip-install pycairo:
pip3 install pycairo
Collecting pycairo
Using cached pycairo-1.20.1.tar.gz (344 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
Collecting pygame
Downloading pygame-2.0.1-cp39-cp39-macosx_10_9_intel.whl (6.9 MB)
Building wheels for collected packages: pycairo
Building wheel for pycairo (PEP 517): started
Building wheel for pycairo (PEP 517): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /Users/appveyor/projects/cpython-cmu-graphics-0l7rb/.tox/py39/bin/python /Users/appveyor/projects/cpython-cmu-graphics-0l7rb/.tox/py39/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/5s/g225f6nd6jl4g8tshbh1ltk40000gn/T/tmpnqn0c3o6
cwd: /private/var/folders/5s/g225f6nd6jl4g8tshbh1ltk40000gn/T/pip-install-1vu11s7g/pycairo_6159cae3f6b14ec3a8681d1238fa6919
Complete output (12 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.15-x86_64-3.9
creating build/lib.macosx-10.15-x86_64-3.9/cairo
copying cairo/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/cairo
copying cairo/__init__.pyi -> build/lib.macosx-10.15-x86_64-3.9/cairo
copying cairo/py.typed -> build/lib.macosx-10.15-x86_64-3.9/cairo
running build_ext
Requested 'cairo >= 1.15.10' but version of cairo is 1.12.14
Command '['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for pycairo
Failed to build pycairo
ERROR: Could not build wheels for pycairo which use PEP 517 and cannot be installed directly
I established that the main reason I'm getting this error is that no wheels or cairo binaries are provided for the pip installation of pycairo on macOS. (It's worth noting that I'm running my macOS tests via a remote VM.) As such, I tried to install cairo first using Homebrew like so:
brew install cairo
However, whenever I retry the tests, I still get the same error message. I read in another SO post that you should brew install pkg-config as well, so in addition to the brew installation above, I also did:
brew install pkg-config
And still ended up with the same error message when I retried the tests. Frustrated, I once again took to Stack Overflow and discovered that you can directly install pycairo (as well as its dependencies, like cairo) with one single brew install command:
brew install py3cairo
Now, whenever I SSH into the Mac VM, running the test files works, but because tox runs tests inside virtual environments, it can't access this version of pycairo.
Now, one nasty, probably-horrible-practice, brute-force solution I found was to print out the path of the pycairo directory using this small Python script:
import os
import cairo
print(os.path.dirname(cairo.__file__))
And then I cp'd that directory into a virtual environment and found that it actually allows you to run import cairo without getting an error.
cp -r <path>/cairo venv3.9/lib/python3.9/site-packages
Python 3.9.1 (default, Dec 26 2020, 00:12:24)
[Clang 12.0.0 (clang-1200.0.32.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cairo
>>>
However, not surprisingly, this doesn't seem to work with any other Python minor version that I'm testing, and I wouldn't be surprised if this breaks the library in other ways I haven't discovered yet. So that's not really an acceptable solution either.
What can I do to make my tests run properly? In my tests I just want to simulate an environment that already has all the package dependencies installed, but with pycairo it doesn't seem like there's a way for me to access the package.
I just need this to work in tox for testing purposes only. I don't anticipate anyone using our package inside of a virtual environment, so our users should just be able to install py3cairo via brew directly to their system in the worst case.
It looks like I need a way to install cairo and pkg-config such that pip inside a virtual environment can access those files and still build the Python bindings. But I'm also open to any other suggestions that would allow my tox tests to run. Does anyone have any thoughts on how to fix this?
Requested 'cairo >= 1.15.10' but version of cairo is 1.12.14
Your issue is not package discoverability but an out-of-date version. If the brew-installed version of cairo is newer than 1.15.10, then you might have a separate cairo installation lying around that gets preferred over the brew-installed one.
To reproduce the issue, I did the following:
brew install cairo
python -m venv cairo
source cairo/bin/activate
pip install pycairo
which worked as expected (Python 3.9.1, pip 20.2.3).
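A hedged way to see which cairo installation pkg-config is actually resolving, and to force Homebrew's copy to the front of the search path (the brew prefix varies by machine, so query it rather than hard-coding it):

```shell
# which cairo does pkg-config currently see, and from where?
pkg-config --modversion cairo
pkg-config --variable=prefix cairo

# if a stale prefix shows up, prepend Homebrew's .pc directory
export PKG_CONFIG_PATH="$(brew --prefix cairo)/lib/pkgconfig:$PKG_CONFIG_PATH"
pkg-config --modversion cairo   # should now report >= 1.15.10
```

Exporting PKG_CONFIG_PATH before invoking tox means the pip running inside the tox virtualenv inherits it, so pycairo's build finds the newer cairo.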
I want to install a package with this command: pip install git+https://github.com/BioSystemsUM/mewpy.git
It collects the package, but at the end it shows:
Installing collected packages: ruamel.yaml, pathos, matplotlib, boolean.py, jmetalpy, cobamp, mewpy
Attempting uninstall: ruamel.yaml
Found existing installation: ruamel-yaml 0.15.46
ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
I couldn't find a way to solve this issue and install the package. Any suggestion is very much appreciated.
I want to use gmpy2 with python 2.7 but when I try to import it I get:
>>> import gmpy2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: libmpc.so.3: cannot open shared object file: No such file or directory
I installed gmpy2 using pip (pip install --user gmpy2) and the install looks OK apart from saying
Could not find .egg-info directory in install record for gmpy2
but after that it says that the install was a success.
I have installed MPC (1.0.3), GMP (6.1.1) and MPFR (3.1.4) and they all work, by which I mean I can call gcc foo.c -lmpc and gcc bar.c -lmpfr and the code compiles and works as expected. I've also got gmpy working using pip install. I think the problem has to do with them not being installed in the default directories, as I don't have sudo rights.
The directory where libmpc.so.3 is located appears in the gcc call that pip prints; I've also set CPATH and CPPFLAGS to look in my_prefix/include and LDFLAGS to look in my_prefix/lib.
I don't really want to use the functionality from MPC so if there's a simple option to not install that part of gmpy2 I'd be happy with that.
I'm really confused: I've had pip fail to build a library before, gone away and installed the dependencies, and normally once a library builds under pip it works.
I maintain gmpy2 and there are a couple of command line options that can be passed to setup.py that may help. I can't test the pip syntax right now but here are some options:
--shared=/path/to/gmp,mpfr,mpc will configure gmpy2 to load the libraries from the specified directory.
--static or --static=/path/to/gmp,mpfr,mpc will create a statically linked version of gmpy2 if the proper libraries can be found.
You can also try a build using setup.py directly. It may produce better error messages. Again, untested command:
python setup.py build_ext --static=/path/to/gmp,mpfr,mpc should compile a standalone, statically linked gmpy2.so, which will need to be moved to the appropriate location.
Update
I've been able to test the options to pip.
If you are trying to use versions of GMP, MPFR, and MPC that are not those provided by the Linux distribution, you will need to specify the location of the new files to the underlying setup.py that is called by pip. For example, I have updated versions installed locally in /home/case/local. The following command will configure gmpy2 to use those versions:
pip install --install-option="--shared=/home/case/local" --user gmpy2
To compile a statically linked version (for example, to simplify distribution to other systems in a cluster), you should use the following:
pip install --install-option="--static=/home/case/local" --user gmpy2
setup.py will use the specified base directory to configure the correct INCLUDE path (/home/case/local/include) and runtime library path (/home/case/local/lib).
Try the following, as the issue might be fixed in a newer version:
pip install --upgrade setuptools pip
pip uninstall gmpy2
pip install gmpy2