I'm using xmlsec 1.3.3 in my python web application.
Every time I run a clean pip install, this is the package it hangs on for about 5 minutes.
The package is only 15KB and pip shows a "Using cached..." message, so I guess the time is taken by building some specific security libraries.
Is there a way to do a clean pip install, but without rebuilding the xmlsec related libraries?
xmlsec is distributed as source code only, and since it is written in C, pip needs to compile it on every fresh installation. It is not possible to skip the compilation.
You can, however, pre-compile it yourself: if you always deploy to one specific platform, build a wheel once and install from your own package instead of PyPI.
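A minimal sketch of that workflow, assuming all your machines share the same platform and Python version (the wheels directory name is just an example):
# build the wheel once, on a machine matching your deployment platform
pip wheel xmlsec==1.3.3 -w ./wheels
# on every clean install, reuse the prebuilt wheel instead of compiling
pip install --no-index --find-links=./wheels xmlsec
The same --find-links flag also works together with a requirements.txt, so the rest of your dependencies can still come from PyPI if you drop --no-index.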
I'm trying to install Lazypredict, an AutoML Python library, on macOS 10.14.6.
So I simply run
"pip install lazypredict" in the terminal. It wants to install lightgbm but always fails to do so. I think it tries to install lightgbm through pip even though I have already installed it with brew (as recommended).
Hence I get the errors below and really don't know what to do. I already have CMake installed, too.
Do you have any ideas of what could enable me to install lazypredict?
PS: The same happened with other AutoML packages such as PyCaret.
The errors I get are the following:
ERROR: Failed building wheel for lightgbm
Running setup.py clean for lightgbm
Failed to build lightgbm
Installing collected packages: lightgbm
Running setup.py install for lightgbm ... error
ERROR: Command errored out with exit status 1:
Exception: Please install CMake and all required dependencies first
SOLUTION 1 (Recommended)
LazyPredict works only with specific versions of other libraries, so I recommend working in Google Colab or a Kaggle Notebook. They provide a separate environment with many library versions already available; when you install lazypredict on Colab or Kaggle you won't face any issues.
One more reason to use lazypredict on Colab or Kaggle is that this library is just for testing purposes: you cannot use it for deployment, and it won't work on large datasets.
SOLUTION 2
As I mentioned, lazypredict depends on specific versions of other libraries. You can search on Google and find a list of those libraries along with their versions, then try to install those versions manually, but that is time-consuming. The other way is to install a fresh Python version, create a virtual environment with it, and then try to install lazypredict in it, as sketched below.
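For example, a minimal virtual-environment setup (the environment name lazyenv is arbitrary, and the activate path assumes macOS/Linux):
# create and activate an isolated environment, then install into it
python3 -m venv lazyenv
source lazyenv/bin/activate
pip install lazypredict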
SOLUTION 3 (only if you know Docker)
If you are familiar with Docker, you can start from an official Python image, a fresh installation that does not contain any extra libraries, and install lazypredict inside it.
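A quick way to try that, assuming Docker is installed (the image tag python:3.9 is just an example):
docker run -it python:3.9 bash
# then, inside the container:
pip install lazypredict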
I was just preparing to make a voice assistant when an error occurred while I was installing the ecapture module in Python. I used pip for the installation, and the error is shown below.
Failed to build scikit-image
ERROR: Could not build wheels for scikit-image, which is required to install pyproject.toml-based projects
I have tried to install it from PyPI. I have even tried restarting my computer, reinstalling Python, etc., but it just doesn't work.
Note: only use this answer if you trust binaries built by Christoph Gohlke, who maintains an excellent index of binaries here: https://www.lfd.uci.edu/~gohlke/pythonlibs/
You can either grab the needed packages from there manually, or use this package (which I wrote, full disclosure):
pip install gohlkegrabber
ggrab . scikit-image
pip install scikit_image-0.19.0-cp310-cp310-win_amd64.whl
pip install ecapture
Note that the package you were lacking is scikit-image - you may be able to find binaries elsewhere as well, the site above is only provided as a suggestion. Again, only use if you trust the author.
Also note that the package was called scikit_image-0.19.0-cp310-cp310-win_amd64.whl for me, as I'm on Python 3.10 on 64-bit Windows. Yours may have a different name (if available), but the ggrab command will tell you.
Finally, note that 0.19.0 just happens to be the most recent build on that site; the site is not guaranteed to carry the latest release, or a build for your OS and version of Python.
Whenever I install a new package (without using the --skip-lock option), pipenv downloads and (in the case of non-binary dependencies) compiles all the packages from scratch, even though the wheels are already cached in ~/.cache/pipenv. This makes the whole development process slow, since I have a lot of packages that need to be compiled from source.
Currently I download and compile my packages using pip, use pypi-server to run a local package server, and point my pipenv to it (using [[source]]). But I'm wondering if there is a better way.
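For reference, the [[source]] entry mentioned above looks roughly like this in the Pipfile (the URL and name are placeholders for a local pypi-server instance):
[[source]]
url = "http://localhost:8080/simple"
verify_ssl = false
name = "local"
With that in place, a package pinned with index = "local" is fetched as a pre-built wheel from the local server instead of being compiled again.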
The following takes place in a Python 3 virtual environment.
I just authored a little package that requires numpy. So, in setup.py, I wrote install_requires=['numpy']. I ran python3 setup.py install, and it took something like two minutes -- I got the full screen dump of logs, warnings, and configurations that normally comes with a numpy installation.
Then, I created a new virtual environment, and this time simply wrote pip3 install numpy -- which took only a few seconds -- and then ran python3 setup.py install, and I was done almost immediately.
What's the difference between the two, and why was pip3 install numpy so much faster? Should I thus include a requirements.txt just so people can pip-install the requirements rather than using setuptools?
Note that when I wrote pip3 install numpy, I got the following:
Collecting numpy
Using cached numpy-1.12.0-cp36-cp36m-manylinux1_x86_64.whl
Installing collected packages: numpy
Successfully installed numpy-1.12.0
Is it possible that this was so much faster because the numpy wheel was already cached?
pip install uses wheel packages, which were designed partly with the purpose of speeding up the installation process.
The Rationale section of PEP 427, which introduced the wheel format, states:
Python needs a package format that is easier to install than sdist. Python's sdist packages are defined by and require the distutils and setuptools build systems, running arbitrary code to build-and-install, and re-compile, code just so it can be installed into a new virtualenv. This system of conflating build-install is slow, hard to maintain, and hinders innovation in both build systems and installers.
Wheel attempts to remedy these problems by providing a simpler interface between the build system and the installer. The wheel binary package format frees installers from having to know about the build system, saves time by amortizing compile time over many installations, and removes the need to install a build system in the target environment.
Installing from a wheel is faster since it is a Built Distribution format:
Built Distribution
A Distribution format containing files and metadata that only need to be moved to the correct location on the target system, to be installed. Wheel is such a format, whereas distutil's Source Distribution is not, in that it requires a build step before it can be installed. This format does not imply that python files have to be precompiled (Wheel intentionally does not include compiled python files).
Since numpy's source distribution contains a significant amount of C code, compiling it takes noticeable time, which is what you observed when you installed it via bare setuptools. pip avoided compiling the C code because the wheel already contained binaries compiled for your system.
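You can see the difference directly by forcing each install mode with pip's --no-binary and --only-binary options; this sketch just contrasts the two paths:
# force a build from the source distribution (slow, like setup.py install)
pip install --no-binary :all: numpy
# require the prebuilt wheel (fast; pip's default when a wheel is available)
pip install --only-binary :all: numpy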
The installation information page of PyCryptodome says the following under the "Windows (pre-compiled)" section:
Install PyCryptodome as a wheel:
pip install pycryptodomex
To make sure everything works fine, run the test suite:
python -m Cryptodome.SelfTest
There are several problems with this though:
Contrary to what these instructions say, this will not install PyCryptodome as a wheel; rather, it will download the package and try to build it, resulting in an error if you don't have the correct build environment installed for the C components included in this package (and avoiding that entire mess is the biggest benefit of using a wheel in the first place).
Even if I instead download the correct wheel file from PyCryptodome's PyPI page, I must (as far as I know?) use a command line like the following to install it:
pip install c:\some\path\name-of-wheel-file.whl
This in turn installs it under the default "Crypto" package instead of the "Cryptodome" package explicitly mentioned in the instructions (and it therefore collides, in a breaking fashion, with any pre-existing installation of the PyCrypto package).
So, my question is:
Is there any way to install a wheel file under a different package name than the default one?
PyCryptodome does not seem to provide any specific wheel files for installing under this alternative package name, so if this is impossible, I have a big problem (because I already have PyCrypto installed). :-(
PS.
Some more context regarding the need for the alternative package name can be provided by the following quote from the same installation page that is linked above:
PyCryptodome can be used as:
1. a drop-in replacement for the old PyCrypto library. You install it with:
pip install pycryptodome
In this case, all modules are installed under the Crypto package. You can test everything is right with:
python -m Crypto.SelfTest
One must avoid having both PyCrypto and PyCryptodome installed at the same time, as they will interfere with each other.
This option is therefore recommended only when you are sure that the whole application is deployed in a virtualenv.
2. a library independent of the old PyCrypto. You install it with:
pip install pycryptodomex
You can test everything is right with:
python -m Cryptodome.SelfTest
In this case, all modules are installed under the Cryptodome package. PyCrypto and PyCryptodome can coexist.
So, again, all I want is to install it as described under alternative 2 in this quote, from a wheel file, but the problem is that the provided wheel files seem to only default to the package name described under alternative 1 in this quote (i.e. "Crypto").
As far as I know, this is not possible. The only way to achieve it is to rebuild the wheel yourself after modifying the package name in setup.py.
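A rough sketch of that rebuild, assuming the project's setup.py lets you change the top-level package name (the exact edit depends on how the project defines its packages, so treat the commented step as hypothetical):
# fetch the source distribution instead of the wheel
pip download --no-binary :all: pycryptodome
tar xzf pycryptodome-*.tar.gz
cd pycryptodome-*/
# (hypothetical step) edit setup.py so the top-level package is Cryptodome, not Crypto
pip wheel . -w dist/
pip install dist/*.whl
In practice, installing the separate pycryptodomex distribution (as quoted above) is the supported way to get the Cryptodome package name without rebuilding anything.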