I want to publish the documentation of my project https://bitbucket.org/oaltun/opn in readthedocs.org.
The build fails. There are different errors shown in the log https://readthedocs.org/builds/opn/2247789/ , but the first is "no module named sip".
sip is needed by pyqt, which is needed by the project.
Normally in this kind of situation, as far as I understand, you would add the missing package to your setup.py and check the readthedocs.org option to create a virtualenv. I do check that box, but I cannot add sip or PyQt to setup.py.
The problem is that pyqt and sip do not use setuptools, so they cannot be installed by pip, and therefore cannot be listed in setup.py (this fails even on my local machine).
In my local environment I install pyqt with (ana)conda, but I believe readthedocs.org uses pip to install dependencies.
So, how can I have my virtualenv include sip?
The trick is to mock these modules (for example, in your Sphinx conf.py):
import sys
import mock

MOCK_MODULES = ['sip', 'PyQt4', 'PyQt4.QtGui']
sys.modules.update((mod_name, mock.MagicMock()) for mod_name in MOCK_MODULES)
Note that you must also mock the root package 'PyQt4', or you will get an ImportError.
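As a quick sanity check, the same mocking pattern can be exercised outside of Sphinx; module names match the snippet above, and unittest.mock stands in for the standalone mock package:

```python
import sys
from unittest import mock  # the standalone "mock" package behaves the same

# Register MagicMock stand-ins for every module the docs build would import.
MOCK_MODULES = ['sip', 'PyQt4', 'PyQt4.QtGui']
sys.modules.update((mod_name, mock.MagicMock()) for mod_name in MOCK_MODULES)

# These imports now succeed even though the real packages are absent.
import sip
from PyQt4 import QtGui
```

Any attribute access on the mocked modules returns another MagicMock, which is enough for autodoc to import your code.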
I am creating a module, henceforth called mymodule, which I distribute using a pyproject.toml. This file contains a version number. I would like to write this version number in the logfile of mymodule. In mymodule I use the following snippet (in __init__.py) to obtain the version:
import importlib.metadata
__version__ = importlib.metadata.version(__package__)
del importlib.metadata
However, this version is wrong: it appears to be the highest version that I have ever installed. For reference, the command python3 -m pip show mypackage does show the correct version after installing the module locally. I struggle to explain this difference. Can anyone think of a cause of this discrepancy?
I also ran importlib.metadata.version("mypackage"), which returned the same incorrect version.
The problem was caused by leftover build artifacts from using setup.py. importlib and pkg_resources will detect these artifacts in a local installation, while pip will not. Deleting the mypackage.egg-info directory fixed the issue.
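To diagnose this kind of mismatch, you can enumerate every metadata directory that importlib.metadata actually sees; a stale *.egg-info shows up as a duplicate entry (mypackage is the placeholder name from the question):

```python
import importlib.metadata

# List every installed distribution importlib.metadata can find.
# A leftover mypackage.egg-info appears here alongside the real install.
for dist in importlib.metadata.distributions():
    name = dist.metadata["Name"]
    if name and name.lower() == "mypackage":  # placeholder name
        print(name, dist.version, dist.locate_file(""))
```

If the loop prints more than one line for the same name, one of the paths is a stale metadata directory that should be deleted.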
I'm working on an R package that uses reticulate to call some functions of a Python package I implemented, installable through pip.
Following its documentation, I set up reticulate's automatic configuration of Python dependencies as follows, in the DESCRIPTION file of my package:

Config/reticulate:
  list(
    packages = list(
      list(package = "my_python_package", pip = TRUE)
    )
  )
where my_python_package is the Python package I need to use.
If I install the package locally, where I have the required Python package already installed, everything works fine.
However, if I try to install and use the R package in an environment without the Python package already installed, I get the following error:
Error in py_module_import(module, convert = convert) :
ModuleNotFoundError: No module named 'my_python_package'
Detailed traceback:
File "/home/runner/work/_temp/Library/reticulate/python/rpytools/loader.py", line 39, in _import_hook
module = _import(
as if reticulate were not able to configure the environment correctly.
Also, the Python package itself should not be the problem: when it is installed, I am able to import it and use its functions with no errors.
From the reticulate documentation it seems Config/reticulate: ... is all that is needed, but maybe I am missing something.
I noticed this issue https://github.com/rstudio/reticulate/issues/997 on the reticulate GitHub repository. Apparently, the automatic configuration through Config/reticulate only works if a conda environment is already loaded.
Therefore, I think the only way to configure the correct environment is in the .onLoad function:
check if Anaconda is installed, otherwise launch the Miniconda installation through reticulate::install_miniconda(),
check if the environment is already present, otherwise create it through reticulate::conda_create(envname),
install the Python dependencies if necessary through reticulate::conda_install(envname, packages),
load the configured environment.
In this way, the first time the package is loaded, the environment will be correctly created.
After that, the environment will be automatically loaded.
I have written a utility library in Python that works with the Qt framework. My code is pure Python and is compatible with both PyQt5 and PySide2. My main module can either be run on its own from the command line with python -m or be imported into another project. Is there a clean way to specify that the project needs either PyQt5 or PySide2 in a wheel distribution?
Here is what I have found in my research but I am asking in case there is a better way to package the project than these options:
I could add logic to setup.py in a source distribution of the project to check for PyQt5 and PySide2. However, wheels are the recommended way to distribute Python projects, and from what I can tell this kind of install-time logic is not possible with wheels. Alternatively, I could specify neither PySide2 nor PyQt5 as a dependency and recommend in the install instructions that one of them be installed alongside my project.
Use extras_require:
setup(
    …
    extras_require={
        'pyqt5': ['PyQt5'],
        'pyside2': ['PySide2'],
    },
)
and teach your users to run either
pip install 'yourpackage[pyqt5]'
or
pip install 'yourpackage[pyside2]'
If you don't want to make either a strict requirement (which makes sense), I'd just throw a runtime error if neither is available.
For example
try:
    import PyQt5 as some_common_name
except ImportError:
    try:
        import PySide2 as some_common_name
    except ImportError:
        raise ImportError('Please install either PyQt5 or PySide2') from None
My particular case is somewhat niche (so I am not accepting this as the answer). I realized the package was really doing two things: acting as a library and as a command line tool. I decided to split it into two packages: package and package-cli. package does not explicitly depend on PyQt5 or PySide2 but specifies that one of them must be installed in the documentation. Since package is a library, it is intended to be integrated into another project where it is easy to list package and PyQt5 together in the requirements.txt. For package-cli, I just choose one of PyQt5 or PySide2 to be the explicit dependency. package-cli depends on package and PyQt5 and just adds a console_script to call the main module in package.
Some Python packages require one of two packages as a dependency. For example, Ghost.py requires either PySide or PyQt4.
Is it possible to include such a dependency in a requirements.txt file? Is there any 'or' operator that works with these files?
If not, what can I do to add these requirements to the file so only one of them will be installed?
Currently, neither pip's requirements.txt nor setuptools directly allows such a construction. Both require you to specify a list of requirements. You can restrict the version of a requirement, but that's all.
Inside Python, you can handle this situation as follows:
try:
    import dependency1
    def do_it(x):
        return dependency1.some_function(x)
except ImportError:
    try:
        import dependency2
        def do_it(x):
            return dependency2.another_function(x)
    except ImportError:
        raise ImportError('You must install either dependency1 or '
                          'dependency2!')
Now do_it uses either dependency1.some_function or dependency2.another_function, depending on which is available.
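The same availability check can be written without importing either module, using importlib.util.find_spec; dependency1 and dependency2 are the placeholder names from the answer above:

```python
import importlib.util

# Detect which optional backend is importable, without importing it yet.
if importlib.util.find_spec("dependency1") is not None:
    backend = "dependency1"
elif importlib.util.find_spec("dependency2") is not None:
    backend = "dependency2"
else:
    backend = None  # neither is installed; raising ImportError here mirrors do_it
```

Deferring the actual import keeps module load cheap and avoids triggering either backend's import-time side effects.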
That will still leave you with the problem of how to specify your requirements. I see two options:
Don't formally specify the requirement in requirements.txt or setup.py but document that the user needs to install one of the dependencies. This approach might be OK if the setup of your software requires additional manual steps anyway (i.e. more than just pip install my_tool).
Hard-code your preferred requirement in requirements.txt or setup.py.
In the end, you have to ask yourself why people might want to use one dependency over the other: I usually couldn't care less about the dependencies of the libraries that I use, because (disk) space is cheap and (due to virtualenv) there is little risk of incompatibilities. I'd therefore even suggest you think about not supporting two different dependencies for the same functionality.
I would use a small Python script to accomplish this:

#!/usr/bin/env python
packages = 'p1 p2 p3'.split()
try:
    import optional1
except ImportError:  # optional1 not installed
    try:
        import optional2
    except ImportError:  # optional2 not installed either
        packages.append('optional2')
print(' '.join(packages))
Make this script executable with
chmod +x requirements.py
And finally run pip with it like this:
pip install $(./requirements.py)
The $(./requirements.py) will execute the requirements.py script and pass its output (in this case, a list of packages) to pip install.
For setuptools, you can change the setup code to look similar to here:
https://github.com/albumentations-team/albumentations/blob/master/setup.py#L11
There, dependency1 is installed only if neither dependency1 nor dependency2 is installed yet; if either of them is already present on the system, nothing is added.
The caveat is that it doesn't work with wheels, and you need to install with --no-binary to make it work: https://albumentations.ai/docs/getting_started/installation/#note-on-opencv-dependencies
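The linked setup.py boils down to roughly this pattern (a sketch, not the actual albumentations code; choose_requirement and the dependency names are illustrative):

```python
from importlib.util import find_spec

def choose_requirement(primary, secondary):
    """Require `primary` only if neither package is installed yet;
    if either is already importable, add no new requirement."""
    if find_spec(primary) is not None or find_spec(secondary) is not None:
        return []
    return [primary]

# In setup(): install_requires = [...] + choose_requirement("dependency1", "dependency2")
install_requires = choose_requirement("dependency1", "dependency2")
```

Because the check runs at build time in setup.py, it only takes effect for source installs, which is exactly the wheel caveat mentioned above.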
Assuming that you already have pip or easy_install on your Python distribution, I would like to know how I can install a required package into the user directory from within the script itself.
From what I know, pip is also a Python module, so the solution should look like:
try:
    import zumba
except ImportError:
    import pip
    # ... do "pip install --user zumba" or throw an exception <-- how?
    import zumba
What I am missing is doing "pip install --user zumba" from inside python, I don't want to do it using os.system() as this may create other problems.
I assume it is possible...
Updated for newer pip version (>= 10.0):
try:
    import zumba
except ImportError:
    from pip._internal import main as pip
    pip(['install', '--user', 'zumba'])
    import zumba
Thanks to #Joop I was able to come-up with the proper answer.
try:
    import zumba
except ImportError:
    import pip
    pip.main(['install', '--user', 'zumba'])
    import zumba
One important remark is that this will work without requiring root access as it will install the module in user directory.
Not sure if it will work for binary modules or ones that would require compilation, but it clearly works well for pure-python modules.
Now you can write self contained scripts and not worry about dependencies.
As of pip version >= 10.0.0, the above solutions will not work because of internal package restructuring. The new way to use pip inside a script is now as follows:
try:
    import abc
except ImportError:
    from pip._internal import main as pip
    pip(['install', '--user', 'abc'])
    import abc
I wanted to note that the currently accepted answer could result in an application name collision: importing from the app namespace doesn't give you the full picture of what's installed on the system.
A better way would be:
import pip

packages = [package.project_name for package in pip.get_installed_distributions()]
if 'package' not in packages:
    pip.main(['install', 'package'])
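pip.get_installed_distributions has since been removed from pip; a present-day version of the same system-wide check uses importlib.metadata ("package" remains a placeholder name):

```python
import importlib.metadata

def is_installed(dist_name):
    """Return True if a distribution with this name is installed,
    without importing it (so no namespace collision is possible)."""
    try:
        importlib.metadata.version(dist_name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False

if not is_installed("package"):  # placeholder name
    pass  # install it here, e.g. by running pip in a subprocess
```

This queries distribution metadata rather than the import namespace, so it sees exactly what pip installed.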
Do not use pip.main or pip._internal.main.
Quoting directly from the official documentation (boldface emphasis and editing comments mine, italics theirs):
As noted previously, pip is a command line program. While it is... available from your Python code via import pip, you must not use pip’s internal APIs in this way. There are a number of reasons for this:
The pip code assumes that [it] is in sole control of the global state of the program. pip manages things like... without considering the possibility that user code might be affected.
pip’s code is not thread safe. If you were to run pip in a thread, there is no guarantee that either your code or pip’s would work as you expect.
pip assumes that once it has finished its work, the process will terminate... calling pip twice in the same process is likely to have issues.
This does not mean that the pip developers are opposed in principle to the idea that pip could be used as a library - it’s just that this isn’t how it was written, and it would be a lot of work to redesign the internals for use as a library [with a] stable API... And we simply don’t currently have the resources....
...[E]verything inside of pip is considered an implementation detail. Even the fact that the import name is pip is subject to change without notice. While we do try not to break things as much as possible, all the internal APIs can change at any time, for any reason....
...[I]nstalling packages into sys.path in a running Python process is something that should only be done with care. The import system caches certain data, and installing new packages while a program is running may not always behave as expected....
Having said all of the above[:] The most reliable approach, and the one that is fully supported, is to run pip in a subprocess. This is easily done using the standard subprocess module:
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'my_package'])
It goes on to describe other more appropriate tools and approaches for related problems.