Sphinx extensions on readthedocs - python

Recently I started using the sphinx_autodoc_typehints and sphinx_autodoc_defaultargs extensions for Sphinx via my project's conf.py. It seems they are not among the default packages in the Sphinx installation on readthedocs (over there Sphinx is at v1.8.5), because my build fails with this extension error:
Could not import extension sphinx_autodoc_typehints (exception: No module named
'sphinx_autodoc_typehints')
I understand I somehow have to tell readthedocs to get sphinx_autodoc_typehints (and later sphinx_autodoc_defaultargs as well) from PyPI. Or is there a way I can install packages on readthedocs myself?
Since I use pbr for package management I use a requirements.txt that readthedocs knows of. I don't want to specify the sphinx extensions there because every user of my package would have to install them. Is there no other way of telling readthedocs which extensions to use?
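For reference, the extensions are enabled in conf.py roughly like this (a sketch; sphinx.ext.autodoc is assumed as the companion extension):
extensions = [
    "sphinx.ext.autodoc",
    "sphinx_autodoc_typehints",
    "sphinx_autodoc_defaultargs",
]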

Following the comment of Steve Piercy, I found a way to have two requirements.txt files. readthedocs' advanced settings (on the website) allow for only one requirements.txt.
readthedocs' preferred way is a .readthedocs.yaml file, which needs to live in the root folder. Following https://docs.readthedocs.io/en/stable/config-file/v2.html, this is how the file now looks:
version: 2
sphinx:
  configuration: docs/conf.py
python:
  version: 3.7
  install:
    - requirements: requirements.txt
    - requirements: docs/requirements.txt
and the docs/requirements.txt looks like this:
sphinx==3.4.3
sphinx_autodoc_typehints==1.12.0
sphinx_autodoc_defaultargs==0.1.2
In the advanced settings page I had to manually set the location of Sphinx's conf.py, although it is in a standard location. Without this setting my build would still fail.

How to use a local flake8 plugin with a python virtual env?

I'm trying to integrate a project with a flake8 plugin I wrote as a local plugin (not a PyPI package, for example), as explained here. The project uses a virtual env, both locally and in a GitHub workflow. Since flake8 is invoked from within the virtual env, it can't find the plugin, which resides in a folder under the project root. When I manually add the plugin code to the virtual env folder, it integrates nicely and flake8 is able to find and execute it.
The solution smells like some kind of GitHub pre-commit config/hook, but I can't find any reference in the docs for this use case. Currently flake8 is configured in the pre-commit config like so:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.2.0
    hooks:
      - id: debug-statements
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
        additional_dependencies: ['dlint']
Is there a way to use my non-packaged flake8 plugin with a virtual env locally / in a github workflow?
you'll want to configure paths to make sure that flake8 puts the appropriate directories on sys.path to discover your plugin
this is mentioned in the docs you linked, just a little bit further down:
However, if you are working on a project that isn’t set up as an installable package, or Flake8 doesn’t run from the same virtualenv your code runs in, you may need to tell Flake8 where to import your local plugins from. You can do this via the paths option in the local-plugins section of your config:
[flake8:local-plugins]
extension =
    MC1 = myflake8plugin:MyChecker1
paths =
    ./path/to
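for context, a minimal local plugin module matching the myflake8plugin:MyChecker1 entry could look something like this (a sketch: the module and class names come from the config example above; the checker logic itself is purely illustrative):
# myflake8plugin.py -- illustrative local flake8 plugin
import ast


class MyChecker1:
    name = 'myflake8plugin'
    version = '0.1.0'

    def __init__(self, tree):
        self.tree = tree

    def run(self):
        # flag every `print` call, just to demonstrate the plugin interface
        for node in ast.walk(self.tree):
            if (
                    isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id == 'print'
            ):
                yield (
                    node.lineno,
                    node.col_offset,
                    'MC101 print call found',
                    type(self),
                )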
note that if your local plugin has dependencies of its own, those will also need to be listed in additional_dependencies in your pre-commit configuration
disclaimer: I'm the current flake8 maintainer, and I created pre-commit
Turns out the paths property was configured wrong and should have been
paths =
    . flake8_plugins/
instead of
paths =
    .flake8_plugins/

ReadTheDocs + Sphinx + setuptools_scm: how to?

I have a project where I manage the version through git tags.
Then, I use setuptools_scm to get this information in my setup.py; it also generates a file (_version.py) that gets included when generating the wheel for pip.
This file is not tracked by git since:
it has the same information that can be gathered by git
it would create a circular situation: building the wheel would modify the version, which changes the sources, and a new version would be generated
Now, when I build the documentation, it becomes natural to fetch this version from _version.py and this all works well locally.
However, when I try to do this within ReadTheDocs, the building of the documentation fails because _version.py is not tracked by git, so ReadTheDocs does not find it when fetching the sources from the repository.
EDIT: I have tried to use the method proposed in the duplicate, which is the same as what setuptools_scm indicates in its documentation, i.e. using in docs/conf.py:
from pkg_resources import get_distribution
__version__ = get_distribution('numeral').version
... # I use __version__ to define Sphinx variables
but I get:
pkg_resources.DistributionNotFound: The 'numeral' distribution was not found and is required by the application
(Again, building the documentation locally works correctly.)
How could I solve this issue without resorting to maintaining the version number in two places?
Eventually the issue was that ReadTheDocs did not have the option to install my package enabled by default, while I was expecting this to happen.
All I had to do was to enable "Install Project" in the Advanced Settings / Default Settings.
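With the project installed in the build environment, the version lookup in docs/conf.py resolves. For completeness, here is a sketch of wiring the distribution version into Sphinx's own version/release settings (the X.Y split is just a common convention, not required):
# docs/conf.py (excerpt)
from pkg_resources import get_distribution

release = get_distribution('numeral').version  # full version string
version = '.'.join(release.split('.')[:2])     # short X.Y version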

Is there a way to rename a python package upon installation?

The Problem
I am working on a project that uses a package in beta with multiple versions (package name: psychxr). After some confusing error messages about missing modules, I have discovered that depending on where I source my installation from I get different package contents.
If I use pip to install psychxr, I get an ovr sub-package. However, if I install from source (via the official GitHub repository), I get a libovr sub-package. Is there a way I can rename the source package such that I can get both modules? Alternatively, is there a better way to go about this? Although the packages accomplish roughly the same task, their implementations are noticeably different, and I'd like access to both.
CMD output of python -c "help('psychxr')":
Version 1 (OVR)
NAME
    psychxr
PACKAGE CONTENTS
    ovr (package)
VERSION
    0.1.4
Version 2 (LIBOVR)
NAME
    psychxr
PACKAGE CONTENTS
    libovr (package)
VERSION
    0.2.0
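In the meantime I can cope with whichever sub-package is present using a defensive import (a sketch; the ovr_api alias is mine, and this only gives access to one implementation at a time, not both):
try:
    from psychxr import libovr as ovr_api  # source install, v0.2.0
except ImportError:
    from psychxr import ovr as ovr_api     # pip install, v0.1.4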
Post Script: I do apologize for any misuse of terminology or illegibility. I'm fairly new to both Python and cmd in Windows.

packaging python application for linux

I have made a GUI application using Python and PyQt5. I want to package this app, but there doesn't seem to be a straightforward way to do this. Moreover, what I have found answers for is how to package a Python module, not an application. I have read various articles and the official docs but still don't have a proper answer; though there are several workarounds through which I could achieve the same, I just want to know the standard way.
This is my directory structure :
Moodly/
    Moodly/
        __init__.py
        controller.py
        logic.py
        models.py
        view.py
        resoure.py
        style.py
        sounds/
            notify.wav
            message.wav
    setup.py
    MANIFEST.in
    setup.cfg
    run.py
    moodly.png
    Moodly.desktop
What do I want to achieve: The user is given a tar file of Moodly. The user extracts it and runs the command
python setup.py install
in the terminal; the setup places all the files in the proper places and creates a Moodly.desktop file, probably in /usr/local/share/applications, clicking on which the user can run the app.
My way of achieving this:
setup.py
from setuptools import setup

setup(
    name="Moodly",
    version="1.0",
    author="Akshay Agarwal",
    author_email="agarwal.akshay.akshay8@gmail.com",
    packages=["Moodly"],
    include_package_data=True,
    url="http://github.com/AkshayAgarwal007/Moodly",
    entry_points={
        'gui_scripts': [
            'moodly = Moodly.controller:main',
        ],
    },
    # license="LICENSE.txt",
    description="Student Intimation system",
    # long_description=open("README.txt").read(),
    # Dependent packages (distributions)
)
MANIFEST.in
include Moodly/sounds/notify.wav
include Moodly/sounds/message.wav
Now with no setup.cfg I run the command:
python setup.py install
This successfully installs Moodly to /usr/lib/python-3.4/site-packages along with the sounds directory. And now, from the terminal, when I type in moodly (as specified in entry_points in setup.py), my GUI application launches successfully.
Now I just need the setup to create the Moodly.desktop along with moodly.png in /usr/local/share/applications, which I am trying to achieve through this:
setup.cfg
[install]
install_data=/usr/local/share/applications
Adding this to setup.py
data_files=[("Moodly", ["moodly.png", "Moodly.desktop"])],
But this somehow seems to copy the files into python-3.4/site-packages/Moodly rather than to the specified destination, though it used to work well with distutils.
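For reference, the conventional data_files form uses paths relative to the installation prefix, something like this sketch (whether files land under /usr or /usr/local depends on the install prefix, and egg- or wheel-based installs may not honor data_files at all):
data_files=[
    ("share/applications", ["Moodly.desktop"]),
    ("share/pixmaps", ["moodly.png"]),
],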
This guy also seems to have faced the same issue
Some other links I have used:
python-packaging
starting with distutils
So, of the way I am trying to do it, how much is correct, and what is the standard way to do it? How can I place that Moodly.desktop in the right place, or what would be a better alternative way to do the entire process?
Moreover, would using PyInstaller be a better idea? PyInstaller would bundle the app with PyQt5, requests and beautifulsoup4 (external modules that I have used), which I don't want. I want to use the install_requires option provided by setuptools and not unnecessarily make the user download modules they might already have.
The .desktop file isn't supposed to be installed using Distutils. Distutils is only concerned with installing Python packages.
To install .desktop files, icons and other files incidental to distribution level packaging, you should look at build automation systems, such as CMake.
The first step in this process is to get CMake to build a Python project. You should take a look here for how to do that: https://bloerg.net/2012/11/10/cmake-and-distutils.html
Beyond that, installing .desktop files is easy. Assuming you've written a .desktop file and put it somewhere, installing it is a matter of doing:
install(PROGRAMS com.akshay.moodly.desktop DESTINATION ${XDG_APPS_INSTALL_DIR})
in your CMakeLists.txt file.
Note that you install the .desktop file to ${XDG_APPS_INSTALL_DIR} (that's a CMake variable), not a hardcoded path like /usr/local/share/applications or something. The user (and pretty much every automated distro package builder) will always install your package to a temporary path and then copy files over into their packages. Never assume that your app will live in /usr/bin or /usr/local/bin or whatever. The user could install things into /opt/Moodly or even $HOME/Moodly.

How to make setup.py install a different module

I would like to use distutils (setup.py) to install a Python package from a local repository, which requires another package from a different local repository. Since I am lacking decent documentation of the setup command (I only found some examples here and here, and was confused by the setup terms extras_require, install_requires and dependency_links found here and here), does anyone have a complete setup.py file that shows how this can be handled, i.e. how distutils can handle the installation of a package found in some SVN repository, when the main package I am installing right now requires it?
More detailed explanation: I have two local svn (or git) repositories, basicmodule and extendedmodule. Now I check out extendedmodule and run python setup.py install. This setup.py file knows that extendedmodule requires basicmodule, and automatically downloads and installs it from the repository (in case it is not installed yet). How can I solve this with setup.py? Or maybe there is another, better way to do this?
EDIT: Followup question
Based on the answer by Tom I have tried to use a setup.py as follows:
from setuptools import setup

setup(
    name="extralibs",
    version="0.0.2",
    description="Some extra libs.",
    packages=['extralib'],
    install_requires="basiclib==1.9dev-r1234",
    dependency_links=["https://source.company.xy/svn/MainDir/SVNDir/basiclib/trunk@20479#egg=basiclib-1.9dev-r1234"]
)
When trying to install this as a normal user I get the following error:
error: Can't download https://source.company.xy/svn/MainDir/SVNDir/basiclib/trunk@20479: 401 Authorization Required
But when I do a normal svn checkout with the exact same link it works:
svn co https://source.company.xy/svn/MainDir/SVNDir/basiclib/trunk@20479
Any suggestion how to solve this without changing ANY configuration of the svn repository?
I think the problem is that your svn client is authenticated (caching the realm somewhere in the ~/.subversion directory), which your distutils HTTP client doesn't know how to do.
Distutils supports the svn+http link type in dependency links, so you may try adding "svn+" before your dependency link, providing a username and password:
dependency_links = [
    "svn+https://user:password@source.company.xy/svn/MainDir/SVNDir/basiclib/trunk@20479#egg=basiclib-1.9dev-r1234"
]
For security reasons you should not put your username and password in your setup.py file. One way to do that is fetching the authentication information from an environment variable, or even trying to fetch it from your subversion configuration directory (~/.subversion).
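A minimal sketch of the environment variable approach (the SVN_USER / SVN_PASSWORD names are hypothetical; export them before running setup.py):
from setuptools import setup
import os

# hypothetical variable names -- export SVN_USER and SVN_PASSWORD before installing
svn_user = os.environ["SVN_USER"]
svn_password = os.environ["SVN_PASSWORD"]

svn_link = (
    "svn+https://%s:%s@source.company.xy/svn/MainDir/SVNDir/basiclib/"
    "trunk@20479#egg=basiclib-1.9dev-r1234" % (svn_user, svn_password)
)

setup(
    name="extralibs",
    version="0.0.2",
    packages=["extralib"],
    install_requires="basiclib==1.9dev-r1234",
    dependency_links=[svn_link],
)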
Hope that helps.
Check out the answers to these two questions. They both give specific examples on how install_requires and dependency_links work together to achieve what you want.
Can Pip install dependencies not specified in setup.py at install time?
Can a Python package depend on a specific version control revision of another Python package?
