Combining Sphinx documentation from multiple subprojects: Handling indices, syncing configuration, etc - python

We have a multi-module project documented with the (excellent) Sphinx. Our setup is not unlike one described on the mailing list. Overall this works great! But we have a few questions about doing so:
The submodule tables of contents will include index links. At best these will link to the wrong indices. (At worst, this seems to trigger a bug in Sphinx, but since I'm using the development version, that's to be expected.) Is there a way of generating the index links only for the topmost toctree?
Are there best practices for keeping the Sphinx configuration in sync between multiple projects? I could imagine hacking something together around from common_config import *, but curious about other approaches.
While we're at it, the question raised in the mailing list post (alternative to symlinking subproject docs?) was never answered. It's not important to me, but it may be important to other readers.

I'm not sure what you mean by this. Your project's index appears to be just fine. Could you clarify this, please?
As far as I've seen, from common_config import * is the best approach for keeping configuration in sync.
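For reference, the star-import pattern can look like the following sketch (the module name common_config and the specific settings are hypothetical, not from the original post):

```python
# common_config.py -- shared defaults, kept somewhere on sys.path
# (for example next to each project's conf.py)
extensions = ["sphinx.ext.autodoc"]
html_theme = "alabaster"

# Each subproject's conf.py would then start with:
#
#     from common_config import *   # pull in all shared settings
#     project = "Sub-Project 1"     # and override per-project values
```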
I think the best way to do this is something like the following directory structure:
main-project/
    conf.py
    documentation.rst
sub-project-1/
    conf.py            (imports from main-project/conf.py)
    documentation.rst
sub-project-2/
    conf.py            (likewise, imports from main-project/conf.py)
    documentation.rst
Then, to just package sub-project-1 or sub-project-2, use this UNIX command:
sphinx-build main-project/ <output directory> <paths to sub-project docs you want to add>
That way, not only will the main project's documentation get built, the sub-project documentation you want to add will be added as well.
To package main-project:
sphinx-build main-project/ <output directory>
I'm pretty sure this scheme will work, but I've yet to test it out myself.
Hope this helps!

Regarding point 2 (including common configuration), I'm using:
In Python 2:
execfile(os.path.abspath("../../common/conf.py"))
In Python 3:
exec(open('../../common/conf.py').read())
Note that, unlike the directory structure presented by @DangerOnTheRanger, I prefer to keep a separate directory for common documentation, which is why common appears in the path above.
My common/conf.py file is a normal Sphinx file. Then, each of the specific documentation configurations includes that common file and overrides values as necessary, as in this example:
import sys
import os

execfile(os.path.abspath("../../common/conf.py"))

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.todo',
    'sphinx.ext.viewcode',
]

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True

# If true, links to the reST sources are added to the pages.
html_copy_source = False
html_show_sourcelink = False

Related

What is the simplest way to generate a pickles report out of the test results of behave?

I am writing behavior tests in python with behave. I would like to generate a pickles report. What is the simplest way to achieve that?
There are multiple possibilities, each with its own advantages and drawbacks.
Pickles supports xUnit, NUnit, and cucumber-json reports, among others. So I need to find a way to get a report from behave in one of those formats. xUnit and NUnit are .NET-specific formats, so they have no relation to Python. Behave does, however, support JSON output. Unfortunately, behave's JSON output is not the cucumber-json format pickles expects.
Convert JUnit test reports into NUnit reports through an XSLT transformation
Of course, behave supports JUnit, and it's possible to convert the JUnit report to NUnit through an XSLT transformation. According to this post, there is a (possibly outdated) XSLT transformation here. Such a transformation can be carried out with Apache Ant, which is convenient, because I am working on TeamCity, which comes with a build step of that kind. In Kotlin DSL, this means I could update my build step with something like
steps {
    [whatever script steps or other stuff]
    ant {
        mode = antScript {
            content = """
                <target name="transform" description="Run functional tests">
                    <xslt in="${'$'}{PATH_TO_THE_JUNIT_FILE}" out="${'$'}{PATH_TO_THE_NUNIT_FILE}" style="junit-to-nunit.xsl" />
                </target>
            """.trimIndent()
        }
        targets = "transform"
    }
}
The problem with that approach is that I would need to install a lot of Java tooling on my development PC to test such a script, and I am not very fond of Java tooling. Other than that, there is this npm package that could help. It doesn't have many stars, but it might work well. To sum up, I am not very comfortable with XSLTs, so I don't like this solution.
Use behave2cucumber
The package name sounds really great, and using it seems very easy. So this was a priori my preferred solution. It is as simple as
pip install behave2cucumber
behave --format=json -o json-report.json /path/to/features
python -m behave2cucumber -i json-report.json -o cucumber-report.json
If that had worked, I would have been very happy, because essentially everything is available from PyPI. However, all my Background steps were flagged as skipped in my cucumber-report.json output, making that solution a no-go: it doesn't report the right status of my scenarios.
Write my own custom formatter
Following this GitHub issue, I stumbled upon some explanations and some code (same code here). Here are the steps it took to make it work:
copy/paste the code into the file venv/Lib/site-packages/behave/formatter/cucumber_json.py
run
behave --format=behave.formatter.cucumber_json:PrettyCucumberJSONFormatter -o cucumber-report.json /path/to/features
And that works like a charm. Because I need this formatter in many projects, my next step will be to produce a Python package that I can install in those projects and use like this:
# run_behave.py
import os
from behave.configuration import Configuration
from behave.formatter import _registry
from behave.runner import Runner
from my_formatter_package import PrettyCucumberJSONFormatter
here = os.path.dirname(os.path.abspath(__file__))
_registry.register_as("PrettyCucumberJSONFormatter", PrettyCucumberJSONFormatter)
configuration = Configuration(junit_directory="behave_reports")
configuration.format = ["PrettyCucumberJSONFormatter"]
configuration.stdout_capture = False
configuration.stderr_capture = False
configuration.paths = [os.path.join(here, "features")]
Runner(configuration).run()
And that should do the trick. The drawback of that approach is that I need to maintain my own formatter. I don't know how future behave versions will change the JSON output, nor how the cucumber-json schema will evolve over time. Hopefully this formatter will get integrated into behave.
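If I'm reading the behave documentation correctly, a packaged formatter can also be registered declaratively in behave's config file instead of through a run_behave.py wrapper. Something like the following in behave.ini (the package name my_formatter_package is a placeholder for wherever the formatter ends up):

```ini
[behave.formatters]
cucumber_json = my_formatter_package:PrettyCucumberJSONFormatter
```

After that, `behave -f cucumber_json -o cucumber-report.json /path/to/features` should pick up the formatter by name.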

Get resource from python resource root

I am using the PyCharm IDE. I marked a folder as a resource root, wanted to get a file from its directory, and was wondering about the appropriate way to do so.
In Java, you can use getClass().getResource("/resourceName.extension")
Is there some way to get a path in Python in a similar manner?
Based on what you have said, it sounds like you just need to include the directory of the file with a simple import statement.
for instance, if your files are set up as such:
c:\program\main
c:\program\resources
then you can just do a simple
import resources
However, you could run into coupling issues if you have any sub-packages. Solving the coupling issue involving resources has been gone over in more detail in another thread I have linked below.
Managing resources in a Python project
What I want can be accomplished by this answer.
I used the code as follows:
os.path.join(os.path.dirname(__file__), '../audio/music.wav')
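The same idea reads a bit more cleanly with pathlib. This is a sketch: the helper name resource_path is made up, and '../audio/music.wav' is just the example path from above:

```python
from pathlib import Path

def resource_path(relative, anchor=None):
    """Resolve *relative* against the directory containing *anchor*
    (the calling module's __file__ by default), independent of the
    current working directory.
    """
    base = Path(anchor if anchor else __file__).resolve().parent
    return base / relative

# e.g. from inside a module:
# music = resource_path("../audio/music.wav")
```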

I'm having trouble understanding importing in python3

I've looked on many sites and many related questions, but following the solutions to those questions still didn't seem to help. I figured maybe I am missing something, so here goes.
My project is to create a DM's tool for managing table top role playing games. I need to be able to split my project into many different files in order to keep everything organized. (so far) I have only three files I'm trying to work with. I have my main file which I called dmtool.py3, I have a file for class definitions called classdef.py3, and I have a file for creating race objects called races.py3.
1] The first of my questions is regarding importing singular files. I've tried organizing the files in several different ways, so for this lets assume all of my three files are in the same directory.
If I want to import my class definitions from classdef.py3 into my main file dmtool.py3, how would I do that? Neither import classdef nor import classdef.py3 seems to work; both say there is no module with that name.
2] So I then made a module, and it seemed to work. I did this by creating a sub-directory called defs and putting the classdef.py3 and races.py3 files into it. I created the __init__.py3 file, and put import defs in dmtool.py3. As a test I put x = 1 at the very top of races.py3 and put print("X =", defs.x) in dmtool.py3. I get an error saying that module doesn't have an attribute x.
So I guess my second question is whether it is possible to just use variables from other files. Would I use something like defs.x or defs.races.x or races.x or maybe simply x? I can't seem to find the one that works. I need to figure this out because I will be using specific instances of a class that will be defined in the races.py3 file.
3] My third question is a simple one that kind of spawned from the previous two. Now that races.py3 and classdef.py3 are in the same module, how do I make one access the other. races.py3 has to use the classes defined in classdef.py3.
I would really appreciate any help. Like I said I tried looking up other questions related to importing, but their simple solutions seemed to come up with the same errors. I didn't post my specific files because other than what I mentioned, there is just very simple print lines or class definitions. Nothing that should affect the importing.
Thanks,
Chris
Firstly, do not use .py3 as a file extension. Python doesn't recognize it.
Python 3's import system is actually quite simple. import foo looks through sys.path for a package (directory) or module (.py file) named foo.
sys.path contains various standard directories where you would normally install libraries, as well as the Python standard library. The first entry of sys.path is usually the directory in which the __main__ script lives. If you invoke Python as python -m foo.bar, the first entry will instead be the directory which contains the foo package.
Relative imports use a different syntax:
from . import foo
This means "import the foo module or package from the package which contains the current module." It is discussed in detail in PEP 328, and can be useful if you don't want to specify the entire structure of your packages for every import.
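To tie this back to the question, the snippet below builds the asker's layout on disk (with the .py3 files renamed to .py, as required) and shows both the relative import between siblings and where the x attribute actually lives. The class Race and the value of x are made-up stand-ins for the asker's real code:

```python
import os
import sys
import tempfile

# Recreate the question's layout in a temporary directory:
#   defs/
#       __init__.py
#       classdef.py   (class definitions)
#       races.py      (uses classdef via a relative import)
root = tempfile.mkdtemp()
pkg = os.path.join(root, "defs")
os.makedirs(pkg)

open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "classdef.py"), "w") as f:
    f.write("class Race:\n"
            "    def __init__(self, name):\n"
            "        self.name = name\n")
with open(os.path.join(pkg, "races.py"), "w") as f:
    # races.py reaches its sibling with a relative import
    f.write("from .classdef import Race\n"
            "x = 1\n"
            "elf = Race('elf')\n")

sys.path.insert(0, root)       # make the package importable
from defs import races         # note: 'defs' itself has no attribute x

print(races.x)                 # attributes live on the submodule
print(races.elf.name)
```

So the answer to question 2 is races.x (after importing races from the package), not defs.x.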
Start python and type these commands:
>>> import sys
>>> sys.path
The path is the list of directories where Python looks for libraries. If your modules' directory is not on the list, they won't be found.

Proper style of coding __init__.py in Python modules

I'm not that fluent in Python so I'm not sure if what I'm doing is common practice or the proper way to do it.
I'm creating a package archive containing files with one class each, e.g. SmsArchiveReader.py with class SmsArchiveReader inside. To make the imports less tedious, I decided to import the classes directly in __init__.py.
However, both Spyder and Pylint have issues with my __init__.py, with Spyder telling me that I shouldn't have unused imports, and Pylint telling me that I shouldn't use absolute imports. Both suggestions seem pointless to me, since this is __init__.py we're talking about, but I'm open to suggestions.
As for the look I wanted to achieve, I wanted the code using this module to look like that:
import archive
myReader = archive.SmsArchiveReader()
myReader2 = archive.FooArchiveReader()
instead of:
import archive
myReader = archive.SmsArchiveReader.SmsArchiveReader()
myReader2 = archive.FooArchiveReader.FooArchiveReader()
So what's the correct practice of creating modules?
As jonrsharpe said, it's a problem with the Spyder IDE. This issue has been submitted to their bug tracker; those interested can follow its status on GitHub.
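For readers wanting a concrete version of the re-export pattern from the question, the sketch below recreates the archive package in a temporary directory (the class bodies are hypothetical stand-ins) and shows the __init__.py that makes archive.SmsArchiveReader() work. Relative imports in __init__.py also avoid Pylint's absolute-import warning, and __all__ documents that the re-exports are intentional:

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "archive")
os.makedirs(pkg)

files = {
    "SmsArchiveReader.py": "class SmsArchiveReader:\n    kind = 'sms'\n",
    "FooArchiveReader.py": "class FooArchiveReader:\n    kind = 'foo'\n",
    # __init__.py re-exports the classes under the package name
    "__init__.py": (
        "from .SmsArchiveReader import SmsArchiveReader\n"
        "from .FooArchiveReader import FooArchiveReader\n"
        "__all__ = ['SmsArchiveReader', 'FooArchiveReader']\n"
    ),
}
for name, body in files.items():
    with open(os.path.join(pkg, name), "w") as f:
        f.write(body)

sys.path.insert(0, root)
import archive

# The short form works; no archive.SmsArchiveReader.SmsArchiveReader() needed
reader = archive.SmsArchiveReader()
print(reader.kind)
```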

Bundling Multiple Python Modules

I have some Python modules; some of them require more than 20 others.
My question is whether there is a tool that helps me bundle several Python modules into one big file.
Here a simple example:
HelloWorld.py:
import MyPrinter
MyPrinter.displayMessage("hello")
MyPrinter.py:
def displayMessage(msg):
    print msg
should be converted to one file, which contains:
def displayMessage(msg):
    print msg

displayMessage("hello")
OK, I know that this example is a bit bad, but I hope someone understands what I mean and can help me. One note: I'm talking about huge scripts with a great many imports; if they were smaller, I could do it myself.
Thanks.
Assuming you are using Python 2.6 or later, you could package the scripts into a zip file, add a __main__.py, and run the zip file directly.
If you really want to collapse everything down to a single file, I expect you're going to have to write it yourself. The source code transformation engine in lib2to3 may help with the task.
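On Python 3.5+, the stdlib zipapp module automates the zip-with-__main__.py approach described above. The sketch below builds the question's two files (rewritten with Python 3's print function) in a temporary directory and bundles them into a single runnable archive:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Lay out the source: MyPrinter.py plus a __main__.py entry point
src = tempfile.mkdtemp()
with open(os.path.join(src, "MyPrinter.py"), "w") as f:
    f.write("def displayMessage(msg):\n    print(msg)\n")
with open(os.path.join(src, "__main__.py"), "w") as f:
    f.write("import MyPrinter\nMyPrinter.displayMessage('hello')\n")

# Bundle everything into one .pyz archive
target = os.path.join(tempfile.mkdtemp(), "app.pyz")
zipapp.create_archive(src, target)

# The archive runs as a single file
out = subprocess.run([sys.executable, target],
                     capture_output=True, text=True)
print(out.stdout.strip())
```

Note this bundles the modules into one distributable file without merging their source, so imports keep working as-is.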
You cannot and should not 'convert them into one file'.
If your application consists of several modules, you should just organize it into a package.
There is pretty good tutorial on packages here: http://diveintopython3.org/packaging.html
And you should read docs on it here: http://docs.python.org/library/distutils.html
Pip supports bundling. This is an installation format, and will unpack into multiple files. Anything else would be a bad idea, as it would break imports and per-module metadata.
