How to get a list of warnings from sphinx compilation - python

I am developing a sphinx based collaborative writing tool. Users access the web application (developed in python/Flask) to write a book in sphinx and compile it to pdf.
I have learned that in order to compile Sphinx documentation from within Python I should use
import sphinx
result = sphinx.build_main(['-c', 'path/to/conf',
                            'path/to/source/', 'path/to/out'])
So far so good.
Now my users want the app to show them their syntax mistakes. But the output (result in the example above) only gives me the exit code.
So, how do I get a list of warnings from the build process?
Perhaps I am being too ambitious, but since sphinx is a python tool, I was expecting to have a nice pythonic interface with the tool. For example, the output of sphinx.build_main could be a very rich object with warnings, line numbers...
On a related note, the arguments to sphinx.build_main look just like a wrapper around the command-line interface.

sphinx.build_main() calls sphinx.cmdline.main(), which in turn creates a sphinx.application.Sphinx object. You could create such an object directly (instead of "making system calls within python"). Use something like this:
import os
from sphinx.application import Sphinx
# Main arguments
srcdir = "/path/to/source"
confdir = srcdir
builddir = os.path.join(srcdir, "_build")
doctreedir = os.path.join(builddir, "doctrees")
builder = "html"
# Write warning messages to a file (instead of stderr)
warning = open("/path/to/warnings.txt", "w")
# Create the Sphinx application object
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning)
# Run the build
app.build()
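If you would rather keep the warnings in memory instead of writing them to a file, the warning argument accepts any file-like object. A minimal sketch, reusing the paths defined above and assuming Python 3's io.StringIO (use StringIO.StringIO on Python 2):
import io
from sphinx.application import Sphinx

warning_stream = io.StringIO()
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning_stream)
app.build()

# Each warning is emitted as one line, typically "file.rst:LINE: WARNING: message"
warnings = warning_stream.getvalue().splitlines()
You can then return that list to your Flask users instead of pointing them at a file on disk.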

Assuming you used sphinx-quickstart to generate your initial Sphinx documentation set with a makefile, then you can use make to build docs, which in turn uses the Sphinx tool sphinx-build. You can pass the -w <file> option to sphinx-build to write warnings and errors to a file as well as stderr.
Note that options passed through the command line override any other options set in the makefile and conf.py.
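If you end up shelling out from the Flask app rather than using the Sphinx API directly, the same -w option can be combined with subprocess. A rough sketch, assuming sphinx-build is on the PATH and the paths are placeholders:
import subprocess

# -w writes warnings and errors to the given file in addition to stderr
subprocess.call(["sphinx-build", "-b", "html",
                 "-w", "warnings.txt",
                 "path/to/source", "path/to/out"])

with open("warnings.txt") as f:
    warnings = f.read().splitlines()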

Related

edit linux telegraf.conf with python

I am using Telegraf as a measuring/monitoring tool in my tests. I need to edit the Telegraf configuration automatically, since all tests are executed automatically.
Currently I am using re to configure it; this is the process:
1. Read the whole file content.
2. Use regex to find and edit the required plugin/property.
3. Write the whole changed content back to the file.
But I'm searching for a library, if one exists, like ConfigParser or reconfigure, that handles the configuration as an object rather than raw text.
I tried ConfigParser.ConfigParser, ConfigParser.RawConfigParser and ConfigParser.SafeConfigParser; all return:
ConfigParser.ParsingError: File contains parsing errors: /etc/telegraf/telegraf.conf
The reconfigure library has specific configuration classes, each for a particular type of Linux config (e.g. FSTabConfig, ResolvConfig and a few others), but it does not include a class for Telegraf configs.
Does anyone have an option in mind?
EDIT 1:
I tried the configobj library (as @KevinC suggested), but it loads nothing:
>>> import configobj
>>> c = configobj.ConfigObj('/home/zeinab/Desktop/config-modification/telegraf.conf', list_values=False)
>>> c
ConfigObj({})
Using list_values=True returns the same results.
You can use toml
Configuration file
[[inputs.ping]]
## Hosts to send ping packets to.
urls = ["example.org"]
method = "exec"
Usage
import toml
conf = toml.load("/etc/telegraf/telegraf.conf")
conf.get("inputs")
Output
{'ping': [{'urls': ['example.org'], 'method': 'exec'}]}
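Since toml gives you plain dicts and lists, you can also modify a value and write it back. A rough sketch (the edited key is just illustrative; note that toml.dump does not preserve comments or the original formatting):
import toml

with open("/etc/telegraf/telegraf.conf") as f:
    conf = toml.load(f)

# edit a property of the first [[inputs.ping]] section (illustrative)
conf["inputs"]["ping"][0]["urls"] = ["example.com"]

with open("/etc/telegraf/telegraf.conf", "w") as f:
    toml.dump(conf, f)  # comments from the original file are lost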
You can use configobj, but you have to specify list_values=False:
c = configobj.ConfigObj('/etc/telegraf/telegraf.conf', list_values=False)

Accessing Robot Framework global variables from a prerun modifier

I am invoking Robot Framework on a folder with a command like following:
robot --name MyTestSuite --variablefile lib/global_variables.py --variable TARGET_TYPE:FOO --variable IMAGE_TYPE:BAR --prerunmodifier MyCustomModifier.py ./tests
MyCustomModifier.py contains a simple SuiteVisitor class, which includes/excludes tags and does a few other things based on some of the variable values set.
How do I access TARGET_TYPE and IMAGE_TYPE in that class? The method shown here does not work, because I want access to the variables before tests start executing, and therefore I get a RobotNotRunningError with message Cannot access execution context.
After finding this issue report, I tried to downgrade to version 2.9.1 but nothing changed.
None of the public APIs seem to provide this information, but debugging the main code does reveal an alternative way of obtaining it. It has to be said that this example code works with version 3.0.2 but may break in the future, as these are internal functions subject to change. That said, I do think the approach will remain valid.
As Robot Framework is an application, it obtains the command-line arguments through its main function, run_cli (when run from the command line). That function is fed the arguments from the system itself, which can be obtained in any Python script via:
import sys
cli_args = sys.argv[1:]
Robot Framework has a function that interprets the command-line argument list and turns it into a more readable object:
from robot.run import RobotFramework
import sys
options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
The arguments variable is a list to which all the variables from the command line are added. An example:
arguments[0] = IMAGE_TYPE:BAR
This should allow you to access the information you need.
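Putting this together, a rough sketch of a prerun modifier that re-parses the command line itself. This assumes the 3.0.2 internals described above; depending on the version the --variable entries may come back in the options dictionary (under the 'variable' key) rather than in arguments, so verify against your installation. The tag filtering is purely illustrative:
import sys
from robot.model import SuiteVisitor
from robot.run import RobotFramework


class MyCustomModifier(SuiteVisitor):

    def __init__(self):
        options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
        # --variable NAME:VALUE entries arrive as "NAME:VALUE" strings
        variables = dict(v.split(':', 1) for v in options.get('variable', []))
        self.target_type = variables.get('TARGET_TYPE')
        self.image_type = variables.get('IMAGE_TYPE')

    def start_suite(self, suite):
        # example: drop tests tagged 'bar' when IMAGE_TYPE is BAR (illustrative)
        if self.image_type == 'BAR':
            suite.tests = [t for t in suite.tests if 'bar' not in t.tags]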

Pythonic way to set module-wide settings from external file

Some background (not mandatory, but might be nice to know): I am writing a Python command-line module which is a wrapper around latexdiff. It basically replaces all \cite{ref1, ref2, ...} commands in LaTeX files with written-out and properly formatted references before passing the files to latexdiff, so that latexdiff will properly mark changes to references in the text (otherwise, it treats the whole \cite{...} command as a single "word"). All the code is currently in a single file which can be run with python -m latexdiff-cite, and I have not yet decided how to package or distribute it. To make the script useful for anybody else, the citation formatting needs to be configurable. I have implemented an optional command-line argument -c CONFIGFILE to allow the user to point to their own JSON config file (a default file resides in the module folder and is loaded if the argument is not used).
Current implementation: My single-file command-line Python module currently parses command-line arguments in if __name__ == '__main__', and loads the config file (specified by the user in -c CONFIGFILE) here before running the main function of the program. The config variable is thus available in the entire module and all is well. However, I'm considering publishing to PyPI by following this guide which seems to require me to put the command-line parsing in a main() function, which means the config variable will not be available to the other functions unless passed down as arguments to where it's needed. This "passing down by arguments" method seems a little cluttered to me.
Question: Is there a more pythonic way to set some configuration globals in a module or otherwise accomplish what I'm trying to? (I don't want to rely on 3rd party modules.) Am I perhaps completely off the tracks in some fundamental way?
One way to do it is to have the configurations defined in a class or a simple dict:
import json

class Config(object):
    setting1 = "default_value"
    setting2 = "default_value"

    @staticmethod
    def load_config(json_file):
        """Load settings from the JSON config file, overriding the defaults."""
        with open(json_file) as f:
            config = json.load(f)
        for k, v in config.items():
            setattr(Config, k, v)
Then your application can access the settings via this class: Config.setting1 ...
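Usage would then look something like this (the file name would come from your -c CONFIGFILE argument; settings.json is just a placeholder):
Config.load_config("settings.json")  # or the default file shipped with the module
print(Config.setting1)               # overridden value, or the class default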

Examples of entry_point usage

I discovered the entry_points feature of setuptools:
http://pythonhosted.org/setuptools/setuptools.html#dynamic-discovery-of-services-and-plugins
quote: setuptools supports creating libraries that “plug in” to extensible applications and frameworks, by letting you register “entry points” in your project that can be imported by the application or framework.
But I have not seen a project using them.
Are there examples of projects which use them?
If not, why are they not used?
There are loads of examples. Any project that defines console scripts uses them, for example. A quick search on GitHub gives you plenty to browse through.
I'll focus on one specific example (one that is not on GitHub): Babel.
Babel uses entry_points both for console scripts and to define extension points for translatable text extraction. See their setup.py source:
if have_setuptools:
    extra_arguments = dict(
        zip_safe = False,
        test_suite = 'babel.tests.suite',
        tests_require = ['pytz'],
        entry_points = """
        [console_scripts]
        pybabel = babel.messages.frontend:main
        [distutils.commands]
        compile_catalog = babel.messages.frontend:compile_catalog
        extract_messages = babel.messages.frontend:extract_messages
        init_catalog = babel.messages.frontend:init_catalog
        update_catalog = babel.messages.frontend:update_catalog
        [distutils.setup_keywords]
        message_extractors = babel.messages.frontend:check_message_extractors
        [babel.checkers]
        num_plurals = babel.messages.checkers:num_plurals
        python_format = babel.messages.checkers:python_format
        [babel.extractors]
        ignore = babel.messages.extract:extract_nothing
        python = babel.messages.extract:extract_python
        javascript = babel.messages.extract:extract_javascript
        """,
    )
Tools like pip and zc.buildout use the console_scripts entry point to create command-line scripts (here one called pybabel, running the main() callable in the babel.messages.frontend module).
The distutils.commands entry point defines additional commands you can use when running setup.py; these can be used in your own projects to invoke Babel command-line utilities right from your setup script.
Last, but not least, it registers its own checkers and extractors. The babel.extractors entry point is loaded by the babel.messages.extract.extract function, using the setuptools pkg_resources module, giving access to all installed Python projects that registered that entry point. The following code looks for a specific extractor in those entries:
try:
    from pkg_resources import working_set
except ImportError:
    pass
else:
    for entry_point in working_set.iter_entry_points(GROUP_NAME,
                                                     method):
        func = entry_point.load(require=True)
        break
This lets any project register additional extractors; simply add an entry point in your setup.py and Babel can make use of it.
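For comparison, a minimal setup.py that registers both a console script and a Babel extractor might look like this (the project, module and function names are made up for the example):
from setuptools import setup

setup(
    name="myproject",
    version="0.1",
    py_modules=["mymodule"],
    entry_points={
        "console_scripts": [
            "mytool = mymodule:main",            # installs a `mytool` command
        ],
        "babel.extractors": [
            "mylang = mymodule:extract_mylang",  # discoverable by Babel's extract()
        ],
    },
)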
Sentry is a good example. Sentry's author even created a Django package named Logan to convert standard Django management commands to console scripts.

asciidoc fails when called via python subprocess

Here is my code:
#!/usr/bin/python
import subprocess
asciidoc_file_name = '/tmp/redoc_2013-06-25_12:52:19.txt'
asciidoc_call = ["asciidoc","-b docbook45",asciidoc_file_name]
print asciidoc_call
subprocess.call(asciidoc_call)
And here is the output:
labamba@lambada:~$ ./debug.py
['asciidoc', '-b docbook45', '/tmp/redoc_2013-06-25_12:52:19.txt']
asciidoc: FAILED: missing backend conf file: docbook45.conf
labamba@lambada:~$ asciidoc -b docbook45 /tmp/redoc_2013-06-25_12\:52\:19.txt
labamba@lambada:~$ file /tmp/redoc_2013-06-25_12\:52\:19.xml
/tmp/redoc_2013-06-25_12:52:19.xml: XML document text
labamba@lambada:~$ file /etc/asciidoc/docbook45.conf
/etc/asciidoc/docbook45.conf: HTML document, ASCII text, with very long lines
When called via python subprocess, asciidoc complains about a missing config file. When called on the command line, everything is fine, and the config file is there. Can anyone make sense out of this? I'm lost.
Try this:
asciidoc_call = ["asciidoc","-b", "docbook45", asciidoc_file_name]
The original call invokes asciidoc with "-b docbook45" as one single option, which won't work.
The question is old... Anyway, asciidoc is implemented in Python and also ships asciidocapi.py, which can be used as a module from your Python program. The module docstring says:
asciidocapi - AsciiDoc API wrapper class.
The AsciiDocAPI class provides an API for executing asciidoc. Minimal example
compiles `mydoc.txt` to `mydoc.html`:
import asciidocapi
asciidoc = asciidocapi.AsciiDocAPI()
asciidoc.execute('mydoc.txt')
- Full documentation in asciidocapi.txt.
- See the doctests below for more examples.
In short, it implements the AsciiDocAPI class that, when initialized, searches for the asciidoc script and imports it behind the scenes as a module. This way you can use it more naturally from Python and avoid subprocess.call().
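For the original problem that would look roughly like this (a sketch assuming asciidocapi.py is importable and that execute() accepts a backend argument, as its docstring suggests):
import asciidocapi

asciidoc = asciidocapi.AsciiDocAPI()
# roughly equivalent to `asciidoc -b docbook45 <file>` on the command line
asciidoc.execute('/tmp/redoc_2013-06-25_12:52:19.txt', backend='docbook45')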
