asciidoc fails when called via python subprocess - python

Here is my code:
#!/usr/bin/python
import subprocess
asciidoc_file_name = '/tmp/redoc_2013-06-25_12:52:19.txt'
asciidoc_call = ["asciidoc","-b docbook45",asciidoc_file_name]
print asciidoc_call
subprocess.call(asciidoc_call)
And here is the output:
labamba#lambada:~$ ./debug.py
['asciidoc', '-b docbook45', '/tmp/redoc_2013-06-25_12:52:19.txt']
asciidoc: FAILED: missing backend conf file: docbook45.conf
labamba#lambada:~$ asciidoc -b docbook45 /tmp/redoc_2013-06-25_12\:52\:19.txt
labamba#lambada:~$ file /tmp/redoc_2013-06-25_12\:52\:19.xml
/tmp/redoc_2013-06-25_12:52:19.xml: XML document text
labamba#lambada:~$ file /etc/asciidoc/docbook45.conf
/etc/asciidoc/docbook45.conf: HTML document, ASCII text, with very long lines
When called via python subprocess, asciidoc complains about a missing config file. When called on the command line, everything is fine, and the config file is there. Can anyone make sense out of this? I'm lost.

Try this:
asciidoc_call = ["asciidoc","-b", "docbook45", asciidoc_file_name]
The other call invokes asciidoc with "-b docbook45" as one single argument, which won't work.
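For reference, a minimal sketch of the corrected call; shlex.split from the standard library produces the same element-per-argument list from a command string, which helps avoid this kind of mistake:

import shlex
import subprocess

asciidoc_file_name = '/tmp/redoc_2013-06-25_12:52:19.txt'

# Each option and its value must be its own list element.
subprocess.call(["asciidoc", "-b", "docbook45", asciidoc_file_name])

# Equivalent: let shlex.split do the word splitting.
subprocess.call(shlex.split("asciidoc -b docbook45 " + asciidoc_file_name))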

The question is old... Anyway, asciidoc is implemented in Python, and it also ships asciidocapi.py, which can be used as a module from your Python program. The module docstring says:
asciidocapi - AsciiDoc API wrapper class.
The AsciiDocAPI class provides an API for executing asciidoc. Minimal example
compiles `mydoc.txt` to `mydoc.html`:
import asciidocapi
asciidoc = asciidocapi.AsciiDocAPI()
asciidoc.execute('mydoc.txt')
- Full documentation in asciidocapi.txt.
- See the doctests below for more examples.
To simplify, it implements the AsciiDocAPI class that, when initialized, searches for the asciidoc script and imports it behind the scenes as a module. This way, you can use it more naturally in Python and avoid subprocess.call() altogether.
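Applied to the question above, a minimal sketch might look like this (assuming the execute() signature with outfile and backend arguments described in asciidocapi.txt):

import asciidocapi

asciidoc = asciidocapi.AsciiDocAPI()
# Compile the source file with the DocBook 4.5 backend instead of the default HTML.
asciidoc.execute('/tmp/redoc_2013-06-25_12:52:19.txt',
                 outfile='/tmp/redoc_2013-06-25_12:52:19.xml',
                 backend='docbook45')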

Related

Using Jenkins variables/parameters in Python Script with os.path.join

I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to call the variables, but I'm not sure how to implement them in the case of using os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized so we don't have to modify the Python scripts for every release.
I'm using inline Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I do the following code:
if os.path.exists(os.path.join("C:\\Doc",BranchIDshort,"CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc",BranchIDshort,"CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution as a comment. Here is what he said:
Indeed you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\","Doc{}CS".format(BranchIDshort),"Output"))
(in short, use a format string for the second argument)
So the complete corrected code is:
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
    shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
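For a quick sanity check, the joined path can be inspected interactively (on Windows, where the backslash separator applies), assuming BranchIDshort is "192":

>>> import os
>>> BranchIDshort = "192"
>>> os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")
'C:\\Doc192CS\\Output'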
Thank you, @Adonis!

How to get a list of warnings from sphinx compilation

I am developing a Sphinx-based collaborative writing tool. Users access the web application (developed in Python/Flask) to write a book in Sphinx and compile it to PDF.
I have learned that in order to compile Sphinx documentation from within Python I should use
import sphinx
result = sphinx.build_main(['-c', 'path/to/conf',
                            'path/to/source/', 'path/to/out'])
So far so good.
Now my users want the app to show them their syntax mistakes. But the output (result in the example above) only gives me the exit code.
So, how do I get a list of warnings from the build process?
Perhaps I am being too ambitious, but since sphinx is a python tool, I was expecting to have a nice pythonic interface with the tool. For example, the output of sphinx.build_main could be a very rich object with warnings, line numbers...
On a related note, the argument to the method sphinx.build_main looks just like a wrapper to the command line interface.
sphinx.build_main() calls sphinx.cmdline.main(), which in turn creates a sphinx.application.Sphinx object. You could create such an object directly (instead of "making system calls within python"). Use something like this:
import os
from sphinx.application import Sphinx
# Main arguments
srcdir = "/path/to/source"
confdir = srcdir
builddir = os.path.join(srcdir, "_build")
doctreedir = os.path.join(builddir, "doctrees")
builder = "html"
# Write warning messages to a file (instead of stderr)
warning = open("/path/to/warnings.txt", "w")
# Create the Sphinx application object
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning)
# Run the build
app.build()
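Because warning only needs to be a writable file-like object, you could also capture the messages in memory instead of writing them to a file. A sketch, reusing the variables from the block above:

from io import StringIO  # StringIO.StringIO on Python 2
from sphinx.application import Sphinx

warning_stream = StringIO()
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning_stream)
app.build()

# Each warning or error is one line in the captured stream.
warnings = warning_stream.getvalue().splitlines()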
Assuming you used sphinx-quickstart to generate your initial Sphinx documentation set with a Makefile, you can use make to build the docs, which in turn invokes the Sphinx tool sphinx-build. You can pass the -w <file> option to sphinx-build to write warnings and errors to a file as well as to stderr.
Note that options passed on the command line override any other options set in the Makefile and conf.py.
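For example (paths are placeholders):

sphinx-build -b html -w warnings.log path/to/source path/to/out

Every warning and error from the build is then also written to warnings.log, which your Flask app can read back and show to the users.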

Pythonic way to set module-wide settings from external file

Some background (not mandatory, but might be nice to know): I am writing a Python command-line module which is a wrapper around latexdiff. It basically replaces all \cite{ref1, ref2, ...} commands in LaTeX files with written-out and properly formatted references before passing the files to latexdiff, so that latexdiff will properly mark changes to references in the text (otherwise, it treats the whole \cite{...} command as a single "word"). All the code is currently in a single file which can be run with python -m latexdiff-cite, and I have not yet decided how to package or distribute it. To make the script useful for anybody else, the citation formatting needs to be configurable. I have implemented an optional command-line argument -c CONFIGFILE to allow the user to point to their own JSON config file (a default file resides in the module folder and is loaded if the argument is not used).
Current implementation: My single-file command-line Python module currently parses command-line arguments in if __name__ == '__main__', and loads the config file (specified by the user in -c CONFIGFILE) here before running the main function of the program. The config variable is thus available in the entire module and all is well. However, I'm considering publishing to PyPI by following this guide which seems to require me to put the command-line parsing in a main() function, which means the config variable will not be available to the other functions unless passed down as arguments to where it's needed. This "passing down by arguments" method seems a little cluttered to me.
Question: Is there a more pythonic way to set some configuration globals in a module or otherwise accomplish what I'm trying to? (I don't want to rely on 3rd party modules.) Am I perhaps completely off the tracks in some fundamental way?
One way to do it is to have the configurations defined in a class or a simple dict:
import json

class Config(object):
    setting1 = "default_value"
    setting2 = "default_value"

    @staticmethod
    def load_config(json_file):
        """Load settings from a config file."""
        with open(json_file) as f:
            config = json.load(f)
            for k, v in config.iteritems():
                setattr(Config, k, v)
Then your application can access the settings via this class: Config.setting1 ...
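Usage could then look like this (a sketch; settings.json is a hypothetical user-supplied config file):

# Defaults apply until a config file is loaded.
print Config.setting1               # "default_value"

# Override the defaults from the user's JSON file.
Config.load_config("settings.json")
print Config.setting1               # whatever settings.json defines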

Exception while using Sphinx to document script with command line arguments

Is there any way I can use Sphinx to document the command-line inputs of a Python script? I am able to document the inputs of functions or methods, but I don't know how to document the inputs of scripts. I have been trying to follow the same syntax I use for functions by adding the line .. automodule:: scriptLDOnServer to the source file, where scriptLDOnServer is my Python script (which corresponds to my main).
The problem is that I get an error like this:
    __import__(self.modname)
  File "/home/ubuntu/SVNBioinfo/trunk/Code/LD/scriptLDOnServer.py", line 10, in <module>
    genotype_filename=sys.argv[7];
IndexError: list index out of range
It seems that Sphinx is trying to get the command line inputs but in my source file there are no inputs so the import fails. Is there a way to solve this? Should I use another command in the source for a script instead of a module?
Sorry for not being very clear but it is difficult to explain the problem.
It is failing while you are importing the file because you are trying to access sys.argv[7] at import time. This is not something you should be doing. Such code which can possibly fail should be either in an if __name__ == '__main__': block or in a function, so that it won't be executed when the code is imported.
This holds generally. You should never* write code which will fail to import or which has any side-effects of being imported.
* Conditions apply. But you'd better have pretty good justification before breaking this rule.
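A minimal sketch of the fix for the script in the question (only line 10 of scriptLDOnServer.py is shown above, so the surrounding code is assumed):

import sys

def main(argv):
    # Command-line handling lives here, so importing the module has no side effects
    # and Sphinx's automodule can import it safely.
    genotype_filename = argv[7]
    # ... rest of the script ...

if __name__ == '__main__':
    main(sys.argv)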

Getting python -m module to work for a module implemented in C

I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python, and one obvious workaround is to add an extra file for that purpose. However, I really want to keep things to my one single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m, you get the error message "No code object available for <modulename>".
The -m implementation is in runpy._run_module_as_main. Its essence is:
mod_name, loader, code, fname = _get_module_details(mod_name)
<...>
exec code in run_globals
A compiled module has no "code object" associated with it, so the first statement fails with ImportError("No code object available for <module>"). You need to extend runpy - specifically, _get_module_details - to make it work for a compiled module. I suggest returning a code object constructed from the aforementioned "import mod; mod.main()":
(python 2.6.1)
  code = loader.get_code(mod_name)
  if code is None:
+     if loader.etc[2]==imp.C_EXTENSION:
+         code=compile("import %(mod)s; %(mod)s.main()"%{'mod':mod_name},"<extension loader wrapper>","exec")
+     else:
+         raise ImportError("No code object available for %s" % mod_name)
-     raise ImportError("No code object available for %s" % mod_name)
  filename = _get_filename(loader, mod_name)
(Update: fixed an error in format string)
Now...
C:\Documents and Settings\Пользователь>python -m pythoncom
C:\Documents and Settings\Пользователь>
This still won't work for builtin modules. Again, you'll need to invent some notion of "main code unit" for them.
Update:
I've looked through the internals called from _get_module_details and can say with confidence that they don't even attempt to retrieve a code object from a module of any type other than imp.PY_SOURCE, imp.PY_COMPILED or imp.PKG_DIRECTORY. So you have to patch this machinery one way or another for -m to work. Python fails before retrieving anything from your module (it doesn't even check if the DLL is a valid module), so you can't do anything by building it in a special way.
Does your requirement of a single distributed binary allow for the use of an egg? If so, you could package your module with a __main__.py containing your calling code and the usual __init__.py...
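A minimal sketch of such a __main__.py, assuming your compiled extension is importable as modulename and exposes a main() entry point (both names are placeholders):

# __main__.py -- run by "python -m package" or when the egg/zip itself is executed
import modulename   # the compiled C extension
modulename.main()   # hypothetical entry point exposed by the extension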
If you're really adamant, maybe you could extend pkgutil.ImpLoader.get_code to return something for C modules (e.g., maybe a special __code__ function). To do that, I think you're going to have to actually change it in the Python source. Even then, pkgutil uses exec to execute the code block, so it would have to be Python code anyway.
TL;DR: I think you're euchred. While Python modules have code at the global level that runs at import time, C modules don't; they're mostly just a dict namespace. Thus, running a C module doesn't really make sense from a conceptual standpoint. You need some real Python code to direct the action.
I think that you need to start by making a separate file in Python and getting the -m option to work. Then, turn that Python file into a code object and incorporate it into your binary in such a way that it continues to work.
Look up setuptools on PyPI, download the .egg, and take a look at the file. You will see that the first few bytes contain a Python script, followed by a .ZIP file bytestream. Something similar may work for you.
There's a brand-new thing that may solve your problems easily. I've just learnt about it and it looks pretty decent to me: http://code.google.com/p/pts-mini-gpl/wiki/StaticPython
