After much struggle with awful defaults in Sphinx, I finally found a way to display inherited methods in subclass documentation. Unfortunately, this option is global...
autodoc_default_options = {
    ...
    'inherited-members': True,
}
Is there any way to annotate any given class to prevent inherited methods and fields from showing in that documentation?
If there is no way to base it on inheritance, is there any way to simply list all the methods I don't want to be documented for a given class in its docstring?
I'm OK... well, I'd cry a little, but I'd live if I had to list the methods that need to be documented rather than blacklisting the ones I don't want.
I know I can put :meta private: on a method definition to circumvent its inclusion in documentation (sort of, not really, but let's pretend it works), but in the case of inherited methods there's no docstring of my own to attach it to.
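For reference, marking a method you do own looks like this (the class and method here are made up):
class Widget:
    def _internal(self):
        """Do internal bookkeeping.

        :meta private:
        """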
Note that any "solution" that involves writing the .. automodule:: sections by hand is not a solution -- those must be generated.
Well, I figured out how to solve half of the problem. Brace yourself for a lot of duct tape...
So, here's an example of how you can disable inherited fields in everything that directly or indirectly extends the Exception class:
import sys

def is_error(obj):
    return issubclass(obj, Exception)

conditionally_ignored = {
    '__reduce__': is_error,
    '__init__': is_error,
    'with_traceback': is_error,
    'args': is_error,
}

def skip_member_handler(app, objtype, membername, member, skip, options):
    ignore_checker = conditionally_ignored.get(membername)
    if ignore_checker:
        # Walk up the stack to autodoc's filter_members() frame to find
        # the class actually being documented.
        frame = sys._getframe()
        while frame.f_code.co_name != 'filter_members':
            frame = frame.f_back
        suspect = frame.f_locals['self'].object
        # Skip the member when the class being documented matches the check.
        return ignore_checker(suspect)
    return skip

def setup(app):
    app.connect('autodoc-skip-member', skip_member_handler)
Put the above in your conf.py. This implies that you are using autodoc to generate the documentation.
In the end, I didn't go for controlling this behavior from the docstring of the class being documented. This is just too much work, and there are too many bugs in autodoc, Sphinx builders, the generated output and so on. Just not worth it.
But the general idea would be to similarly add an event handler for when the source is read, extract the information from docstrings, replace the docstrings with patched versions that have that information stripped out, keep the extracted information somewhere until the skip handler is called, and then implement the skipping. But, like I said, there are simply too many things broken along this line.
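If you want to try it anyway, the skeleton of that idea might look something like this (the :hide-members: marker is my own invention, and this is an untested sketch rather than working code):
hidden = {}  # maps a documented name to the member names to hide

def grab_hidden_members(app, what, name, obj, options, lines):
    # Look for a made-up ":hide-members: foo bar" line in the docstring,
    # remember the names, and strip the marker from the rendered output.
    for i, line in enumerate(lines):
        if line.strip().startswith(':hide-members:'):
            hidden[name] = set(line.split(':hide-members:', 1)[1].split())
            del lines[i]
            break

def skip_hidden_members(app, objtype, membername, member, skip, options):
    if any(membername in members for members in hidden.values()):
        return True
    return skip

def setup(app):
    app.connect('autodoc-process-docstring', grab_hidden_members)
    app.connect('autodoc-skip-member', skip_hidden_members)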
Another approach would be to emit XML, patch it with XSLT and feed it back to Docutils, but at the time of writing the XML generator is broken (it adds namespaced attributes to the XML while not declaring the namespaces...). An old classic.
Yet another approach would've been to handle an event when, say, the document is generated and references are resolved. Unfortunately, at that point you'd be missing information such as the types of the things you're working with, the original docstrings and so on. You'd essentially be patching HTML, but with a very convoluted structure.
Related
We use Enthought Traits to declare some Python classes that are used to create database schema and the UI to add records. So we have code that iterates through the class traits and performs some actions. One common issue is that the order of declaration is usually significant, or at least very helpful, for understanding the schema and the UI, but this order is lost in class_traits (because it's a Python dict).
Is there a way to automatically keep the declaration order in class_traits? Maybe by overriding some methods in HasTraits?
We use python 2 right now (we are in the process of moving to python 3 but it will take time).
EDIT: Before posting, I found this question, which suggested a trick similar to what Robert Kern suggested, but without a snippet. I was expecting that Traits would provide some help there. Robert's answer is useful anyway.
Not easily in Python 2. Python 2 simply doesn't preserve that information.
If you are running with .py sources available, then it is possible to reparse the class definition from that file and reconstruct the order that way. If you aren't, then I suppose you could try to parse the bytecode, but you're on your own there.
import ast
import inspect

def ordered_class_traits(cls):
    source = inspect.getsource(cls)
    mod_ast = ast.parse(source)
    class_trait_names = cls.class_trait_names()
    class_ast = mod_ast.body[0]
    for node in class_ast.body:
        # This probably doesn't capture every possible trait declaration
        # out there, but it does the job for most of them.
        if isinstance(node, ast.Assign):
            target = node.targets[0]
            if isinstance(target, ast.Name):
                if target.id in class_trait_names:
                    yield target.id
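A quick usage sketch (assuming the traits.api import path; the Person class is made up):
from traits.api import HasTraits, Int, Str

class Person(HasTraits):
    name = Str
    age = Int
    city = Str

# class_traits() loses the order; this recovers the declaration order.
print(list(ordered_class_traits(Person)))  # ['name', 'age', 'city']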
Can you explain the concept of stubbing out functions or classes, taken from this article?
class Loaf:
    pass
This class doesn't define any methods or attributes, but syntactically, there needs to be something in the definition, so you use pass. This is a Python reserved word that just means “move along, nothing to see here”. It's a statement that does nothing, and it's a good placeholder when you're stubbing out functions or classes.
thank you
stubbing out functions or classes
This refers to writing classes or functions but not yet implementing them. For example, maybe I create a class:
class Foo(object):
    def bar(self):
        pass

    def tank(self):
        pass
I've stubbed out the functions because I haven't yet implemented them. However, I don't think this is a great plan. Instead, you should do:
class Foo(object):
    def bar(self):
        raise NotImplementedError

    def tank(self):
        raise NotImplementedError
That way if you accidentally call the method before it is implemented, you'll get an error instead of nothing happening.
A 'stub' is a placeholder class or function that doesn't do anything yet, but needs to be there so that the class or function in question is defined. The idea is that you can already use certain aspects of it (such as put it in a collection or pass it as a callback), even though you haven't written the implementation yet.
Stubbing is a useful technique in a number of scenarios, including:
Team development: Often, the lead programmer will provide class skeletons filled with method stubs and a comment describing what the method should do, leaving the actual implementation to other team members.
Iterative development: Stubbing allows for starting out with partial implementations; the code won't be complete yet, but it still compiles. Details are filled in over the course of later iterations.
Demonstrational purposes: If the content of a method or class isn't interesting for the purpose of the demonstration, it is often left out, leaving only stubs.
Note that you can stub functions like this:
def get_name(self) -> str: ...
def get_age(self) -> int: ...
(yes, this is valid Python code!)
It can be useful to stub functions that are added dynamically to an object by a third-party library, when you want to have type hints for them.
Happened to me... once :-)
Ellipsis ... is preferable to pass for stubbing.
pass means "do nothing", whereas ... means "something should go here" - it's a placeholder for future code. The effect is the same but the meaning is different.
Stubbing is a technique in software development. After you have planned a module or class, for example by drawing its UML diagram, you begin implementing it.
As you may have to implement a lot of methods and classes, you begin with stubs. This simply means that you only write the definition of a function down and leave the actual code for later. The advantage is that you won't forget methods and you can continue to think about your design while seeing it in code.
The reason for pass is that Python is indentation dependent and expects one or more indented statements after a colon (such as after a class or function definition).
When you have no statements (as in the case of a stubbed out function or class), there still needs to be at least one indented statement, so you can use the special pass statement as a placeholder. You could just as easily put something with no effect like:
class Loaf:
    True
and that is also fine (but less clear than using pass in my opinion).
Customizing pprint.PrettyPrinter
The documentation for the pprint module mentions that the method PrettyPrinter.format is intended to make it possible to customize formatting.
I gather that it's possible to override this method in a subclass, but this doesn't seem to provide a way to have the base class methods apply line wrapping and indentation.
Am I missing something here?
Is there a better way to do this (e.g. another module)?
Alternatives?
I've checked out the pretty module, which looks interesting, but doesn't seem to provide a way to customize formatting of classes from other modules without modifying those modules.
I think what I'm looking for is something that would allow me to provide a mapping of types (or maybe functions that identify types) to routines that process a node. The routines that process a node would take a node and return its string representation, along with a list of child nodes. And so on.
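To sketch what I mean (all the names here are illustrative, not a real library):
import xml.etree.ElementTree as ET

def render(node, formatters, depth=0):
    # Find the first formatter whose test matches, print the node's
    # representation, then recurse into the children it hands back.
    for matches, fmt in formatters:
        if matches(node):
            text, children = fmt(node)
            print('  ' * depth + text)
            for child in children:
                render(child, formatters, depth + 1)
            return
    print('  ' * depth + repr(node))

formatters = [
    (lambda n: isinstance(n, ET.Element),
     lambda n: ('<%s>' % n.tag, list(n))),
]

render(ET.fromstring('<book><chapter><para/></chapter></book>'), formatters)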
Why I’m looking into pretty-printing
My end goal is to compactly print custom-formatted sections of a DocBook-formatted xml.etree.ElementTree.
(I was surprised to not find more Python support for DocBook. Maybe I missed something there.)
I built some basic functionality into a client called xmlearn that uses lxml. For example, to dump a Docbook file, you could:
xmlearn -i docbook_file.xml dump -f docbook -r book
It's pretty half-ass, but it got me the info I was looking for.
xmlearn has other features too, like the ability to build a graph image and do dumps showing the relationships between tags in an XML document. These are pretty much totally unrelated to this question.
You can also perform a dump to an arbitrary depth, or specify an XPath as a set of starting points. The XPath stuff sort of obsoleted the docbook-specific format, so that isn't really well-developed.
This still isn't really an answer for the question. I'm still hoping that there's a readily customizable pretty printer out there somewhere.
My solution was to replace pprint.PrettyPrinter with a simple wrapper that formats any floats it finds before calling the original printer.
from __future__ import division
import pprint

if not hasattr(pprint, 'old_printer'):
    pprint.old_printer = pprint.PrettyPrinter

class MyPrettyPrinter(pprint.old_printer):
    def _format(self, obj, *args, **kwargs):
        if isinstance(obj, float):
            obj = round(obj, 4)
        return pprint.old_printer._format(self, obj, *args, **kwargs)

pprint.PrettyPrinter = MyPrettyPrinter

def pp(obj):
    pprint.pprint(obj)

if __name__ == '__main__':
    x = [1, 2, 4, 6, 457, 3, 8, 3, 4]
    x = [_ / 17 for _ in x]
    pp(x)
This question may be a duplicate of:
Any way to properly pretty-print ordered dictionaries in Python?
Using pprint.PrettyPrinter
I looked through the source of pprint. It seems to suggest that, in order to enhance pprint(), you'd need to:
subclass PrettyPrinter
override _format()
test with isinstance()
and (if it's not your class) pass back to the base class's _format(), as in the sketch below
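Put together, that would be something like this (the handled class is a stand-in, and _format() is a private API, so this could break across versions):
import pprint

class Special:  # stand-in for the type you want to handle
    pass

class MyPrinter(pprint.PrettyPrinter):
    def _format(self, obj, stream, indent, allowance, context, level):
        if isinstance(obj, Special):
            stream.write('<special>')
        else:
            pprint.PrettyPrinter._format(
                self, obj, stream, indent, allowance, context, level)

MyPrinter(width=20).pprint([Special(), {'a': 1}])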
Alternative
I think a better approach would be just to have your own pprint(), which defers to pprint.pformat when it doesn't know what's up.
For example:
'''Extending pprint'''

from pprint import pformat

class CrazyClass:
    pass

def prettyformat(obj):
    if isinstance(obj, CrazyClass):
        return "^CrazyFoSho^"
    else:
        return pformat(obj)

def prettyp(obj):
    print(prettyformat(obj))

# test
prettyp([1] * 100)
prettyp(CrazyClass())
The big upside here is that you don't depend on pprint internals. It’s explicit and concise.
The downside is that you’ll have to take care of indentation manually.
If you would like to modify the default pretty printer without subclassing, you can use the internal _dispatch table on the pprint.PrettyPrinter class. You can see examples of how dispatching is added for internal types like dictionaries and lists in the source.
Here is how I added a custom pretty printer for MatchPy's Operation type:
import pprint
import matchpy

def _pprint_operation(self, object, stream, indent, allowance, context, level):
    """
    Modified from pprint dict https://github.com/python/cpython/blob/3.7/Lib/pprint.py#L194
    """
    operands = object.operands
    if not operands:
        stream.write(repr(object))
        return
    cls = object.__class__
    stream.write(cls.__name__ + "(")
    self._format_items(
        operands, stream, indent + len(cls.__name__), allowance + 1, context, level
    )
    stream.write(")")

pprint.PrettyPrinter._dispatch[matchpy.Operation.__repr__] = _pprint_operation
Now if I use pprint.pprint on any object that has the same __repr__ as matchpy.Operation, it will use this method to pretty print it. This works on subclasses as well, as long as they don't override the __repr__, which makes some sense! If you have the same __repr__ you have the same pretty printing behavior.
Here is an example of pretty printing some MatchPy operations now:
ReshapeVector(Vector(Scalar('1')),
Vector(Index(Vector(Scalar('0')),
If(Scalar('True'),
Scalar("ReshapeVector(Vector(Scalar('2'), Scalar('2')), Iota(Scalar('10')))"),
Scalar("ReshapeVector(Vector(Scalar('2'), Scalar('2')), Ravel(Iota(Scalar('10'))))")))))
Consider using the pretty module:
http://pypi.python.org/pypi/pretty/0.1
I've only been coding for about 6-9 months. I've probably changed my coding style a number of times after reading some code or reading up on best practices. But one thing I haven't yet come across is a good way to populate the template_dict.
As of now I pass the template_dict through a number of methods (each of which changes/modifies it) and returns it. The result is that every method takes template_dict as its first argument and then returns it, and this doesn't seem to me to be the best solution.
An idea is to have a single method that handles all the changes. But I'm curious whether there's a best practice for this, or is it a "do what you feel like" type of thing?
The two things I think are pretty ugly are sending it as an argument and returning it in every method, and then there's the fact that the var name is written xxx number of times in the code :)
..fredrik
EDIT:
To demonstrate what I mean with template_dict (I thought it was a general term; I got it from the Google implementation of Django's template methods).
I have a dict I pass to the template via the template.render method:
template.render(path, template_dict)  # from google.appengine.ext.webapp import template
I need to manipulate this template_dict in order to send data/dicts/lists to the view (the HTML file), if I'm not mistaken.
So with this in mind, my code usually ends up looking some this like this:
## Main.py file that handles the request and imports classes.
from models import data
from util import foo

class MainHandler(webapp.RequestHandler):
    def get(self):
        template_dict = {'lang': 'en'}

        ## reads all current keys and returns dict w/ new keys, if needed
        template_dict = data.getData(template_dict)

        if 'unsorted_list' in template_dict:
            template_dict = foo(template_dict)

        ## and so on....
        path = os.path.join(os.path.dirname(__file__), 'templates', file)
        self.response.out.write(template.render(path, template_dict))
In most of my applications the many returns and sets don't appear in main.py but rather in other classes and methods. But you should get the general idea.
If the functions in question are all methods of some object foo, then each of them can refer to the context they're building up (I imagine that's what you mean by "template dict"?) as self.ctx or the like. The attribute name is somewhat arbitrary; the key point is that you can keep the context as an attribute of foo, typically initialized to empty in foo's __init__, and incrementally build it up via foo's methods. In the end, foo.ctx is ready for you.
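For instance (all names hypothetical):
class PageContext(object):
    def __init__(self):
        self.ctx = {'lang': 'en'}

    def add_user(self):
        self.ctx['user'] = 'fredrik'

    def add_nav(self):
        self.ctx['nav'] = ['home', 'about']

page = PageContext()
page.add_user()
page.add_nav()
# page.ctx is now ready to hand to template.render(path, page.ctx)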
This doesn't work in a more general case where the functions are all over the place rather than being methods of a single object. In that case ctx does need to be passed to each function (though the function can typically alter it in-place and doesn't need to return it).
I am considering the use of Quantities to define a number together with its unit. This value most likely will have to be stored on disk. As you are probably aware, pickling has one major issue: if you relocate the module, unpickling will not be able to resolve the class, and you will not be able to unpickle the information. There are workarounds for this behavior, but they are, indeed, workarounds.
A solution I've fantasized about for this issue would be to create a string that uniquely encodes a given unit. Once you obtain this encoding from the disk, you pass it to a factory method in the Quantities module, which decodes it into a proper unit instance. The advantage is that even if you relocate the module, everything will still work, as long as you pass the magic string token to the factory method.
Is this a known concept?
Looks like an application of Wheeler's First Principle, "all problems in computer science can be solved by another level of indirection" (the Second Principle adds "but that will usually create another problem";-). Essentially what you need to do is an indirection to identify the type -- entity-within-type will be fine with pickling-like approaches (you can study the sources of pickle.py and copy_reg.py for all the fine details of the latter).
Specifically, I believe that what you want to do is subclass pickle.Pickler and override the save_inst method. Where the current version says:
if self.bin:
    save(cls)
    for arg in args:
        save(arg)
    write(OBJ)
else:
    for arg in args:
        save(arg)
    write(INST + cls.__module__ + '\n' + cls.__name__ + '\n')
you want to write something different than just the class's module and name -- some kind of unique identifier (made up of two strings) for the class, probably held in your own registry or registries; and similarly for the save_global method.
It's even easier for your subclass of Unpickler, because the _instantiate part is already factored out in its own method: you only need to override find_class, which is:
def find_class(self, module, name):
    # Subclasses may override this
    __import__(module)
    mod = sys.modules[module]
    klass = getattr(mod, name)
    return klass
It must take two strings and return a class object; you can do that through your registries, again.
Like always when registries are involved, you need to think about how to ensure you register all objects (classes) of interest, etc, etc. One popular strategy here is to leave pickling alone, but ensure that all moves of classes, renames of modules, etc, are recorded somewhere permanent; this way, just the subclassed unpickler can do all the work, and it can most conveniently do it all in the overridden find_class -- bypassing all issues of registration. I gather you consider this a "workaround" but to me it seems just an extremely simple, powerful and convenient implementation of the "one more level of indirection" concept, which avoids the "one more problem" issue;-).
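A minimal sketch of that strategy, assuming you keep a permanent record of moves (the mapping contents here are made up):
import pickle
import sys

# Permanent record of relocations: old (module, name) -> new (module, name).
MOVES = {
    ('oldpkg.units', 'Unit'): ('newpkg.units', 'Unit'),
}

class MovedClassUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Redirect relocated classes before resolving them as usual.
        module, name = MOVES.get((module, name), (module, name))
        __import__(module)
        return getattr(sys.modules[module], name)

# usage: obj = MovedClassUnpickler(open('data.pkl', 'rb')).load()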