Can I set a custom "system root" path with pylint?

I have a build system that makes a complete system image for an embedded platform. It includes a significant amount of Python code. Currently, I'm linting this using the target platform's Python and Pylint running under QEMU, but it is slow. Really slow.
Is there a way to run the build platform's linter but using all the Python files from the target tree?

I don't know of any builtin way to do that but you could:
Use the init-hook option, which lets you modify sys.path to point at your embedded target's tree (or run anything else you can do in Python); see the sketch after the configuration block below.
Rely on your tests (automated or manual) to catch real errors, and disable known false positives one by one, either inline with # pylint: disable=no-member for each, or directly in the configuration with:
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,
acl_users,
aq_parent
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=SQLObject,optparse.Values,thread._local,_thread._local
# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis). It
# supports qualified module names, as well as Unix pattern matching.
ignored-modules=
# List of decorators that change the signature of a decorated function.
signature-mutators=
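For the first option, a minimal sketch of what the init-hook could look like in your pylintrc (the target rootfs path below is a hypothetical placeholder):

[MASTER]
# Resolve imports against the target tree instead of the build host's site-packages.
init-hook='import sys; sys.path.insert(0, "/path/to/target-rootfs/usr/lib/python3/dist-packages")'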

Related

How can I save dynamically generated modules and reimport them from files?

I have an application that dynamically generates a lot of Python modules with class factories, to eliminate a lot of the redundant boilerplate that makes the code hard to debug across similar implementations. It works well, except that the dynamic generation of the classes across the modules (hundreds of them) takes more time at load than simply importing from a file. So I would like to find a way to save the modules to a file after generation (unless reset), then load from those files, to cut down on bootstrap time for the platform.
Does anyone know how I can save/export auto-generated Python modules to a file for re-import later? I already know that pickling and exporting as a JSON object won't work, because the classes make use of thread locks and other dynamic state, and the classes must be defined before they can be pickled. I need to save the actual class definitions, not instances. The classes are defined with the type() function.
If you have ideas or knowledge of how to do this, I would really appreciate your input.
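For context, a minimal sketch of the kind of generation described above, using type() to build a class and types.ModuleType to wrap it in a module (all names here are hypothetical):

import types

def make_class(name, fields):
    # Build a class at runtime; equivalent to writing it out by hand.
    return type(name, (object,), {field: None for field in fields})

# Generate a module object and populate it with a generated class.
generated = types.ModuleType('generated_handlers')
generated.Handler = make_class('Handler', ['source', 'sink'])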
You’re basically asking how to write a compiler whose input is a module object and whose output is a .pyc file. (One plausible strategy is of course to generate a .py and then byte-compile that in the usual fashion; the following could even be adapted to do so.) It’s fairly easy to do this for simple cases: the .pyc format is very simple (but note the comments there), and the marshal module does all of the heavy lifting for it. One point of warning that might be obvious: if you’ve already evaluated, say, os.getcwd() when you generate the code, that’s not at all the same as evaluating it when loading it in a new process.
The “only” other task is constructing the code objects for the module and each class: this requires concatenating a large number of boring values from the dis module, and will fail if any object encountered is non-trivial. These might be global/static variables/constants or default argument values: if you can alter your generator to produce modules directly, you can probably wrap all of these (along with anything else you want to defer) in function calls by compiling something like
my_global = (lambda: open(os.devnull, 'w'))()
so that you actually emit the function and then a call to it. If you can’t so alter it, you’ll have to have rules to recognize values that need to be constructed in this fashion so that you can replace them with such calls.
Another detail that may be important is closures: if your generator uses local functions/classes, you’ll need to create the cell objects, perhaps via “fake” closures of your own:
def cell(x): return (lambda: x).__closure__[0]
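To make the marshal part concrete, here is a minimal sketch of writing a code object out as a .pyc, assuming CPython 3.7+ (older versions use a different header layout; the module source here is a hypothetical stand-in for generated code):

import importlib.util
import marshal
import time

def write_pyc(code_obj, path):
    # CPython 3.7+ header: magic number, flags, source mtime, source size.
    with open(path, 'wb') as f:
        f.write(importlib.util.MAGIC_NUMBER)
        f.write((0).to_bytes(4, 'little'))               # flags: timestamp-based pyc
        f.write(int(time.time()).to_bytes(4, 'little'))  # fake source mtime
        f.write((0).to_bytes(4, 'little'))               # fake source size
        marshal.dump(code_obj, f)

code = compile("class Foo:\n    x = 1\n", "generated.py", "exec")
write_pyc(code, "generated.pyc")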

How to define constant where scope is limited locally within module

How can I convey to other developers that a particular constant is designed to be used only locally, within its module?
Consider the example below, from MyScript.py:
PATH='Some configurable path'
How do I define the PATH constant so that its scope is limited to this particular module? Does it have to be prefixed with a double underscore?
I just want to understand how to convey to other developers that this constant is designed to be used locally.
Then prefix it with a single underscore (=> _PATH = ...). This is the convention for specifying that a name is not part of the public API (it works for every kind of name: module-level names, class or instance attributes, etc.).
This won't technically prevent anyone from using it (just like ALL_UPPER won't technically make it a constant), but as long as you respect this naming rule the intent is clear to every pythonista, and anyone messing with it is on their own.
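A small illustration of the convention in practice (module and names are hypothetical):

# myscript.py
_PATH = 'Some configurable path'   # internal: star-imports skip underscore names
PUBLIC_PATH = '/etc/myapp'         # public API

# elsewhere:
# from myscript import *   # binds PUBLIC_PATH but not _PATH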

Python library module implementation

I am trying to write, and understand, some Python code, and I have been struggling to work out how Python libraries are imported. Let me describe my situation.
I am trying to mock a Raspberry-Pi-only Python library (RPi.GPIO) in order to run some unit tests on my (x86) laptop. To accomplish that, I thought I should just define the same functions and variables as the GPIO module, with all the function bodies empty (just pass). So I had a look at the RPi.GPIO module.
Although I thought I would find the actual implementation of the GPIO methods there, I actually saw that their bodies were empty. For example:
def add_event_detect(*args, **kwargs): # real signature unknown
    """
    Enable edge detection events for a particular GPIO channel.
    channel      - either board pin number or BCM number depending on which mode is set.
    edge         - RISING, FALLING or BOTH
    [callback]   - A callback function for the event (optional)
    [bouncetime] - Switch bounce timeout in ms for callback
    """
    pass
So the question is: where is the actual implementation of these functions, and what is the point of the empty body (just the pass keyword and the docstring)? How, and by whom, is this method overridden so that it gets the desired functionality?
The actual implementation of add_event_detect is in native C code, which you can find in your local virtualenv folder (or, as Jean Jung indicates in the comments, in this online implementation of RPi.GPIO).
Python modules can be written entirely in Python, but extensions are often written in C as described in the Python docs.
That stub implementation you see (whose body is just pass) is generated based on the native implementation. I suspect you are using PyCharm, which generates these stubs automatically.
It should be a wrapper for a C function.
And if you want to override __import__, as Zizouz212 mentioned, use import hooks instead.
Here is a PEP describing import hooks:
https://www.python.org/dev/peps/pep-0302/
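As for the original goal of running the unit tests on an x86 laptop, one option that avoids re-implementing the module by hand is to stub it out in sys.modules before the code under test imports it. A minimal sketch using the standard library's unittest.mock (treat it as a starting point, not a drop-in solution):

import sys
from unittest import mock

# Install a stand-in before anything imports RPi.GPIO.
fake_gpio = mock.MagicMock()
sys.modules['RPi'] = mock.MagicMock(GPIO=fake_gpio)
sys.modules['RPi.GPIO'] = fake_gpio

import RPi.GPIO as GPIO  # resolves to the mock

GPIO.setmode(GPIO.BCM)                  # recorded no-op
GPIO.add_event_detect(17, GPIO.RISING)  # safe to call on a laptop
fake_gpio.add_event_detect.assert_called_once_with(17, fake_gpio.RISING)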

Documenting a non-existing member with Doxygen

I'm trying to document a python class using Doxygen. The class exposes a set of properties over d-bus, but these have no corresponding public getters/setters in the python class. Instead, they are implemented through a d-bus properties interface (Set/Get/GetAll/Introspect).
What I want to do is to be able to document these properties using something like this:
## @property package::Class::Name description
The whole package::Class works (the same method finds functions, so it finds the right class).
When running doxygen I get the following error:
warning: documented function ``package::Class::Name' was not declared or defined.
I can live with a warning, but unfortunately the property fails to appear in the documentation generated for the class, so it is not only a warning: the property is silently dropped as well.
So, my question is, how, if I can, do I make the non-existing property member appear in the generated docs?
Define the attribute inside an if 0: block:
## @class X
## @brief this is useless
class X:
    if 0:
        ## @brief whatevs is a property that doesn't exist in spacetime
        ##
        ## It is designed to make bunny cry.
        whatevs = property
This will cause it to exist in the documentation (tested with doxygen 1.8.1.2-1 on debian-squeeze). The attribute will never exist at runtime, and in fact it looks like the Python bytecode optimizer eliminates the if statement and its body altogether.
I looked into something similar previously and couldn't find a direct way to coax Doxygen into documenting an undefined member. There are two basic kludges you can use here:
1.) Generate a dummy object (or dummy members) for Doxygen to inventory, even though they don't actually exist in the live code.
2.) If the adjustments you need are fairly predictable and regular, you could write an INPUT_FILTER for Doxygen that takes your files and converts them before parsing. There are some issues with this method: mostly, if you plan on including the code in the documentation and the filter has to add or remove lines from the file, the line numbers Doxygen reports will be off, and any code windows shown with the documentation will be off by that number of lines. You can also enable the option to filter the displayed sources to adjust for this, but depending on who consumes your documentation, it may be confusing for the copy in Doxygen not to match the real source exactly.
In our case we use a Python script which Doxygen runs from the command line with the file path as the argument. We read the indicated file and write what we want Doxygen to interpret instead to stdout. If you need the source copies displayed in Doxygen to be filtered as well, you can set FILTER_SOURCE_FILES to YES.
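A minimal sketch of such a filter script (set INPUT_FILTER = "python3 doxy_filter.py" in the Doxyfile; the injected stub and class name are hypothetical):

#!/usr/bin/env python3
import sys

# Doxygen passes the source file path as the only argument and parses stdout.
with open(sys.argv[1]) as f:
    text = f.read()

# Inject a documented stub so Doxygen has a member to attach the docs to.
stub = ("class Class:\n"
        "    ## @brief Name: a D-Bus property with no Python-level definition.\n"
        "    Name = property\n")
sys.stdout.write(text.replace("class Class:\n", stub, 1))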

What does the underscore prefix in a Python file name mean?

In CherryPy, for example, there are files like:
__init__.py
_cptools.py
How are they different? What does this mean?
__...__ indicates a reserved Python name (both in file names and in other names). You shouldn't invent your own names using the double-underscore notation; the existing ones have special functionality.
In this particular example, __init__.py defines the 'main' unit for a package; it also causes Python to treat the specific directory as a package. It is the unit that will be used when you call import cherryPy (and cherryPy is a directory). This is briefly explained in the Modules tutorial.
Another example is the __eq__ method which provides equality comparison for a class. You are allowed to call those methods directly (and you use them implicitly when you use the == operator, for example); however, newer Python versions may define more such methods and thus you shouldn't invent your own __-names because they might then collide. You can find quite a detailed list of such methods in Data model docs.
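For instance, a class can opt into == comparison by defining __eq__:

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # Called implicitly by the == operator.
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

Point(1, 2) == Point(1, 2)   # True, via Point.__eq__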
_... is often used to mark an 'internal' name. For example, modules starting with _ shouldn't be used directly; similarly, methods starting with _ are supposed to be private; and so on. It's just a convention, but you should respect it.
These, and other, naming conventions are described in detail in Style Guide for Python Code - Descriptive: Naming Styles
Briefly:
__double_leading_and_trailing_underscore__: "magic" objects or attributes that live in user-controlled namespaces. E.g. __init__, __import__ or __file__. Never invent such names; only use them as documented.
_single_leading_underscore: weak "internal use" indicator. E.g. from M import * does not import objects whose name starts with an underscore.
__init__.py is a special file that, when present in a folder, turns that folder into a package. When the package is imported, __init__.py gets executed. The other one is just a naming convention, but I would guess it means you shouldn't import that file directly.
Take a look here: 6.4. Packages, for an explanation of how to create packages.
General rule: if anything in Python is named __anything__, then it is something special, and you should read about it before using it (e.g. magic functions).
The currently chosen answer already gives a good explanation of the double-underscore notation for __init__.py.
And I believe there is no real need for the _cptools.py notation in a file name. It is presumably an unnecessarily extended application of the "single leading underscore" rule from the Style Guide for Python Code - Descriptive: Naming Styles:
_single_leading_underscore: weak "internal use" indicator. E.g. from M import * does not import objects whose name starts with an underscore.
If anything, the said Style Guide actually argues against using _single_leading_underscore.py in a file name. Its Package and Module Names section only mentions such usage when a module is implemented in C/C++.
In general, the _single_leading_underscore notation is typically seen in function names, method names and member variables, to differentiate them from normal public members.
There is little need (if any at all) to use _single_leading_underscore.py in a file name, because developers are not scrapers: they are unlikely to pick up a file based on its file name. They follow a package's highest-level API (technically speaking, its exposed entities, defined by __all__), so the file names are not even noticeable, let alone a factor in whether a file (i.e. a module) gets used.
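To make the CherryPy-style layout concrete, here is a hypothetical package that keeps its implementation in an underscore-prefixed module and exposes it through __init__.py:

# mypkg/_tools.py (internal implementation module)
class Tool:
    """Public class implemented in an internal module."""

# mypkg/__init__.py (runs on "import mypkg")
from mypkg._tools import Tool

__all__ = ['Tool']   # "from mypkg import *" binds only Tool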
