pylint "Undefined variable" in module written in C++/SIP - python

I export several native C++ classes to Python using SIP. I don't use the resulting maplib_sip.pyd module directly, but rather wrap it in a Python package, pymaplib:
# pymaplib/__init__.py
# Make all of maplib_sip available in pymaplib.
from maplib_sip import *
...
def parse_coordinate(coord_str):
    ...
    # LatLon is a class imported from maplib_sip.
    return LatLon(lat_float, lon_float)
Pylint doesn't recognize that LatLon comes from maplib_sip:
error pymaplib parse_coordinate 40 15 Undefined variable 'LatLon'
Unfortunately, the same happens for all the classes from maplib_sip, as well as for most of the code from wxPython (Phoenix) that I use. This effectively makes Pylint worthless for me, as the amount of spurious errors dwarfs the real problems.
additional-builtins doesn't work that well for my problem:
# Both of these don't remove the error:
additional-builtins=maplib_sip.LatLon
additional-builtins=pymaplib.LatLon
# This does remove the error in pymaplib:
additional-builtins=LatLon
# But users of pymaplib still give an error:
# Module 'pymaplib' has no 'LatLon' member
How do I deal with this? Can I somehow tell pylint that maplib_sip.LatLon actually exists? Even better, can it somehow figure that out itself via introspection (which works in IPython, for example)?
I'd rather not have to disable the undefined variable checks, since that's one of the huge benefits of pylint for me.
Program versions:
Pylint 1.2.1,
astroid 1.1.1, common 0.61.0,
Python 3.3.3 [32 bit] on Windows 7

You may want to try the new --ignored-modules option, though I'm not sure it will work in your case, unless you stop using import * (which would probably be a good idea anyway, as Pylint has probably already told you ;).
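If you do keep the star import, the same option can go in your pylintrc instead of on the command line. A sketch (ignored-modules belongs to the typecheck checker; the module name is taken from the question):

```ini
[TYPECHECK]
# Suppress member checks for the SIP-built binary module.
ignored-modules=maplib_sip
```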
Instead, use a short import name, e.g. import maplib_sip as mls, and then the prefixed name, e.g. mls.LatLon, where needed.
Note, though, that the original problem is worth an issue on the Pylint tracker (https://bitbucket.org/logilab/pylint/issues), so some investigation can be done into why it doesn't pick up the members of your SIP-exported module.
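The aliased-import advice can be shown with a runnable sketch. Since maplib_sip is the asker's SIP-built binary and isn't available here, a stand-in module with a hypothetical LatLon class is registered first, purely so the snippet executes anywhere:

```python
import sys
import types

# Stand-in for the SIP-built binary module (hypothetical contents):
fake = types.ModuleType("maplib_sip")

class LatLon:
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon

fake.LatLon = LatLon
sys.modules["maplib_sip"] = fake

# With a named import instead of `from maplib_sip import *`, both
# Pylint and human readers can see exactly where LatLon comes from:
import maplib_sip as mls

def parse_coordinate(coord_str):
    lat_str, lon_str = coord_str.split(",")
    return mls.LatLon(float(lat_str), float(lon_str))

point = parse_coordinate("48.2,16.4")
```

The same pattern applies to any binary module whose members Pylint cannot introspect.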

Related

Unwanted error in Cython code in calling the functions of c

I am a beginner with Cython and I am following the official Cython documentation. There is a section called "Calling C functions", which shows how to import a C function in a .py file (following the 'Pure Python' style described on that page). When I copied the code from the 'Pure Python' section and ran it in my code editor, VS Code, it showed me this error:
ModuleNotFoundError: No module named 'cython.cimports'; 'cython' is not a package
The code officially published on the Cython documentation page is:
from cython.cimports.libc.stdlib import atoi

@cython.cfunc
def parse_charptr_to_py_int(s: cython.p_char):
    assert s is not cython.NULL, "byte string value is NULL"
    return atoi(s)  # note: atoi() has no error detection!
Now I can't understand why this problem occurs when I am following the official Cython documentation.
Note: I had already installed Cython using pip install cython.
The code I wrote in my editor is as follows:
import cython
from cython.cimports.libc.stdlib import atoi

@cython.cfunc
def parse_charptr_to_py_int(s: cython.p_char):
    assert s is not cython.NULL, "byte string value is NULL"
    return atoi(s)  # note: atoi() has no error detection!

if __name__ == '__main__':
    result = parse_charptr_to_py_int("Rahul")
    print(result)
So, if you know why the error occurs and what mistake I have made, please let me know. It will help me understand the concepts of Cython much more clearly and how to implement it. Thank you!!
There are a couple of possible issues:
The pure Python mode cimport is only available in the Cython 3 alpha builds. You've probably installed the older Cython 0.29.x. If you don't want to use the Cython 3 alpha, then you can switch the documentation back to the earlier version: https://cython.readthedocs.io/en/stable/index.html
You need to compile the Cython code for it to work ("pure Python" mode doesn't change that; it just avoids non-Python syntax). See https://cython.readthedocs.io/en/latest/src/quickstart/build.html. Note that the documentation for cimports says:
Note that this does not mean that C libraries become available to Python code. It only means that you can tell Cython what cimports you want to use, without requiring special syntax.
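For comparison, the stable 0.29.x documentation expresses the same example in classic Cython syntax, which sidesteps cython.cimports entirely; this still has to be compiled as a .pyx file:

```cython
# parse_int.pyx -- classic Cython syntax, works on 0.29.x.
from libc.stdlib cimport atoi

cdef parse_charptr_to_py_int(char* s):
    assert s is not NULL, "byte string value is NULL"
    return atoi(s)  # note: atoi() has no error detection!
```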

Imports failing in VScode for pylint when importing pygame

When importing pygame, pylint goes crazy:
E1101:Module 'pygame' has no 'init' member
E1101:Module 'pygame' has no 'QUIT' member
I have searched the net and I have found this:
"python.linting.pylintArgs": ["--ignored-modules=pygame"]
It solves the problem with pygame, but now pylint goes crazy in another way: crazy_pylint.png.
Then I found "python.linting.pylintArgs": ["--ignored-files=pygame"], but what it does is completely disable pylint for the whole directory I am working in.
So how do I tell pylint that everything is OK with pygame?
For E1101:
The problem is that most of Pygame is implemented directly in C. Now, this is all well and dandy in terms of performance; however, pylint (the linter used by VS Code) is unable to scan these C files.
Unfortunately, these same files define a bunch of useful things, namely QUIT and other constants, such as MOUSEBUTTONDOWN, K_SPACE, etc, as well as functions like init or quit.
To fix this, first things first, stop ignoring the pygame module by removing all your arguments in "python.linting.pylintArgs". Trust me, the linter can come in handy.
Now to fix the problems. For your constants (anything in caps), manually import them like so:
from pygame.constants import (
    MOUSEBUTTONDOWN, QUIT, MOUSEMOTION, KEYDOWN
)
You can now use these without prepending them with pygame.:
for event in pygame.event.get():
    if event.type == QUIT:
        pygame.quit()
    if event.type == KEYDOWN:
        # Code
Next, for your init and other functions errors, you can manually help the linter in resolving these, by way of 2 methods:
Either add this somewhere in your code: # pylint: disable=no-member. This will deactivate member validation for the entire file, preventing such errors from being shown.
Or you can encase the line with the error:
# pylint: disable=no-member
pygame.quit()
# pylint: enable=no-member
This is similar to what the first method does, however it limits the effect to only that line.
Finally, for all your other warnings, the solution is to fix them. Pylint is there to show you places in which your code is either pointless, or nonconforming to the Python specs. A quick glance at your screenshot shows for example that your module doesn't have a docstring, that you have declared unused variables...
Pylint is here to aid you in writing concise, clear, and beautiful code. You can ignore these warnings or hide them (with # pylint: disable= and these codes) or spend a little time cleaning up everything.
In the long run, this is the best solution, as it'll make your code more readable and therefore maintainable, and just more pleasing to look at.
For a specific binary module you can whitelist it for pylint. For the pygame module it would be as follows:
{
    "python.linting.pylintArgs": [
        "--extension-pkg-whitelist=pygame"
    ]
}
OP, you can also keep the pylint pygame fix you found working in VS Code by including the VS Code default arguments yourself.
The linter goes nuts (crazy_pylint.png) because you were clobbering the default pylint arguments with your own custom python.linting.pylintArgs.
The pygame module-ignore fix does work, and the linter returns to non-crazy mode if you also include the clobbered default arguments in your custom python.linting.pylintArgs.
From the docs:
These arguments are passed whenever the python.linting.pylintUseMinimalCheckers is set to true (the default).
If you specify a value in pylintArgs or use a Pylint configuration file (see the next section), then pylintUseMinimalCheckers is implicitly set to false.
The defaults VS Code passes, according to https://code.visualstudio.com/docs/python/linting, are:
--disable=all,
--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode
So, here is how to pass all those defaults as well as the --ignored-modules=pygame in user settings within vscode:
"python.linting.pylintArgs": [
"--disable=all",
"--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode",
"--ignored-modules=pygame"
]
Per @C._'s comment above, he's definitely speaking the truth; the linter will help! I'm writing better code with it enabled, for sure.
Also, I discovered that you can further fine-tune your linter with the enable line and the comma-delimited "readable pylint messages" listed here: https://github.com/janjur/readable-pylint-messages/blob/master/README.md
So, to stop ignoring trailing-newlines as well, you would append trailing-newlines to the enable= list.
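Concretely, appending trailing-newlines to the default enable list from earlier would look like this in VS Code's settings.json (a sketch):

```json
"python.linting.pylintArgs": [
    "--disable=all",
    "--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode,trailing-newlines",
    "--ignored-modules=pygame"
]
```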
I really hope this helps you OP :) It helped me!
Thanks for asking the question, and sharing --ignored-modules.

Scapy module blocks PyCharm debugger

I'm working on a project in PyCharm, and I need to debug a certain part of the code.
When I tried to debug, the debugger just "skipped" the breakpoints without stopping at them.
After a lot of unhelpful searching on the web, I found that when I import the Scapy module the debugger doesn't work, and when Scapy isn't imported, everything works just fine.
By the way, I'm working on Ubuntu.
Any ideas?
Came across this problem myself. It is very annoying. After much debugging, got to an answer.
The cause of the problem seems to be the way scapy imports everything into the global namespace and this seems to break PyCharm (name clash, perhaps?).
By the way, this all applies to v2.3.3 of scapy from 18th October, 2016.
As scapy is loading, it eventually hits a line in scapy/all.py:
from scapy.layers.all import *
This loads scapy/layers/all.py which loads scapy/config.py. This last file initialises Conf.load_layers[] to a list of modules (in scapy/layers).
scapy/layers/all.py then loops through this list, calling _import_star() on each module.
After it loads scapy/layers/x509.py, all breakpoints in PyCharm stop working.
I have FOUR solutions for you; pick the one you like best ...
(1) If you don't use anything to do with X509, you could simply remove this module from the list assigned to Conf.load_layers[] in scapy/config.py (line 383 in my copy of config.py). WARNING: THIS IS A REAL HACK - please avoid doing it unless there is no other way forward for you.
If you only need this temporarily while debugging, you can also use this code sample:
from scapy import config
config.Conf.load_layers.remove("x509")
from scapy.all import *
(2) The problem is with symbols being imported into the global namespace. This is fine for classes, bad for constants. There is code in _import_star() that checks the name of the symbol and does NOT load it into the global namespace if it begins with a _ (i.e. a "private" name). You could modify this function to treat the x509 module specially, ignoring names that do not begin with X509_. Hopefully this will import the classes defined in x509 and not the constants. Here is a sample patch:
*** layers/all.py 2017-03-31 12:44:00.673248054 +0100
--- layers/all.py 2017-03-31 12:44:00.673248054 +0100
***************
*** 21,26 ****
--- 21,32 ----
          for name in mod.__dict__['__all__']:
              __all__.append(name)
              globals()[name] = mod.__dict__[name]
+     elif m == "x509":
+         # import but rename as we go ...
+         for name, sym in mod.__dict__.iteritems():
+             if name[0] != '_' and name[:5] != "X509_":
+                 __all__.append("_x509_" + name)
+                 globals()["_x509_" + name] = sym
      else:
          # import all the non-private symbols
          for name, sym in mod.__dict__.iteritems():
WARNING: THIS IS A REAL HACK - please avoid doing it unless there is no other way forward for you.
(3) This is a variation on solution (2), so also A REAL HACK (etc. etc.). You could edit scapy/layers/x509.py and prepend a _ to all constants. For example, all instances of default_directoryName should be changed to _default_directoryName. I found the following constants that needed changing: default_directoryName, reasons_mapping, cRL_reasons, ext_mapping, default_issuer, default_subject, attrName_mapping and attrName_specials. This is nice as it matches a fix applied to x509.py that I found in the scapy git repo ...
(4) You could just update to the next version of scapy. I don't know if this will be v2.3.4 or v2.4, as there is (at the time of writing) no next version released yet. So, while this (lack of a new release) continues, you could update to the latest development version (where they have already fixed this problem on Feb 8th 2017). I use scapy installed under my home directory (rather than in the system python packages location), so I did the following:
pip uninstall scapy
git clone https://github.com/secdev/scapy /tmp/scapy
cd /tmp/scapy
python setup.py install --user
cd -
rm -rf /tmp/scapy
Good luck !
I cannot comment on Spiceisland's response because of a lack of reputation points, but with the current version of scapy (2.3.3.dev532) I can see the same issues with the tls layer as Spiceisland pointed out with x509. Therefore all workarounds and fixes have to be applied accordingly for the tls module.
So the simplest quick-and-dirty fix (you won't be able to use TLS after that):
In scapy/config.py, remove the "tls" element from the load_layers list (that's line 434 in the 2.3.3.dev532 version of scapy).
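Mirroring the earlier x509 code sample, the temporary in-code workaround for tls looks like the sketch below. The try/except stand-in is only there so the snippet runs even where scapy isn't installed; with scapy present, only the three scapy lines matter:

```python
try:
    from scapy import config
except ImportError:
    # Stand-in with the same shape as scapy.config, so the sketch runs
    # without scapy; the list contents here are illustrative only.
    import types
    config = types.SimpleNamespace(
        Conf=types.SimpleNamespace(load_layers=["inet", "tls", "x509"])
    )

if "tls" in config.Conf.load_layers:
    config.Conf.load_layers.remove("tls")
# from scapy.all import *  # import the rest of scapy only after trimming
```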
I have also filed a bug for this issue https://github.com/secdev/scapy/issues/746

Disable all Pylint warnings for a file

We are using Pylint within our build system.
We have a Python package within our code base that has throwaway code, and I'd like to disable all warnings for a module temporarily so I can stop bugging the other devs with these superfluous messages. Is there an easy way to pylint: disable all warnings for a module?
From the Pylint FAQ:
With Pylint < 0.25, add
# pylint: disable-all
at the beginning of the module.
Pylint 0.26.1 and up have renamed that directive to
# pylint: skip-file
(but the first version will be kept for backward compatibility).
In order to make it easier to find which modules are ignored, an information-level message I0013 is emitted. With recent versions of Pylint, if you use the old syntax, an additional I0014 message is emitted.
Pylint has five "categories" for messages (of which I'm aware): Convention (C), Refactor (R), Warning (W), Error (E), and Fatal (F).
These categories were very obvious in the past, but the numbered Pylint messages have now been replaced by names. For example, C0302 is now too-many-lines. But the 'C' tells us that too-many-lines is a Convention message. This is confusing, because Convention messages frequently just show up as a warning, since many systems (such as Syntastic) seem to classify everything as either a warning or an error. However, the Pylint report still breaks things down into these categories, so it's still definitely supported.
Your question specifically refers to Warnings, and all Pylint Warning message names start with 'W'.
It was a bit difficult for me to track this down, but this answer eventually led me to the answer. Pylint still supports disabling entire categories of messages. So, to disable all Warnings, you would do:
disable=W
This can be used at the command-line:
$ pylint --disable=W myfile.py
Or, you can put it in your pylintrc file:
[MESSAGES CONTROL]
disable=W
Note: you may already have the disable option in your rc file, in which case you should append the 'W' to this list.
Or, you can put it inline in your code, where it will work for the scope into which it is placed:
# pylint: disable=W
To disable it for an entire file, it's best to put it at the very top of the file. However, even at the very top of the file, I found I was still getting the trailing-newlines warning message (technically a convention warning, but I'm getting to that).
In my case, I had a library written by someone from long ago. It worked well, so there was really no need to worry about modern Python convention, etc. All I really cared about were the errors that would likely break my code.
My solution was to disable all the Warning, Convention, and Refactoring messages for this one file only by placing the following Pylint command on the first line:
# pylint: disable=W,C,R
Aside from the aforementioned message for trailing newlines, this did exactly what I needed.
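Putting that together, the top of such a throwaway file carries the pragma on its very first line; a minimal sketch with a hypothetical helper:

```python
# pylint: disable=W,C,R
# Only error-class (E, F) messages remain active for this file.
def legacy_helper(value):
    # Deliberately unpolished legacy code; pylint stays quiet about
    # warnings, conventions, and refactoring hints here.
    return value * 2

result = legacy_helper(21)
```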
Yes, you can specify # pylint: skip-file in the file, but it is bad practice to disable all warnings for a file.
If you want to disable specific warnings only, this can be done by adding a comment such as # pylint: disable=message-name to disable the specified message for the remainder of the file, or at least until # pylint: enable=message-name.
Example:
# pylint: disable=no-member
class C123:
    def __init__(self):
        self.foo_x = self.bar_x
# pylint: enable=no-member
class C456:
    def __init__(self):
        self.foo_x = self.bar_x
Another option is to use the --ignore command line option to skip analysis for some files.
My use case is to run pylint *.py to process all files in a directory, except that I want to skip one particular file.
Adding # pylint: skip-file caused Pylint to fail with I: 8, 0: Ignoring entire file (file-ignored). Adding # pylint: disable=file-ignored does not fix that. Presumably, it's a global error rather than a file-specific one.
The solution was to include --disable=file-ignored in the Pylint command options. It took way too long to figure this out; there shouldn't be a file-ignored error when you explicitly ignore a file.
I just did this for pytest tests organised into classes, and it may be useful to someone. These were throwing up lots of errors, but I like the class-based structure when I'm running in the CLI.
I add this below the imports in all my test files and it seems to work. It might be nicer to do it once for the test directory if anyone knows the solution for that, but I don't like to ignore things for the whole repo if I can avoid it:
# Disable pylint errors for tests organised into classes
# pylint: disable=no-self-use too-few-public-methods
# Disable pylint errors for pytest fixtures
# pylint: disable=unused-argument invalid-name
Using it twice didn't break anything; I checked to make sure. Initially I tried brackets for a two-line disable, but that didn't work. I didn't need commas, so I removed them.
If anyone knows of a decorator that can mark organisational test classes, equivalent to @dataclass, let me know.

Getting python -m module to work for a module implemented in C

I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python and one obvious workaround is to add an extra file for that purpose. However I really want to keep things to my one single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m, then you get the error message No code object available for <modulename>.
The -m implementation is in runpy._run_module_as_main. Its essence is:
mod_name, loader, code, fname = _get_module_details(mod_name)
<...>
exec code in run_globals
A compiled module has no "code object" associated with it, so the first statement fails with ImportError("No code object available for <module>"). You need to extend runpy (specifically, _get_module_details) to make it work for a compiled module. I suggest returning a code object constructed from the aforementioned "import mod; mod.main()":
(python 2.6.1)
  code = loader.get_code(mod_name)
  if code is None:
-     raise ImportError("No code object available for %s" % mod_name)
+     if loader.etc[2] == imp.C_EXTENSION:
+         code = compile("import %(mod)s; %(mod)s.main()" % {'mod': mod_name},
+                        "<extension loader wrapper>", "exec")
+     else:
+         raise ImportError("No code object available for %s" % mod_name)
  filename = _get_filename(loader, mod_name)
(Update: fixed an error in format string)
Now...
C:\Documents and Settings\Пользователь>python -m pythoncom
C:\Documents and Settings\Пользователь>
This still won't work for builtin modules. Again, you'll need to invent some notion of "main code unit" for them.
Update:
I've looked through the internals called from _get_module_details and can say with confidence that they don't even attempt to retrieve a code object from a module of type other than imp.PY_SOURCE, imp.PY_COMPILED, or imp.PKG_DIRECTORY. So you have to patch this machinery one way or another for -m to work. Python fails before retrieving anything from your module (it doesn't even check whether the dll is a valid module), so you can't do anything by building it in a special way.
Does your requirement of a single distributed binary allow for the use of an egg? If so, you could package your module with a __main__.py containing your calling code, and the usual __init__.py...
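The __main__.py approach can be sketched end to end. Here the package is written to a temporary directory purely so the example is self-contained; in a real project, __main__.py simply ships alongside the compiled module, whose role is played by __init__.py below:

```python
import os
import runpy
import sys
import tempfile

# Build a throwaway package: __init__.py stands in for the compiled
# module, and __main__.py is the thin wrapper that `python -m pkg` runs.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.mkdir(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("def main():\n    return 'ran main'\n")
with open(os.path.join(pkg, "__main__.py"), "w") as f:
    f.write("import mypkg\nresult = mypkg.main()\n")

sys.path.insert(0, root)
# Equivalent to running `python -m mypkg` on the command line:
namespace = runpy.run_module("mypkg", run_name="__main__")
```

For a package, runpy.run_module executes its __main__ submodule and returns the resulting module globals, which is exactly what -m does for you.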
If you're really adamant, maybe you could extend pkgutil.ImpLoader.get_code to return something for C modules (e.g., maybe a special __code__ function). To do that, I think you're going to have to actually change it in the Python source. Even then, pkgutil uses exec to execute the code block, so it would have to be Python code anyway.
TL;DR: I think you're euchred. While Python modules have code at the global level that runs at import time, C modules don't; they're mostly just a dict namespace. Thus, running a C module doesn't really make sense from a conceptual standpoint. You need some real Python code to direct the action.
I think that you need to start by making a separate file in Python and getting the -m option to work. Then, turn that Python file into a code object and incorporate it into your binary in such a way that it continues to work.
Look up setuptools on PyPI, download the .egg, and take a look at the file. You will see that the first few bytes contain a Python script, and these are followed by a .ZIP file bytestream. Something similar may work for you.
There's a brand new thing that may solve your problem easily. I've just learnt about it and it looks pretty decent to me: http://code.google.com/p/pts-mini-gpl/wiki/StaticPython
