Scapy module blocks PyCharm debugger - python

I'm working on a project in PyCharm, and I need to debug a certain part of the code.
When I tried to debug, the debugger just "skipped" the breakpoints without stopping at them.
After a lot of unhelpful searching on the web, I found that the debugger stops working when I import the Scapy module; when Scapy isn't imported, everything works just fine.
By the way, I'm working on Ubuntu.
Any ideas?

I came across this problem myself. It is very annoying. After much debugging, I got to an answer.
The cause of the problem seems to be the way scapy imports everything into the global namespace and this seems to break PyCharm (name clash, perhaps?).
By the way, this all applies to v2.3.3 of scapy from 18th October, 2016.
As scapy is loading, it eventually hits a line in scapy/all.py:
from scapy.layers.all import *
This loads scapy/layers/all.py which loads scapy/config.py. This last file initialises Conf.load_layers[] to a list of modules (in scapy/layers).
scapy/layers/all.py then loops through this list, calling _import_star() on each module.
After it loads scapy/layers/x509.py, all breakpoints in PyCharm stop working.
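(If you want to verify this on your own install before trying a fix, a rough, hypothetical way to bisect is to import the layers one by one with a breakpoint set and note the last one after which it still fires. Untested sketch, same config trick as used further below:)
from scapy import config

for layer in list(config.Conf.load_layers):
    __import__("scapy.layers." + layer)
    print("loaded scapy.layers." + layer)  # put a PyCharm breakpoint here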
I have four solutions for you; pick the one you like best ...
(1) If you don't use anything to do with X509, you could simply remove this module from the list assigned to Conf.load_layers[] in scapy/config.py (line 383 in my copy of config.py). WARNING: THIS IS A REAL HACK - please avoid doing it unless there is no other way forward for you.
If you only need the workaround temporarily while debugging, you can also use this code sample:
from scapy import config
config.Conf.load_layers.remove("x509")
from scapy.all import *
(2) The problem is with symbols being imported into the global namespace. This is fine for classes, but bad for constants. There is code in _import_star() that checks the name of each symbol and does NOT load it into the global namespace if it begins with a _ (i.e. a "private" name). You could modify this function to treat the x509 module specially by ignoring names that do not begin with X509_. Hopefully this will import the classes defined in x509 and not the constants. Here is a sample patch:
*** layers/all.py 2017-03-31 12:44:00.673248054 +0100
--- layers/all.py 2017-03-31 12:44:00.673248054 +0100
***************
*** 21,26 ****
--- 21,32 ----
          for name in mod.__dict__['__all__']:
              __all__.append(name)
              globals()[name] = mod.__dict__[name]
+     elif m == "x509":
+         # import but rename as we go ...
+         for name, sym in mod.__dict__.iteritems():
+             if name[0] != '_' and name[:5] != "X509_":
+                 __all__.append("_x509_" + name)
+                 globals()["_x509_" + name] = sym
      else:
          # import all the non-private symbols
          for name, sym in mod.__dict__.iteritems():
WARNING: THIS IS A REAL HACK - please avoid doing it unless there is no other way forward for you.
(3) This is a variation on solution (2), so also A REAL HACK (etc. etc.). You could edit scapy/layers/x509.py and prepend a _ to all constants. For example, all instances of default_directoryName should be changed to _default_directoryName. I found the following constants that needed changing: default_directoryName, reasons_mapping, cRL_reasons, ext_mapping, default_issuer, default_subject, attrName_mapping and attrName_specials. This is nice as it matches a fix applied to x509.py that I found in the scapy git repo ...
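If you'd rather script those renames than edit by hand, something like this rough sketch would do it (back up x509.py first; the path and the whole-word regex are my assumptions, not part of the original fix):
import re

names = ["default_directoryName", "reasons_mapping", "cRL_reasons",
         "ext_mapping", "default_issuer", "default_subject",
         "attrName_mapping", "attrName_specials"]

path = "scapy/layers/x509.py"  # adjust to wherever your scapy is installed
with open(path) as f:
    src = f.read()
for n in names:
    # prepend an underscore to each whole-word occurrence
    src = re.sub(r"\b%s\b" % n, "_" + n, src)
with open(path, "w") as f:
    f.write(src)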
(4) You could just update to the next version of scapy. I don't know if this will be v2.3.4 or v2.4, as there is (at the time of writing) no next version released yet. So, while this (lack of a new release) continues, you could update to the latest development version (where they have already fixed this problem on Feb 8th 2017). I use scapy installed under my home directory (rather than in the system python packages location), so I did the following:
pip uninstall scapy
git clone https://github.com/secdev/scapy /tmp/scapy
cd /tmp/scapy
python setup.py install --user
cd -
rm -rf /tmp/scapy
Good luck!

I cannot comment on Spiceisland's response because of a lack of reputation points, but with the current version of Scapy (2.3.3.dev532) I see the same issue with the tls layer that Spiceisland pointed out with x509. Therefore all the workarounds and fixes have to be applied accordingly for the tls module.
So simplest quick and dirty fix (and you won't be able to use TLS after that):
In scapy/config.py remove "tls" element from load_layers list (that's line 434 in the 2.3.3.dev532 version of scapy)
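Alternatively, mirroring the x509 workaround above, you can drop the layer at runtime without editing Scapy itself (same pattern, just a different layer name):
from scapy import config
config.Conf.load_layers.remove("tls")
from scapy.all import *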
I have also filed a bug for this issue https://github.com/secdev/scapy/issues/746

Problem with Unit Testing Exercise, module not found

I am stuck on an exercise. These are the files I was given:
A readme file:
Before you begin, make sure to run this command in your terminal to install pytest:
pip install -U pytest
Then, to run pytest, just enter:
pytest
Right now, not all of the tests should pass. Fix the function to pass all its tests! Once all your tests pass, try writing some additional unit tests of your own!
A "compute-launch.py" file:
def days_until_launch(current_day, launch_day):
    """Returns the days left before launch.

    current_day (int) - current day in integer
    launch_day (int) - launch day in integer
    """
    return launch_day - current_day
A "test-compute-launch.py" file:
from compute_launch import days_until_launch

def test_days_until_launch_4():
    assert(days_until_launch(22, 26) == 4)

def test_days_until_launch_0():
    assert(days_until_launch(253, 253) == 0)

def test_days_until_launch_0_negative():
    assert(days_until_launch(83, 64) == 0)

def test_days_until_launch_1():
    assert(days_until_launch(9, 10) == 1)
This is my problem:
ModuleNotFoundError: No module named 'compute_launch'
I have tried looking at other Stack Overflow threads about the same "no module named" error, but I could not work out from them how to fix this. I have installed pytest. I need to be able to run the tests so I can see which ones pass and which fail. I don't need help with fixing or writing the unit tests; I only need help with running them.
Thank you.
You have saved the file as compute-launch.py but you are importing the function from compute_launch.
Notice that one has a hyphen while the other has an underscore.
The file name you used is invalid. As stated in PEP 8:
Package and Module Names
Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.
You have to rename the file and remove the dash. So change it from:
compute-launch.py
To:
compute_launch.py
Do the same with your test file: rename test-compute-launch.py to test_compute_launch.py. The import should stay the same:
from compute_launch import days_until_launch
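If you prefer doing the renames from Python rather than your file manager, a minimal one-off script (run from the exercise directory) would be:
import os

os.rename("compute-launch.py", "compute_launch.py")
os.rename("test-compute-launch.py", "test_compute_launch.py")
After that, running pytest in that directory should discover and run the tests, as the readme describes.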

What does it mean to "initialize the Julia runtime" when exporting compiled .dll or .so files for use in other languages?

I'm trying to compile a usable .dll file from Julia to be used in Python, as I've already written a large GUI in Python and need some fast optimization work done. Normally I would just use PyJulia or some "live" call; however, this program needs to be compiled for distribution within my research team, so whatever solution I end up with needs to be able to run on its own (without Julia or Python actually installed).
Right now I'm able to create .dll files via PackageCompiler.jl, something I learned from previous posts on Stack Overflow. However, I hit an error when trying to run these files in Python via the following code.
Julia mock package
module JuliaFunctions
# Pkg.add("BlackBoxOptim")
Base.@ccallable function my_main_function(x::Cfloat, y::Cfloat)::Cfloat
    z = 0
    for i in 1:x
        z += i ^ y
    end
    return z
end
# function julia_main()
#     print("Hello from a compiled executable!")
# end
export my_main_function
end # module
Julia script to use PackageCompiler
# using PackageCompiler
using Pkg
# Pkg.develop(path="JuliaFunctions") # This is how you add a local package
# include("JuliaFunctions/src/JuliaFunctions.jl") # this is how you add a local module
using PackageCompiler
# Pkg.add(path="JuliaFunctions")
@time create_sysimage(:JuliaFunctions, sysimage_path="JuliaFunctions.dll")
Trying to use the resulting .dll in CTypes in Python
import os
import ctypes
from ctypes.util import find_library
from ctypes import *

path = os.path.dirname(os.path.realpath(__file__)) + '\\JuliaFunctions.dll'
# _lib = cdll.LoadLibrary(ctypes.util.find_library(path)) # same error
# hllDll = ctypes.WinDLL(path, winmode=0) # same error
with os.add_dll_directory(os.path.dirname(os.path.realpath(__file__))):
    _lib = ctypes.CDLL(path, winmode=0)
I get
OSError: [WinError 127] The specified procedure could not be found
With my current understanding, this means that ctypes found the DLL and imported it, but didn't find... something? I've yet to fully grasp how this behaves.
I've verified the function my_main_function is exported in the .dll file via Nirsoft's DLL Export Viewer. Users from previous similar issues have noted that this sysimage is already callable and should work, but they always add at the end something along the lines of "Note that you will also in general need to initialize the Julia runtime."
What does this mean? Is this even something that can be done independently of a Julia installation? The dev docs in PackageCompiler mention this, but only to say that julia_main is automatically included in the .dll file and gets called as a sort of launch point. That function is also being exported correctly into the .dll file the above code creates, as confirmed in Nirsoft's export viewer output.
Edit 1
Inexplicably, I've rebuilt this .dll on another machine and made progress. Now, the dll is imported correctly. I'm not sure yet why this worked on a fresh Julia install + Python venv, but I'm going to reinstall them on the other one and update this if anything changes. For anyone encountering this, also note you need to specify the expected output, whatever it may be. In my case this is done by adding (after the import):
_lib.testmethod1.restype = c_double # switched from Cfloat earlier, a lot has changed.
_lib.testmethod1.argtypes = [c_double, c_double] # (defined by ctypes)
The current error is now OSError: exception: access violation writing 0x0000000000000024 when trying to actually use the function, which is specific to Python. Any help on this would also be appreciated.
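For what it's worth, "initializing the Julia runtime" generally means loading Julia's own runtime library (libjulia) and starting it against your sysimage before calling any @ccallable export; an uninitialized runtime is a plausible cause of access violations like the one above. Below is a rough, untested ctypes sketch of the idea. The DLL names and paths are hypothetical, and while jl_init_with_image is Julia's documented C embedding call, the exact symbol and its availability vary by Julia version, so treat every name here as an assumption to verify:
import ctypes

# Assumption: Julia's bin directory (containing libjulia.dll) is on the DLL search path.
jl = ctypes.CDLL("libjulia.dll")

# Initialize the runtime against the compiled sysimage before calling into it.
jl.jl_init_with_image.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
jl.jl_init_with_image(b"C:\\Julia\\bin", b"JuliaFunctions.dll")  # paths are hypothetical

# Only now is it safe to call the @ccallable exports.
lib = ctypes.CDLL("JuliaFunctions.dll")
lib.my_main_function.restype = ctypes.c_float
lib.my_main_function.argtypes = [ctypes.c_float, ctypes.c_float]
print(lib.my_main_function(10.0, 2.0))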

Unexpected keyword arg in decorator : Python [duplicate]

I'm trying to disable warning C0321 ("more than one statement on a single line" -- I often put if statements with short single-line results on the same line), in Pylint 0.21.1 (if it matters: astng 0.20.1, common 0.50.3, and Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56)).
I've tried adding disable=C0321 in the Pylint configuration file, but Pylint insists on reporting it anyway. Variations on that line (like disable=0321 or disable=C321) are flagged as errors, so Pylint does recognize the option properly. It's just ignoring it.
Is this a Pylint bug, or am I doing something wrong? Is there a way around this?
I'd really like to get rid of some of this noise.
pylint --generate-rcfile shows it like this:
[MESSAGES CONTROL]
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time.
#enable=
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once).
#disable=
So it looks like your ~/.pylintrc should have the disable= line(s) inside a [MESSAGES CONTROL] section.
Starting from Pylint v. 0.25.3, you can use the symbolic names for disabling warnings instead of having to remember all those code numbers. E.g.:
# pylint: disable=locally-disabled, multiple-statements, fixme, line-too-long
This style is more instructive than cryptic error codes, and also more practical since newer versions of Pylint only output the symbolic name, not the error code.
The correspondence between symbolic names and codes can be found here.
A disable comment can be inserted on its own line, applying the disable to everything that comes after in the same block. Alternatively, it can be inserted at the end of the line for which it is meant to apply.
If Pylint outputs "Locally disabling" messages, you can get rid of them by including the disable locally-disabled first as in the example above.
I had this problem using Eclipse and solved it as follows:
In the Pylint folder (e.g. C:\Python26\Lib\site-packages\pylint), hold Shift, right-click and choose to open the Windows command prompt in that folder. Then type:
lint.py --generate-rcfile > standard.rc
This creates the standard.rc configuration file. Open it in Notepad and under [MESSAGES CONTROL], uncomment
disable= and add the message ID's you want to disable, e.g.:
disable=W0511, C0321
Save the file, and in Eclipse → Window → Preferences → PyDev → pylint, in the arguments box, type:
--rcfile=C:\Python26\Lib\site-packages\pylint\standard.rc
Now it should work...
You can also add a comment at the top of your code that will be interpreted by Pylint:
# pylint: disable=C0321
Pylint message codes.
Adding e.g. --disable-ids=C0321 in the arguments box does not work.
All available Pylint messages are stored in the dictionary _messages, an attribute of an instance of the pylint.utils.MessagesHandlerMixIn class. When running Pylint with the argument --disable-ids=... (at least without a configuration file), this dictionary is initially empty, raising a KeyError exception within Pylint (in pylint.utils.MessagesHandlerMixIn.check_message_id()).
In Eclipse, you can see this error message in the Pylint console (Window → Show View → Console, then select the Pylint console from the console options beside the console icon).
To disable a warning locally in a block, add
# pylint: disable=C0321
to that block.
There are several ways to disable warnings & errors from Pylint. Which one to use has to do with how globally or locally you want to apply the disablement -- an important design decision.
Multiple Approaches
In one or more pylintrc files.
This involves more than the ~/.pylintrc file (in your $HOME directory) as described by Chris Morgan. Pylint will search for rc files, with a precedence that values "closer" files more highly:
A pylintrc file in the current working directory; or
If the current working directory is in a Python module (i.e. it contains an __init__.py file), searching up the hierarchy of Python modules until a pylintrc file is found; or
The file named by the environment variable PYLINTRC; or
If you have a home directory that isn’t /root:
~/.pylintrc; or
~/.config/pylintrc; or
/etc/pylintrc
Note that most of these files are named pylintrc -- only the file in ~ has a leading dot.
To your pylintrc file, add lines to disable specific pylint messages. For example:
[MESSAGES CONTROL]
disable=locally-disabled
Further disables from the pylint command line, as described by Aboo and Cairnarvon. This looks like pylint --disable=bad-builtin. Repeat --disable to suppress additional items.
Further disables from individual Python code lines, as described by Imolit. These look like some statement # pylint: disable=broad-except (extra comment on the end of the original source line) and apply only to the current line. My approach is to always put these on the end of other lines of code so they won't be confused with the block style, see below.
Further disables defined for larger blocks of Python code, up to complete source files.
These look like # pragma pylint: disable=bad-whitespace (note the pragma keyword).
These apply to every line after the pragma. Putting a block of these at the top of a file makes the suppressions apply to the whole file. Putting the same block lower in the file makes them apply only to lines following the block. My approach is to always put these on a line of their own so they won't be confused with the single-line style, see above.
When a suppression should only apply within a span of code, use # pragma pylint: enable=bad-whitespace (now using enable not disable) to stop suppressing.
Note that disabling for a single line uses the # pylint syntax while disabling for this line onward uses the # pragma pylint syntax. These are easy to confuse especially when copying & pasting.
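For example, following the conventions described above (the message name is just an illustration):
x = 1; y = 2  # pylint: disable=multiple-statements

# pragma pylint: disable=multiple-statements
a = 1; b = 2  # suppressed from the pragma onward
# pragma pylint: enable=multiple-statements
c = 3; d = 4  # reported again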
Putting It All Together
I usually use a mix of these approaches.
I use ~/.pylintrc for absolutely global standards -- very few of these.
I use project-level pylintrc at different levels within Python modules when there are project-specific standards. Especially when you're taking in code from another person or team, you may find they use conventions that you don't prefer, but you don't want to rework the code. Keeping the settings at this level helps not spread those practices to other projects.
I use the block style pragmas at the top of single source files. I like to turn the pragmas off (stop suppressing messages) in the heat of development even for Pylint standards I don't agree with (like "too few public methods" -- I always get that warning on custom Exception classes) -- but it's helpful to see more / maybe all Pylint messages while you're developing. That way you can find the cases you want to address with single-line pragmas (see below), or just add comments for the next developer to explain why that warning is OK in this case.
I leave some of the block-style pragmas enabled even when the code is ready to check in. I try to use few of those, but when it makes sense for the module, it's OK to do as documentation. However I try to leave as few on as possible, preferably none.
I use the single-line-comment style to address especially potent errors. For example, if there's a place where it actually makes sense to do except Exception as exc, I put the # pylint: disable=broad-except on that line instead of a more global approach because this is a strange exception and needs to be called out, basically as a form of documentation.
Like everything else in Python, you can act at different levels of indirection. My advice is to think about what belongs at what level so you don't end up with a too-lenient approach to Pylint.
This is a FAQ:
4.1 Is it possible to locally disable a particular message?
Yes, this feature has been added in Pylint 0.11. This may be done by adding
# pylint: disable=some-message,another-one
at the desired block level or at the end of the desired line of code.
4.2 Is there a way to disable a message for a particular module only?
Yes, you can disable or enable (globally disabled) messages at the
module level by adding the corresponding option in a comment at the
top of the file:
# pylint: disable=wildcard-import, method-hidden
# pylint: enable=too-many-lines
You can disable messages by:
numerical ID: E1101, E1102, etc.
symbolic message: no-member, undefined-variable, etc.
the name of a group of checks. You can grab those with pylint --list-groups.
category of checks: C, R, W, etc.
all the checks with all.
See the documentation (or run pylint --list-msgs in the terminal) for the full list of Pylint's messages. The documentation also provides a nice example of how to use this feature.
You can also use the following command:
pylint --disable=C0321 test.py
My Pylint version is 0.25.1.
You just have to add one line to disable what you want to disable.
E.g.,
#pylint: disable = line-too-long, too-many-lines, no-name-in-module, import-error, multiple-imports, pointless-string-statement, wrong-import-order
Add this at the very beginning of your module.
In case this helps someone, if you're using Visual Studio Code, it expects the file to be in UTF-8 encoding. To generate the file, I ran pylint --generate-rcfile | out-file -encoding utf8 .pylintrc in PowerShell.
As per Pylint documentation, the easiest is to use this chart:
C - convention-related checks
R - refactoring-related checks
W - various warnings
E - errors, for probable bugs in the code
F - fatal, if an error occurred which prevented Pylint from doing further processing
So one can use:
pylint -j 0 --disable=I,E,R,W,C,F YOUR_FILES_LOC
Sorry for diverging a bit from the initial question about the poster's general preference, which would be better addressed by a global configuration file.
But, as in many popular answers, I tend to prefer seeing in my code what could trigger warnings, and it eventually informs contributors as well.
My comment on @imolit's answer had to stay short, so here are some details.
For multiple-statements message, it's probably better to disable it at block or module level, like this
# pylint: disable=multiple-statements
My use-case being now attribute-defined-outside-init in a unittest setup(), I opted for a line-scoped message disabling, using the message code to avoid the line-too-long issue.
class ParserTest(unittest.TestCase):
    def setUp(self):
        self.parser = create_parser()  # pylint: disable=W0201
The correspondence can be found locally with a command like:
$ pylint --list-msgs | grep 'outside-init'
:attribute-defined-outside-init (W0201): *Attribute %r defined outside __init__*
Of course, you would similarly retrieve the symbolic name from the code.
Python syntax does permit more than one statement on a line, separated by semicolon (;). However, limiting each line to one statement makes it easier for a human to follow a program's logic when reading through it.
So another way of solving this issue is to understand why the lint message is there, and to not put more than one statement on a line.
Yes, you may find it easier to write multiple statements per line; however, Pylint is there for every other reader of your code, not just you.
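For instance, rather than silencing the message on a line like this (the names, including cleanup(), are purely illustrative):
if done: cleanup(); return result  # triggers multiple-statements
you can spread the statements out and need no pragma at all:
if done:
    cleanup()
    return result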
My pylint kept ignoring the disable list in my .pylintrc. Finally, I realized that I was executing:
pylint --disable=all --enable=F,E,W
which was overriding the disable list in my .pylintrc.
The correct command to show only Fatal, Error, and Warning messages is:
pylint --disable=C,R
Edit "C:\Users\Your User\AppData\Roaming\Code\User\settings.json"
and add 'python.linting.pylintArgs' with its lines at the end as shown below:
{
    "team.showWelcomeMessage": false,
    "python.dataScience.sendSelectionToInteractiveWindow": true,
    "git.enableSmartCommit": true,
    "powershell.codeFormatting.useCorrectCasing": true,
    "files.autoSave": "onWindowChange",
    "python.linting.pylintArgs": [
        "--load-plugins=pylint_django",
        "--errors-only"
    ]
}

pylint "Undefined variable" in module written in C++/SIP

I export several native C++ classes to Python using SIP. I don't use the resulting maplib_sip.pyd module directly, but rather wrap it in a Python package, pymaplib:
# pymaplib/__init__.py
# Make all of maplib_sip available in pymaplib.
from maplib_sip import *

...

def parse_coordinate(coord_str):
    ...
    # LatLon is a class imported from maplib_sip.
    return LatLon(lat_float, lon_float)
Pylint doesn't recognize that LatLon comes from maplib_sip:
error pymaplib parse_coordinate 40 15 Undefined variable 'LatLon'
Unfortunately, the same happens for all the classes from maplib_sip, as well as for most of the code from wxPython (Phoenix) that I use. This effectively makes Pylint worthless for me, as the amount of spurious errors dwarfs the real problems.
additional-builtins doesn't work that well for my problem:
# Both of these don't remove the error:
additional-builtins=maplib_sip.LatLon
additional-builtins=pymaplib.LatLon
# This does remove the error in pymaplib:
additional-builtins=LatLon
# But users of pymaplib still give an error:
# Module 'pymaplib' has no 'LatLon' member
How do I deal with this? Can I somehow tell pylint that maplib_sip.LatLon actually exists? Even better, can it somehow figure that out itself via introspection (which works in IPython, for example)?
I'd rather not have to disable the undefined variable checks, since that's one of the huge benefits of pylint for me.
Program versions:
Pylint 1.2.1,
astroid 1.1.1, common 0.61.0,
Python 3.3.3 [32 bit] on Windows7
You may want to try the new --ignored-modules option, though I'm not sure it will work in your case. Besides, you could stop using import * (which would probably be a good idea, as Pylint has probably already told you ;).
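For reference, a sketch of what that option might look like in your pylintrc (ignored-modules is documented under the [TYPECHECK] section; double-check against your Pylint version):
[TYPECHECK]
ignored-modules=maplib_sip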
Rather, use a short import name, e.g. import maplib_sip as mls, then the prefixed name, e.g. mls.LatLon, where desired.
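A sketch of what pymaplib/__init__.py could look like with that change; only the import style is the point here, the parsing body is hypothetical:
import maplib_sip as mls

def parse_coordinate(coord_str):
    lat_str, lon_str = coord_str.split(",")  # hypothetical parsing logic
    return mls.LatLon(float(lat_str), float(lon_str))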
Notice though that the original problem is worth an issue on the Pylint tracker (https://bitbucket.org/logilab/pylint/issues), so some investigation can be done into why it doesn't pick up the members of your SIP-exported module.

Getting python -m module to work for a module implemented in C

I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python and one obvious workaround is to add an extra file for that purpose. However I really want to keep things to my one single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m, you get the error message No code object available for <modulename>.
The -m implementation is in runpy._run_module_as_main. Its essence is:
mod_name, loader, code, fname = _get_module_details(mod_name)
<...>
exec code in run_globals
A compiled module has no "code object" associated with it, so the first statement fails with ImportError("No code object available for <module>"). You need to extend runpy - specifically, _get_module_details - to make it work for a compiled module. I suggest returning a code object constructed from the aforementioned "import mod; mod.main()":
(python 2.6.1)
  code = loader.get_code(mod_name)
  if code is None:
+     if loader.etc[2] == imp.C_EXTENSION:
+         code = compile("import %(mod)s; %(mod)s.main()" % {'mod': mod_name},
+                        "<extension loader wrapper>", "exec")
+     else:
+         raise ImportError("No code object available for %s" % mod_name)
-     raise ImportError("No code object available for %s" % mod_name)
  filename = _get_filename(loader, mod_name)
(Update: fixed an error in format string)
Now...
C:\Documents and Settings\Пользователь>python -m pythoncom
C:\Documents and Settings\Пользователь>
This still won't work for builtin modules. Again, you'll need to invent some notion of "main code unit" for them.
Update:
I've looked through the internals called from _get_module_details and can say with confidence that they don't even attempt to retrieve a code object from a module of any type other than imp.PY_SOURCE, imp.PY_COMPILED or imp.PKG_DIRECTORY. So you have to patch this machinery one way or another for -m to work. Python fails before retrieving anything from your module (it doesn't even check whether the DLL is a valid module), so you can't fix this by building the module in a special way.
Does your requirement of single distributed binary allow for the use of an egg? If so, you could package your module with a __main__.py with your calling code and the usual __init__.py...
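A minimal sketch of what that __main__.py could contain, reusing the "import mod; mod.main()" idea from above (the package and module names, and the main() entry point, are assumptions about your setup):
# __main__.py inside the package/egg; runs on `python -m yourpackage`
import yourmodule  # the compiled C extension; name is hypothetical

if __name__ == "__main__":
    yourmodule.main()  # assumes the extension exposes a main() callable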
If you're really adamant, maybe you could extend pkgutil.ImpLoader.get_code to return something for C modules (e.g., maybe a special __code__ function). To do that, I think you're going to have to actually change it in the Python source. Even then, pkgutil uses exec to execute the code block, so it would have to be Python code anyway.
TL;DR: I think you're euchred. While Python modules have code at the global level that runs at import time, C modules don't; they're mostly just a dict namespace. Thus, running a C module doesn't really make sense from a conceptual standpoint. You need some real Python code to direct the action.
I think that you need to start by making a separate file in Python and getting the -m option to work. Then, turn that Python file into a code object and incorporate it into your binary in such a way that it continues to work.
Look up setuptools on PyPI, download the .egg and take a look at the file. You will see that the first few bytes contain a Python script and these are followed by a .ZIP file bytestream. Something similar may work for you.
There's a brand new thing that may solve your problems easily. I've just learnt about it and it looks pretty decent to me: http://code.google.com/p/pts-mini-gpl/wiki/StaticPython
