gevent monkey-patching and breakpoints

I have been playing with gevent, and I like it a lot. However, I have run into a problem: breakpoints are not being hit, and debugging doesn't work (using both Visual Studio Python Tools and Eclipse PyDev). This happens after monkey.patch_all() is called.
This is a big problem for me, and unfortunately a blocker for adopting gevent. I have found a few threads that seem to indicate that gevent breaks debugging, but I would imagine there is a solution for that.
Does anyone know how to make debugging and breakpoints work with gevent and monkey patching?

The PyCharm IDE solves the problem. It supports debugging gevent code after you set a configuration flag: http://blog.jetbrains.com/pycharm/2012/08/gevent-debug-support/.
Unfortunately, at the moment I don't know of a free tool capable of debugging gevent.
Update: there is one now! PyCharm has a free Community Edition.

pdb - The Python Debugger
import pdb
pdb.set_trace()  # place this where you want to drop into the debugger
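You can also run a whole script under the debugger from the start (the script name here is a placeholder):

python -m pdb myscript.py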

While debugging in VS Code, I was getting this error:
It seems that the gevent monkey-patching is being used. Please set an
environment variable with: GEVENT_SUPPORT=True to enable gevent
support in the debugger.
To do this, in the debug launch.json settings, I set the following:
"env": {
"GEVENT_SUPPORT": "True"
},
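For context, a complete minimal configuration might look like this (the name and program values are illustrative):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "env": {
                "GEVENT_SUPPORT": "True"
            }
        }
    ]
}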

The simplest, most dangerous solution would be to monkey patch stdin and stdout:
import gevent.monkey
gevent.monkey.patch_all(sys=True)

def my_app():
    # ... some code here
    import pdb
    pdb.set_trace()
    # ... some more code here

my_app()
This is quite dangerous, since you risk stdin/stdout behaving strangely for the rest of your app's lifetime.
Instead you can use a library that I wrote: gtools.pdb. It confines the risk to the pdb prompt only:
def my_app():
    # ... some code here
    import gtools.pdb
    gtools.pdb.set_trace()
    # ... some more code here

my_app()
Basically, what it does is tell pdb to use a non-blocking stdin and stdout for
its interactive prompt. Any running greenlets will still continue to run
in the background.
If you want to avoid the dependency, all you need to do is tell pdb to use a gevent-friendly stdin and stdout, with something like this:
import sys
from gevent.fileobject import FileObjectThread as File

def Pdb():
    import pdb
    return pdb.Pdb(stdin=File(sys.stdin), stdout=File(sys.stdout))

def my_app():
    # ... some code here
    Pdb().set_trace()
    # ... some more code here

my_app()
Note that with any of these solutions you lose the key-up/key-down pdb prompt facilities; see the gevent issue about patching stdin/stdout.

I currently use PyCharm 2.7.3, and I too was having issues with gevent 0.13.8 breaking debugging. However, when I updated to gevent 1.0rc3, I found I could debug properly again.
Side note:
I only just now learned that JetBrains had a workaround via the config flag. Until then, when I needed to debug, I was getting around the problem with the following hack. I honestly don't know why it worked or what the negative consequences were; a little trial and error showed that it allowed debugging to work when using grequests.
# Work around the monkey-patching issue that breaks debugging in PyDev:
# replace gevent.monkey.patch_time with a no-op before grequests
# triggers the patching.
def patch_time():
    return

import gevent.monkey
gevent.monkey.patch_time = patch_time

import grequests

Related

PyCharm can't find queue.SimpleQueue

Using PyCharm with Python 3.7, I am using queue.SimpleQueue. The code runs fine, and PyCharm is pointed at the correct interpreter and all that. But with this code:
import queue
Q = queue.SimpleQueue()
I get a warning "Cannot find reference 'SimpleQueue' in 'queue.pyi'".
I did some searching. Hitting Ctrl-B on the import queue statement takes me to a file called queue.pyi in the helpers/typeshed/stdlib/3/ folder under the PyCharm installation. So apparently, instead of the queue.py file in lib/python3.7/ under the Python venv, it thinks I'm trying to import this queue.pyi file, which I didn't even know existed.
Like I said, the code runs fine, and I can simply add # noinspection PyUnresolvedReferences to make the warning go away, but then type inference and code hints on the variable Q don't work.
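For reference, the suppression described above looks like this:

import queue

# noinspection PyUnresolvedReferences
Q = queue.SimpleQueue()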
Another fix is to import _queue and use _queue.SimpleQueue instead, because in Python 3.7 queue.SimpleQueue is implemented in C and imported from the extension module _queue. But importing _queue seems hackish and implementation-dependent.
Is there a way to tell PyCharm that import queue means the actual lib/python3.7/queue.py as opposed to whatever helpers/typeshed/stdlib/3/queue.pyi is?
This was fixed in PyCharm 2019.3 (https://youtrack.jetbrains.com/issue/PY-31437); could you please try updating?

Why is pprofile giving no output?

I'm new to Python and I'd like to use pprofile, but I can't get it running. For example,
#pprofile --threads 0 test.py
gives me the error
bash: pprofile: Command not found.
I've tried to run pprofile as a module, as described at https://github.com/vpelletier/pprofile, using the following script:
#!/usr/bin/env sc_python3
# coding=utf-8
import time
import pprofile

def someHotSpotCallable():
    profiler = pprofile.Profile()
    with profiler:
        time.sleep(2)
        time.sleep(1)
    profiler.print_stats()
Running this script gives no output. Changing the script in the following way
#!/usr/bin/env sc_python3
# coding=utf-8
import time
import pprofile

def someHotSpotCallable():
    profiler = pprofile.Profile()
    with profiler:
        time.sleep(2)
        time.sleep(1)
    profiler.print_stats()

print(someHotSpotCallable())
gives the output
Total duration: 3.00326s
None
How do I get the line-by-line table output shown at https://github.com/vpelletier/pprofile?
I'm using Python 3.4.3; version 2.7.3 gives the same output (only the total duration) on my system.
Do I have to install anything?
Thanks a lot!
pprofile author here.
To use pprofile as a command, you have to install it. The only packaging I have worked on so far is via PyPI. Unless you are using a dependency-gathering tool (like buildout), the easiest is probably to set up a virtualenv and install pprofile inside it:
$path_to_your_virtualenv/bin/pip install pprofile
Besides this, there is nothing else to install: pprofile only depends on Python interpreter features (more on this just below).
Then you can run it like:
$path_to_your_virtualenv/bin/pprofile <args>
Another way to run pprofile is to fetch the source and run it as a Python script rather than as a standalone command:
$your_python_interpreter $path_to_pprofile/pprofile.py <args>
Then, about the surprising output: I notice your shebang mentions sc_python3 as the interpreter. What Python implementation is this? Might it load non-standard modules on interpreter start?
pprofile, in deterministic mode, depends on the interpreter triggering special events each time a line changes, each time a function is called or returns, and, for completeness, each time a thread is created, as the tracing function is thread-local. It looks like your interpreter does not trigger these events. A possible explanation is that something else is competing with pprofile for them: only one tracing function can be registered at a time. For example, code-coverage tools and debuggers may use this function (or the closely related setprofile in the standard sys module). For completeness: setprofile was insufficient for pprofile, as it only triggers events on function call/return.
You may want to try pprofile's statistic profiling mode, at the expense of accuracy (but with an extreme reduction in profiler overhead), although there pprofile has to rely on another interpreter feature: the ability to list the call stacks of all running threads, which is sadly expected to be less portable than other features of the standard sys module.
All of these work fine in CPython 2.x, CPython 3.x, PyPy, and (it has been contributed, but I haven't tested it myself) IronPython.
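If you want to try statistic mode, usage looks roughly like this, a minimal sketch based on the pprofile README (the sampling period here is illustrative):

import time
import pprofile

def someHotSpotCallable():
    time.sleep(2)
    time.sleep(1)

prof = pprofile.StatisticalProfile()
with prof(period=0.001):  # sample the call stacks every millisecond
    someHotSpotCallable()
prof.print_stats()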

Unable to import pexpect "windows only" methods

I have recently installed the pexpect 4.0 module, as it will be quite useful for the program I am creating. I do have Windows, so I looked specifically at the exceptions for pexpect, knowing that the normal spawn and run methods aren't available to me. But I can't get the "Windows methods" that the module is supposed to offer Windows users, which are:
pexpect.popen_spawn.PopenSpawn and pexpect.fdpexpect.fdspawn.
Does anyone have any idea how I can get these methods? I am running Windows 10, Python 3.4.
Side note: I am currently trying to get winpexpect to import the spawn module from pexpect, but I am failing at that as well.
This will do it for you:
from pexpect import popen_spawn, fdpexpect
Then you can do the rest of what you needed.
Edit: this is why you did not see them; in pexpect's __init__.py, notice this:
__all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'spawnu', 'run', 'runu', 'which', 'split_command_line', '__version__', '__revision__']
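For what it's worth, a minimal PopenSpawn sketch might look like this (the command is illustrative):

import pexpect
from pexpect import popen_spawn

# PopenSpawn drives the child via subprocess.Popen, so it works on
# Windows, where pexpect.spawn is unavailable.
child = popen_spawn.PopenSpawn('python --version')
child.expect(pexpect.EOF)
print(child.before)  # everything the child printed before EOF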

Why does the Python runtime handle warnings this way?

Here's a traceback from a project I'm working on:
/usr/lib/python3/dist-packages/apport/report.py:13: PendingDeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
...
File "./mouse16.py", line 1050, in _lit_string
rangeof = range(self.idx.v, self.idx.v + result.span()[1])
AttributeError: 'NoneType' object has no attribute 'span'
Now, there's a since-fixed bug in my code that caused the traceback itself; whatever.
I'm interested in the first line: the PendingDeprecationWarning for not-my-code. I use Ubuntu (as one can tell from apport's existence in the path), which is well-known for packaging and relying on Python for many things, notably things like package management and bug reporting (apport / ubuntu-bug).
imp is indeed deprecated: "Deprecated since version 3.4: The imp package is pending deprecation in favor of importlib." My machine runs at least Python 3.4.3, and it takes time and a lot of work to modernise and update software completely, so this warning is understandable.
But my program doesn't go anywhere near imp, importlib, or apport, so my question is: why isn't a warning originating in apport's source written to apport's logs, or at least collected by stderr on apport's parent process?
If I had to guess, it's because the devs decided to buffer -- but never flush nor write -- apport's stderr, so the next time a Python child process on the system opens stderr for writing (as an error in my program did), apport's buffered stderr is written too.
This isn't supported by what I (think I) know about Unix: why would two separate Python instances interact in this way?
Upon request, here's the best I can do for an MCVE: a list of module-level imports.
import readline
import os
import sys
import warnings
import types
import typing
Is it because I import warnings? But... I still don't touch apport.
I think this question is more on-topic and will get better answers here on SO than AskUbuntu or Unix & Linux; flag it for migration if you feel strongly otherwise but I think the mods will agree with me.
The Apport docs state:
If ... e. g. a packaged Python application raises an uncaught exception, the apport backend is automatically invoked
The copy of Python distributed by Ubuntu has been specifically modified to do this: the exception handling has been hooked, and the code that gets called when you raise an uncaught exception is what triggers this warning.
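On a typical Ubuntu install, this hook is wired up at interpreter startup through a sitecustomize module; its content is roughly the following (the exact path, e.g. /etc/python3.4/sitecustomize.py, varies by release):

# install the apport exception handler if available
try:
    import apport_python_hook
except ImportError:
    pass
else:
    apport_python_hook.install()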
so my question is: why isn't a warning originating in apport's source written to apport's logs, or at least collected by stderr on apport's parent process?
The apport Python library isn't running in a separate process here. Sure, the actual apport process is separate, but you are interacting with it through a library that is local to your code/process.
Since that library runs inside your process and imports a deprecated module, Python is correctly warning you.
As per Andrew's answer, the apport library is automatically invoked on an uncaught exception.

requests segfaults when embedded as wsgi, but not standalone

import sys
import os
import logging
# need to add environment to apache's path for includes
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../../")))
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../")))
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), ".")))
# likewise add cherrypy/other modules
sys.path.append(os.path.abspath("/Library/Python/2.7/site-packages/"))
import requests
response = requests.get('http://www.google.com').text
Using Python 2.7.6, requests 2.7.0, and Apache under MAMP 3.0, the above code crashes. A quick look using winpdb suggests that actually trying to open an internet connection is what crashes the Python process. The Apache log is not very helpful, saying only
[notice] child pid 18879 exit signal Segmentation fault (11)
While my full code uses CherryPy 3.8 to provide the WSGI portion of the framework, I believe it is irrelevant to the problem at hand.
Is this a known problem with requests under Apache, or is it something else? Python crashing without any diagnostics makes it hard to even think of a way to start solving this issue.
EDIT: Using pdb, I found that the program segfaults on line 1421 of urllib.py in the Python standard library:
proxy_settings = _get_proxy_settings()
where _get_proxy_settings comes from _scproxy.
I still have no idea how to fix this.
It seems that this is a platform-specific bug in my version of macOS.
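If the crash does originate in _scproxy's system proxy lookup, a commonly suggested workaround (an assumption here, not something verified in this thread) is to set the no_proxy environment variable so urllib never queries the macOS proxy configuration:

import os

# With no_proxy set, urllib takes the environment-based proxy path and
# never calls _scproxy._get_proxy_settings(), which is what crashes here.
os.environ['no_proxy'] = '*'

import requests
response = requests.get('http://www.google.com').text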
