Here's a traceback from a project I'm working on:
/usr/lib/python3/dist-packages/apport/report.py:13: PendingDeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
...
File "./mouse16.py", line 1050, in _lit_string
rangeof = range(self.idx.v, self.idx.v + result.span()[1])
AttributeError: 'NoneType' object has no attribute 'span'
Now, there's a since-fixed bug in my code that caused the traceback itself; whatever.
I'm interested in the first line: the PendingDeprecationWarning for not-my-code. I use Ubuntu (as one can tell from apport's existence in the path), which is well-known for packaging and relying on Python for many things, notably package management and bug reporting (apport / ubuntu-bug).
imp is indeed deprecated: "Deprecated since version 3.4: The imp package is pending deprecation in favor of importlib.". My machine runs Python 3.4.3 or newer, and it takes time and a lot of work to modernise and update software completely, so this warning is understandable.
But my program doesn't go anywhere near imp, importlib or apport, so my question is, why isn't a warning originating in apport's source written to apport's logs, or at least collected by the stderr of apport's parent process?
If I had to take a guess at this, it's because the devs decided to buffer -- but never flush nor write -- apport's stderr, and so the next time a Python child process on the system opens stderr for writing (as an error in my program did), apport's buffered stderr is written too.
This isn't supported by what I (think I) know about Unix -- why would two separate Python instances interact in this way?
Upon request, here's the best I can do for an MCVE: a list of module-level imports.
import readline
import os
import sys
import warnings
import types
import typing
Is it because I import warnings? But... I still don't touch apport.
I think this question is more on-topic and will get better answers here on SO than on Ask Ubuntu or Unix & Linux; flag it for migration if you feel strongly otherwise, but I think the mods will agree with me.
The Apport docs state:
If ... e. g. a packaged Python application raises an uncaught exception, the apport backend is automatically invoked
The copy of Python distributed by Ubuntu has been specifically modified to do this. The exception-handling has been modified/hooked, and the code that gets called when you cause an exception is triggering this warning.
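As a rough illustration (not apport's actual code), a crash handler installed via sys.excepthook runs inside the failing process, so any warning raised by the handler's own imports surfaces on that process's stderr -- i.e. yours:
import sys

def crash_report_hook(exc_type, exc_value, exc_tb):
    # A real crash reporter would import its reporting machinery here;
    # if that machinery touches a deprecated module (such as imp), the
    # PendingDeprecationWarning is emitted in this process, not in some
    # separate apport process.
    sys.__excepthook__(exc_type, exc_value, exc_tb)  # fall back to the default handler

sys.excepthook = crash_report_hook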
so my question is, why isn't a warning originating in apport's source written to apport's logs, or at least collected by the stderr of apport's parent process?
The apport Python library isn't running in a separate process here. Sure, the actual apport process is separate, but you are interacting/binding to it with a library that is local to your code/process.
Since this Python library uses a deprecated module and runs inside your process, Python is correctly warning you.
As per Andrew's answer, the apport library is automatically invoked with an uncaught exception.
Related
Using PyCharm with Python 3.7. I am using queue.SimpleQueue. The code runs fine, and PyCharm is pointed at the correct interpreter and all that. But with this code:
import queue
Q = queue.SimpleQueue()
I get a warning "Cannot find reference 'SimpleQueue' in 'queue.pyi'".
I do some searching. I hit Ctrl-B on the "import queue" statement and it takes me to a file called queue.pyi in the folder helpers/typeshed/stdlib/3/ under the PyCharm installation. So apparently, instead of the queue.py file in lib/python3.7/ under the Python venv, it thinks I'm trying to import this queue.pyi file, which I didn't even know existed.
Like I said, the code runs fine, and I can simply add # noinspection PyUnresolvedReferences and the warning goes away, but then the type inferencing and code hints on the variable Q don't work.
Another fix is to instead import _queue and use _queue.SimpleQueue, because in Python 3.7, queue.SimpleQueue is implemented in C and imported from the private extension module _queue. But importing _queue seems hackish and implementation-dependent.
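For illustration, both workarounds look like this (the second relies on CPython's private _queue module, so it may break across versions):
import queue

# Workaround 1: suppress the inspection on the statement PyCharm flags.
# noinspection PyUnresolvedReferences
Q = queue.SimpleQueue()

# Workaround 2 (hackish, implementation-dependent): import the private
# C extension module directly.
import _queue
Q2 = _queue.SimpleQueue()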
Is there a way to tell PyCharm that import queue means the actual lib/python3.7/queue.py as opposed to whatever helpers/typeshed/stdlib/3/queue.pyi is?
This was fixed in PyCharm 2019.3 (https://youtrack.jetbrains.com/issue/PY-31437); could you please try updating?
Let's say my Python 3.6 script requires bar in the library foo, which it imports at the beginning:
from foo import bar
What I'd like to do is for the script to attempt the import, and give feedback to a downstream user if foo is not available on their system and that they should install it.
So far, I've managed to hack together this solution which is probably not very good:
try:
    from foo import bar
except:
    print("Need `foo` library installed")
    exit(1)
I used print() because I hope this can be a direct message to the user, but I'm not sure if that's a good idea in the context of exception handling?
Also, there are at least two more problems here:
The except clause would apply to any error that happens during import including cases where the error is something other than the absence of foo. So this probably isn't a good use of exception handling?
My script imports multiple libraries, and I have to create a chunk like the above for each one!
I briefly considered creating a for loop that goes through a list of library dependencies and import each one. E.g.:
list_of_libraries: list = ["foo", "lorem", "ipsum"]
for library in list_of_libraries:
    try:
        import library
    except:
        print("Need " + library + " library installed")
        exit(1)
However, this also looks bad to me because:
Each loop iteration imports the whole library
It would fail because I don't think import takes a string?
It doesn't really solve problem 1 above.
Am I stupidly missing something here? What's a good way to implement this? Thank you.
EDIT: There are existing answers such as this one which discusses how to list dependencies in requirements.txt and installing them with pip. However, my question is focused on solutions I can implement inside my Python script to catch missing libraries and prompting the user to install them.
Exception Clause
You can catch ModuleNotFoundError (introduced in Python 3.6) or ImportError (available in all Python versions) to restrict your except clause to import failures. ModuleNotFoundError is a subclass of ImportError, so catching ImportError will work in all versions of Python.
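You can confirm the subclass relationship directly:
# ModuleNotFoundError (Python 3.6+) is a subclass of ImportError,
# so catching ImportError covers both exceptions on every version.
print(issubclass(ModuleNotFoundError, ImportError))  # True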
Error Output
Errors in standard Linux applications are written to stderr instead of stdout. So instead of using plain print, you would do as follows:
import sys

try:
    from foo import bar
except ImportError:
    print("Need `foo` library installed", file=sys.stderr)
    exit(1)
You can test it by redirecting stdout to /dev/null. You will still see the error message:
$ python foobar.py >> /dev/null
Need `foo` library installed
Better Handling Possible?
As for a better method than writing each import individually or looping over the required libraries, I unfortunately do not know one. I have seen the try/except ImportError pattern used for compatibility between modules that were renamed from Python 2 to Python 3. So for individual libraries it seems to be normal, but I have not seen anybody mass-check imports.
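That said, if you do want to loop over a list of names, importlib.import_module accepts a string (unlike the import statement); here is a minimal sketch, with hypothetical library names:
import importlib
import sys

required = ["foo", "lorem", "ipsum"]  # hypothetical library names
missing = []
for name in required:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

if missing:
    print("Need these libraries installed: " + ", ".join(missing), file=sys.stderr)
    sys.exit(1)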
I am using pytest for the first time. Mostly it is great, but sometimes it just terminates without completing, with no error message.
=========================================================================================== test session starts ============================================================================================
platform darwin -- Python 3.5.1, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Users/dacre/mike_tools/python, inifile:
collected 7 items
test_pipeline.py ......F%
While I was in the process of posting this question, I figured it out, but I am posting it here for anyone else with the same issue.
The issue arises if your code calls close() on sys.stderr or sys.stdout by accident. This happened to me because I had a logging function that attempted to distinguish sys.stderr from a different filehandle by its .name attribute. py.test takes control of sys.stderr and remaps it to a _pytest.capture.EncodedFile object. My function therefore misidentified it and called close() on that object, which caused premature termination with no errors.
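For illustration, the buggy pattern looked roughly like this (a hypothetical reconstruction, not the exact code):
import sys

def close_unless_stderr(handle):
    # Decides whether a handle is stderr by its .name attribute. Under
    # pytest, sys.stderr is remapped to a _pytest.capture.EncodedFile
    # whose .name differs, so the check misfires and closes pytest's
    # capture object, terminating the run with no error message.
    if getattr(handle, "name", None) != "<stderr>":
        handle.close()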
I just rewrote my code to avoid this problem; however, an alternative option would be to import modules from sys and then test for 'pytest' in modules:
from sys import modules

if 'pytest' in modules:
    pass  # running under pytest: adjust behaviour here
I have been playing with Gevent, and I like it a lot. However, I have run into a problem: breakpoints are not being hit, and debugging doesn't work (using both Visual Studio Python Tools and Eclipse PyDev). This happens after monkey.patch_all() is called.
This is a big problem for me, and unfortunately this is a blocker for the use of gevent. I have found a few threads that seem to indicate that gevent breaks debugging, but I would imagine there is a solution for that.
Does anyone know how to make debugging and breakpoints work with gevent and monkey patching?
PyCharm IDE solves the problem. It supports gevent code debugging after you set a configuration flag: http://blog.jetbrains.com/pycharm/2012/08/gevent-debug-support/.
Unfortunately, at the moment I don't know a free tool capable of debugging gevent.
UPD: THERE IS! Now there is a community version of PyCharm.
pdb - The Python Debugger
import pdb
pdb.set_trace()  # Place this where you want to drop into the debugger.
While debugging in VS Code,
I was getting this error:
It seems that the gevent monkey-patching is being used. Please set an
environment variable with: GEVENT_SUPPORT=True to enable gevent
support in the debugger.
To do this, in the debug launch.json settings, I set the following:
"env": {
"GEVENT_SUPPORT": "True"
},
The simplest, most dangerous solution would be to monkey patch stdin and stdout:
import gevent.monkey
gevent.monkey.patch_all(sys=True)
def my_app():
    # ... some code here
    import pdb
    pdb.set_trace()
    # ... some more code here

my_app()
This is quite dangerous, since you risk stdin/stdout behaving in a strange way for the rest of your app's lifetime.
Instead you can use a library that I wrote: gtools.pdb. It confines the risk to the pdb prompt only:
def my_app():
    # ... some code here
    import gtools.pdb
    gtools.pdb.set_trace()
    # ... some more code here

my_app()
Basically, what it does is tell pdb to use a non-blocking stdin and stdout for
its interactive prompt. Any running greenlets will still continue to run
in the background.
If you want to avoid the dependency, all you need to do is tell pdb to use
a gevent friendly stdin and stdout with something like this:
import sys
from gevent.fileobject import FileObjectThread as File

def Pdb():
    import pdb
    return pdb.Pdb(stdin=File(sys.stdin), stdout=File(sys.stdout))

def my_app():
    # ... some code here
    Pdb().set_trace()
    # ... some more code here

my_app()
Note that with any of these solutions, you lose the key-up/key-down pdb prompt facilities; see the gevent issue about patching stdin/stdout.
I currently use PyCharm 2.7.3, and I too was having issues with gevent 0.13.8 breaking debugging. However, when I updated to gevent 1.0rc3, I found I could debug properly again.
Sidenote:
I only just now learned that JetBrains had a workaround with the config flag. Before that, when I needed to debug, I got around the problem with the following hack. I honestly don't know why it worked or what the negative consequences were; I just did a little trial and error, and this happened to allow debugging to work when using grequests.
# overrides the monkeypatch issue which causes debugging in PyDev to not work.
def patch_time():
    return

import gevent.monkey
gevent.monkey.patch_time = patch_time
import grequests
I have a simple module and a basic def. The module name is example315.py and the def is:
def right_justify(s):
    print(s)
This works fine when I import example315 and then call example315.right_justify("hello world")
If I change my def to not return anything (in fact, I can change it in any way) and then run the function again (AFTER saving my module, of course), it still does the print.
Short of exiting IDLE and starting over, I can't seem to get it to work.
Any help appreciated
The module is loaded once per session; you have to reload it when you change it.
From the Python tutorial on modules:
For efficiency reasons, each module is only imported once per
interpreter session. Therefore, if you change your modules, you must
restart the interpreter – or, if it’s just one module you want to test
interactively, use reload(), e.g. reload(modulename).
The problem you're facing is that IDLE has already imported and built its internal representation of your module. Editing the file on disk won't affect the now-imported, memory-resident version in IDLE. You should be able to get the behavior you're looking for with (in Python 3, reload lives in the importlib module):
import importlib
example315 = importlib.reload(example315)
And here's some source: Python Docs Source
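For example, an IDLE shell session might look like this (assuming example315.py is importable):
import importlib
import example315

example315.right_justify("hello world")

# ...edit and save example315.py on disk...

example315 = importlib.reload(example315)
example315.right_justify("hello world")  # now runs the edited code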