I am using pytest for the first time. Mostly it is great, but sometimes it just terminates without completing, with no error message.
=========================================================================================== test session starts ============================================================================================
platform darwin -- Python 3.5.1, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Users/dacre/mike_tools/python, inifile:
collected 7 items
test_pipeline.py ......F%
While I was in the process of posting this question, I figured it out, but I am posting it here for anyone else with the same issue.
The issue arises if your code accidentally calls close() on sys.stderr or sys.stdout. This happened to me because I had a logging function that attempted to distinguish sys.stderr from a different filehandle by its .name attribute. pytest takes control of sys.stderr and remaps it to a _pytest.capture.EncodedFile object. My function therefore misidentified it and called close() on that object, which caused premature termination with no errors.
I just rewrote my code to avoid this problem; however, an alternative option would be to import modules from sys and then test for 'pytest' in modules:
from sys import modules

if 'pytest' in modules:
    # do something pytest-specific
    ...
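For reference, here is a minimal sketch of the safer pattern I ended up with (the log_to helper and its names are hypothetical, not my original code): compare the handle against sys.stderr/sys.stdout by identity instead of by .name, and never close a handle you did not open.
import sys

def log_to(handle):
    # Hypothetical logging helper. Compare by identity rather than by the
    # .name attribute, which pytest's capture wrapper may also carry.
    handle.write("log message\n")
    # Only close handles we own, never the process-wide streams
    # (or whatever pytest has swapped in for them).
    if handle not in (sys.stderr, sys.stdout):
        handle.close()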
I am in the process of deploying a Python project to PyPI; let's call it foobar. I would like to distribute it with a shell command and an IPython magic command. I use Poetry, and the relevant part of my pyproject.toml configuration is:
[tool.poetry.scripts]
foobar = 'foobar.cli:main'
foobar_magic = 'foobar.magic:load_ipython_extension'
After uploading this to TestPyPI and installing it with pip, the shell command (foobar) works as expected. However, executing %load_ext foobar_magic in a Jupyter notebook fails with:
ModuleNotFoundError: No module named 'foobar_magic'
According to the documentation:
You can put your extension modules anywhere you want, as long as they can be imported by Python’s standard import mechanism.
Under the same notebook, I have verified that !foobar and import foobar both work. How can I make foobar_magic be found too?
Moreover, although I'm not there yet, I suspect the target of the entry point is wrong too: the function I specify after the : will be called with no arguments, but load_ipython_extension() expects an IPython instance.
So I feel completely lost, and can't find any relevant documentation for deploying IPython Notebook extensions.
Edit 1. %load_ext foobar.magic unexpectedly works, and the magic %foobar does not complain about the arguments. I don't understand why, and why it is %foobar and not %foobar_magic as declared.
Edit 2. The foobar_magic = ... entry appears to be ignored or useless: removing it has no effect on %load_ext foobar.magic. I think the latter invocation might be OK, but it's a little annoying not to understand what's going on.
I finally found a workaround:
Delete the line foobar_magic = ... from my .toml.
Move the contents of foobar/magic.py to foobar/__init__.py (originally empty), guarded as follows:
import sys

if "ipykernel" in sys.modules:
    # magic stuff
Since this file is executed each time the package is imported, it is now enough to do (in a notebook):
%load_ext foobar
The guard ensures the magic stuff is executed if and only if foobar is imported from IPython.
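For concreteness, here is a minimal sketch of what such a guarded __init__.py might look like (the magic body and the registration call are illustrative assumptions, not the actual foobar code):
import sys

if "ipykernel" in sys.modules:
    def load_ipython_extension(ipython):
        # Called by %load_ext foobar with the active InteractiveShell.
        def foobar(line):
            # Illustrative magic body; the real logic goes here.
            print(f"foobar called with: {line!r}")
        ipython.register_magic_function(foobar, magic_kind="line")

This would also explain the naming observed in Edit 1: the magic takes its name from the registered function (here foobar), not from the entry-point key.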
This does not answer my original question, and I still do not fully understand how these entry points are supposed to work, but I am happy with the actual result.
I am trying to run a GUI test using pytest and pywinauto. When I run the code normally, it does not complain.
However, when I am doing it via pytest, it throws a bunch of errors:
Windows fatal exception: code 0x8001010d
Note that the code still executes without problems and the cases are marked as passed. It is just that the output is polluted with these weird Windows exceptions.
What is the reason for this? Should I be concerned?
import time

from pywinauto import Application

def test_01():
    app = Application(backend='uia')
    app.start(PATH_TO_MY_APP)
    main = app.window(title_re="MY_APP")
    main.wait('visible', timeout=8)  # error occurs here
    time.sleep(0.5)
    win_title = "MY_APP - New Project"
    assert win_title.upper() == main.texts()[0].upper()  # error occurs here
This is an effect of a change introduced with pytest 5.0.0. From the release notes:
#5440: The faulthandler standard library module is now enabled by default to help users diagnose crashes in C modules.
This functionality was provided by integrating the external pytest-faulthandler plugin into the core, so users should remove that plugin from their requirements if used.
For more information see the docs: https://docs.pytest.org/en/stable/usage.html#fault-handler
You can mute these errors as follows:
pytest -p no:faulthandler
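If you'd rather not pass the flag on every run, the same plugin disable can live in your pytest configuration (a sketch assuming a pytest.ini at the project root):
[pytest]
addopts = -p no:faulthandler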
I had the same problem with Python 3.7.7 32-bit and pytest 5.x.x. It was solved by downgrading pytest to v.4.0.0:
python -m pip install pytest==4.0
Perhaps not all Python versions are compatible with the newest pytest version(s).
My workaround for now is to install pytest==4.6.11; the problem first appears with 5.0.0. This is on a Windows 10 box.
I have this problem using pytest 6.2.5 and pytest-qt 4.0.2.
I tried np8's idea: still got a horrible crash (without message).
I tried Felix Zumstein's idea: still got a horrible crash (without message).
Per this thread it appears the issue (in 'Doze) is a crap DLL.
What's strange is that pytest-qt and the qtbot fixture seem to work very well... until I get to this one test. So I have concluded that I have done something too complicated in terms of mocking and patching for this crap 'Doze DLL to cope with.
For example, I mocked out two methods on a QMainWindow subclass which is created at the start of the test. But removing these mocks did not solve the problem.
I have so far spent about 2 hours trying to understand what specific feature of this test is so problematic. I am in fact trying to verify the functioning of a method on my main window class which "manufactures" menu items (QtWidgets.QAction) based on about 4 parameters.
At this stage I basically have no idea what this "problem feature" is, but it might be the business of inspecting and examining the returned QAction object.
I'm new to Python and I'd like to use pprofile, but I can't get it running. For example,
#pprofile --threads 0 test.py
gives me the error
bash: pprofile: Command not found.
I've tried to run pprofile as a module, as described at https://github.com/vpelletier/pprofile, using the following script:
#!/usr/bin/env sc_python3
# coding=utf-8
import time
import pprofile

def someHotSpotCallable():
    profiler = pprofile.Profile()
    with profiler:
        time.sleep(2)
        time.sleep(1)
    profiler.print_stats()
Running this script gives no output (the function is defined but never called). Changing the script in the following way
#!/usr/bin/env sc_python3
# coding=utf-8
import time
import pprofile

def someHotSpotCallable():
    profiler = pprofile.Profile()
    with profiler:
        time.sleep(2)
        time.sleep(1)
    profiler.print_stats()

print(someHotSpotCallable())
gives the output
Total duration: 3.00326s
None
How do I get the line-by-line table output shown at https://github.com/vpelletier/pprofile?
I'm using Python 3.4.3; version 2.7.3 gives the same output (only the total duration) on my system.
Do I have to install anything?
Thanks a lot!
pprofile author here.
To use pprofile as a command, you have to install it. The only packaging I have worked on so far is via PyPI. Unless you are using a dependency-gathering tool (like buildout), the easiest is probably to set up a virtualenv and install pprofile inside it:
$path_to_your_virtualenv/bin/pip install pprofile
Besides this, there is nothing else to install: pprofile only depends on python interpreter features (more on this just below).
Then you can run it like:
$path_to_your_virtualenv/bin/pprofile <args>
Another way to run pprofile would be to fetch the source and run it as a Python script rather than as a standalone command:
$your_python_interpreter $path_to_pprofile/pprofile.py <args>
Then, about the surprising output: I notice your shebang mentions "sc_python3" as the interpreter. What implementation of the Python interpreter is this? Could you have some non-standard modules loaded on interpreter start?
pprofile, in deterministic mode, depends on the interpreter triggering special events each time a line changes, each time a function is called or returns, and, for completeness, it also monitors thread creation, as the tracing function is thread-local. It looks like your interpreter does not trigger these events. A possible explanation would be that something else is competing with pprofile for them: only one tracing function can be registered at a time (via sys.settrace). For example, code coverage tools and debuggers may use this function, or the closely related sys.setprofile. For completeness: setprofile was insufficient for pprofile, as it only triggers events on function call/return, not on line changes.
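To illustrate the single-slot mechanism (a minimal sketch of the interpreter feature, not pprofile's actual code):
import sys

def trace(frame, event, arg):
    # Receives 'call', 'line', 'return' and 'exception' events.
    print(event, frame.f_lineno)
    return trace  # keep receiving 'line' events in this frame

def work():
    x = 1
    return x

sys.settrace(trace)  # replaces any previously registered trace function
work()
sys.settrace(None)   # uninstall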
You may want to try pprofile's statistical profiling mode, at the expense of accuracy (but for an extreme reduction in profiler overhead), although there pprofile has to rely on another interpreter feature: the ability to list the call stacks of all running threads, which is sadly expected to be less portable than other features of the standard sys module.
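A sketch of the statistical mode, assuming the API as shown in the project README (verify the argument names against your installed version):
import time
import pprofile

prof = pprofile.StatisticalProfile()
with prof(period=0.001, single=True):  # sample every 1 ms, current thread only
    time.sleep(2)
    time.sleep(1)
prof.print_stats()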
All of these work fine in CPython 2.x, CPython 3.x and PyPy; IronPython support has been contributed, but I haven't tested it myself.
Here's a traceback from a project I'm working on:
/usr/lib/python3/dist-packages/apport/report.py:13: PendingDeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
...
File "./mouse16.py", line 1050, in _lit_string
rangeof = range(self.idx.v, self.idx.v + result.span()[1])
AttributeError: 'NoneType' object has no attribute 'span'
Now, there's a since-fixed bug in my code that caused the traceback itself; whatever.
I'm interested in the first line: the PendingDeprecationWarning for not-my-code. I use Ubuntu (as one can tell from apport's existence in the path), which is well-known for packaging and relying on Python for many things, notably things like package management and bug reporting (apport / ubuntu-bug).
imp is indeed deprecated: "Deprecated since version 3.4: The imp package is pending deprecation in favor of importlib.". My machine runs Python 3.4.3 or better, and it takes time and a lot of work to modernise and update software completely, so this warning is understandable.
But my program doesn't go anywhere near imp, importlib or apport, so my question is: why isn't a warning originating in apport's source written to apport's logs, or at least collected on stderr by apport's parent process?
If I had to take a guess, it's because the devs decided to buffer -- but never flush nor write -- apport's stderr, and so the next time a Python child process on the system opens stderr for writing (as an error in my program did), apport's buffered stderr is written too.
This isn't supported by what I (think I) know about Unix -- why would two separate Python instances interact in this way?
Upon request, here's the best I can do for an MCVE: a list of module-level imports.
import readline
import os
import sys
import warnings
import types
import typing
Is it because I import warnings? But... I still don't touch apport.
I think this question is more on-topic and will get better answers here on SO than AskUbuntu or Unix & Linux; flag it for migration if you feel strongly otherwise but I think the mods will agree with me.
The Apport docs state:
If ... e. g. a packaged Python application raises an uncaught exception, the apport backend is automatically invoked
The copy of Python distributed by Ubuntu has been specifically modified to do this. The exception-handling has been modified/hooked, and the code that gets called when you cause an exception is triggering this warning.
so my question is, why isn't a warning deriving from apport's source written to apport's logs or certainly collected by stderr on apport's parent process?
The apport Python library isn't running in a separate process here. Sure, the actual apport process is separate, but you are interacting with it through a library that is local to your code/process.
Since this Python library, which runs inside your process, uses a deprecated module, Python is correctly warning you.
As per Andrew's answer, the apport library is automatically invoked with an uncaught exception.
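You can see the in-process hook for yourself on Ubuntu's patched Python (a sketch; the exact hook name may vary by release):
import sys

# On stock CPython this prints the built-in excepthook; on Ubuntu's
# patched Python it typically points at apport's handler (installed via
# apport_python_hook), which imports the apport package -- and hence
# imp -- inside your process when an uncaught exception occurs.
print(sys.excepthook)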
I have been playing with gevent, and I like it a lot. However, I have run into a problem: breakpoints are not being hit, and debugging doesn't work (using both Visual Studio Python Tools and Eclipse PyDev). This happens after monkey.patch_all() is called.
This is a big problem for me, and unfortunately this is a blocker for the use of gevent. I have found a few threads that seem to indicate that gevent breaks debugging, but I would imagine there is a solution for that.
Does anyone know how to make debugging and breakpoints work with gevent and monkey patching?
PyCharm IDE solves the problem. It supports gevent code debugging after you set a configuration flag: http://blog.jetbrains.com/pycharm/2012/08/gevent-debug-support/.
Unfortunately, at the moment I don't know a free tool capable of debugging gevent.
UPD: THERE IS! Now there is a community version of PyCharm.
pdb - The Python Debugger
import pdb
pdb.set_trace() # Place this where you want to drop into the python interpreter.
While debugging in VS Code,
I was getting this error:
It seems that the gevent monkey-patching is being used. Please set an
environment variable with: GEVENT_SUPPORT=True to enable gevent
support in the debugger.
To do this, in the debug launch.json settings, I set the following:
"env": {
    "GEVENT_SUPPORT": "True"
},
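In context, that entry sits inside the relevant configuration block of .vscode/launch.json (a sketch; the other fields are the usual defaults and will differ per project):
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "env": {
                "GEVENT_SUPPORT": "True"
            }
        }
    ]
}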
The simplest, most dangerous solution would be to monkey patch stdin and stdout:
import gevent.monkey
gevent.monkey.patch_all(sys=True)

def my_app():
    # ... some code here
    import pdb
    pdb.set_trace()
    # ... some more code here

my_app()
This is quite dangerous, since you risk stdin/stdout behaving in a strange way for the rest of your app's lifetime.
Instead you can use a library that I wrote: gtools.pdb. It minimizes the risk to the pdb prompt only:
def my_app():
    # ... some code here
    import gtools.pdb
    gtools.pdb.set_trace()
    # ... some more code here

my_app()
Basically, what it does is tell pdb to use a non-blocking stdin and stdout for its interactive prompt. Any running greenlets will continue to run in the background.
If you want to avoid the dependency, all you need to do is tell pdb to use a gevent-friendly stdin and stdout, with something like this:
import sys
from gevent.fileobject import FileObjectThread as File

def Pdb():
    import pdb
    return pdb.Pdb(stdin=File(sys.stdin), stdout=File(sys.stdout))

def my_app():
    # ... some code here
    Pdb().set_trace()
    # ... some more code here

my_app()
Note that with any of these solutions you lose the key-up/key-down pdb prompt facilities; see the gevent issue about patching stdin/stdout.
I currently use PyCharm 2.7.3, and I too was having issues with gevent 0.13.8 breaking debugging. However, when I updated to gevent 1.0rc3, I found I could debug properly again.
Sidenote:
I only just now learned that JetBrains had a workaround via the config flag. I was getting around the problem, when I needed to debug, with the following hack. I honestly don't know why it worked or what the negative consequences were; I just did a little trial and error, and this happened to allow debugging to work when using grequests.
# Overrides the monkeypatch issue which causes debugging in PyDev to not work.
def patch_time():
    return

import gevent.monkey
gevent.monkey.patch_time = patch_time
import grequests