PyDev running pytest unit test with module-shared fixture fails - python

I have a problem running pytest unit tests with PyDev. I am trying to run a unit test with a module-shared fixture and a finalizer which should be executed after the last test.
But when I run the unit test in PyDev, it does not reuse the same fixture instance; instead it creates two different instances. The example runs fine from the console, or when started from a script within PyDev.
I'm using Python 2.7.3, pytest-2.3.4, PyDev 2.7.3.2013031601 and Eclipse 4.2 on Win7.
I tried the example from http://pytest.org/latest/fixture.html
The output from PyDev is:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.4
__________________________________ test_ehlo ___________________________________
smtp = <smtplib.SMTP instance at 0x027F9080>
__________________________________ test_noop ___________________________________
smtp = <smtplib.SMTP instance at 0x027FF3C8>
The console output is:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.4
__________________________________ test_ehlo ___________________________________
smtp = <smtplib.SMTP instance at 0x01E51288>
__________________________________ test_noop ___________________________________
smtp = <smtplib.SMTP instance at 0x01E51288>
This is the expected behaviour. What am I doing wrong?
The code used, in conftest.py:
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp():
    return smtplib.SMTP("merlinux.eu")
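For completeness, the finalizer variant from the linked pytest docs page looks roughly like this (a sketch; request.addfinalizer is standard pytest API, the print is only illustrative):

import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp(request):
    smtp = smtplib.SMTP("merlinux.eu")
    def fin():
        print("finalizing %s" % smtp)  # runs once, after the last test in the module
        smtp.close()
    request.addfinalizer(fin)
    return smtp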
The test code in test_smtplib.py:
# content of test_module.py
def test_ehlo(smtp):
    response = smtp.ehlo()
    assert response[0] == 250
    assert "merlinux" in response[1]
    assert 0  # for demo purposes

def test_noop(smtp):
    response = smtp.noop()
    assert response[0] == 250
    assert 0  # for demo purposes
Running the test from a script with:
import pytest, os
os.chdir("[path_to_tests]/tests")  # your file location
pytest.main(['-s', 'test_smtplib.py'])
Any suggestions? Thanks a lot for your help!

I don't have Eclipse, but I've been looking through the source code for PyDev and pytest. pytest doesn't use multiprocessing by default, but it will if you have xdist installed. Perhaps you have it? Or perhaps Eclipse has installed it?
If you still have the system available, can you try setting the option below in your pytest parameters? It simply tells pytest to use one process when xdist is in use, as documented here.
-n=1 or perhaps it will prefer -n 1
If that doesn't work, then this also shouldn't work, but could you try it anyway? Use the option below in the pytest options as before (not in the PyDev test runner options) to enable module-level splitting. It's a PyDev test-runner option, so I would expect it to cause an error, but maybe some other code that keys off the option will make use of it.
--split_jobs=module or again perhaps --split_jobs module
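If you are launching through the script shown in the question, the same flag can be passed via pytest.main (a sketch, assuming pytest-xdist is actually installed):

import pytest

# Force xdist (if present) down to a single worker process.
pytest.main(['-s', '-n', '1', 'test_smtplib.py'])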

Seems like this is a long-standing bug on the PyDev side. I just fixed it and submitted a pull request to PyDev, see https://github.com/fabioz/Pydev/pull/120. In the meantime, you could probably take the small change from that pull request and apply it to your installed version of PyDev, enabling proper PyDev/pytest runs with scoping.

Related

No tests collected by pytest

I have a problem importing pytest while writing Python code: "import pytest" is grayed out.
Python is 3.8.3, PyCharm Community Edition.
pytest 5.4.2 is successfully installed and can be seen in the project interpreter in PyCharm, and I can see the installed path of pytest in the Python directory.
When running the py.test command from the console, it starts the test run, shows "collected 0 items" and ends with "no tests ran in 0.05s".
If anyone has run into similar problems with some other packages, kindly let me know.
TIA...
You simply run pytest from the command line; there is no need to import pytest into a script. Take this Python script as an example:
def inc(x):
    return x + 1

def test_answer():
    assert inc(3) == 4
To run pytest on it, from the terminal (after changing to the right directory):
$ pytest
You will then see the test outcome in the command line, as pytest automatically picks up Python scripts named test_*.py, where * is any name, e.g. test_increment.py. To have the tests in your own script run, name the file with a test_ prefix as well.
Running pytest in the terminal is one option. In addition, PyCharm has an integrated test runner for automatic discovery and collection of tests. You can use the hotkey Ctrl+Shift+F10 to run the tests in the current file directly.
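That said, if you ever do want to drive pytest from Python (for example from a helper script), pytest exposes a programmatic entry point; a minimal sketch, where "tests" is a hypothetical test directory:

import sys
import pytest

# Run pytest and propagate its exit code, so failures are visible to callers/CI.
sys.exit(pytest.main(["-v", "tests"]))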

Pylint with sniffer not using updated source files

I am using Pylint and Nose along with sniffer to lint and test my Python application on every save. This is sniffer, in case you are unaware: https://pypi.python.org/pypi/sniffer
Here is the runnable responsible for running nosetests and pylint, from the scent.py file for sniffer:
from sniffer.api import *  # import the really small API
from pylint import lint
from subprocess import call
import os
import nose

@runnable
def zimmer_tests(*args):
    command = "nosetests some_module/test --nologcapture --with-coverage --cover-config-file=zimmer_coverage_config"
    lint.Run(['some_module/app'], exit=False)
    return call(command, shell=True) == 0
Here, lint.Run() first runs pylint on my app; then nosetests is executed on the app using call().
The problem is that after I save a file, nosetests runs on the updated version of the file, but Pylint uses the same old version. I have to restart sniffer every time for Pylint to pick up the new version of the files.
I assume this is not a problem with sniffer's configuration, since nosetests is able to get the new version of the file every time. Still, I am not sure.
My pylintrc file is almost the same as the one generated by pylint --generate-rcfile > .pylintrc, with some minor application-specific tweaks.
As pointed out by @KlausD. in the comments, lint.Run() was using files from a cache, as it was being called from a still-running process. Calling pylint from the command line worked as expected.
Here is the modified code
@runnable
def zimmer_tests(*args):
    command_nose = "nosetests some_module/test --nologcapture --with-coverage --cover-config-file=zimmer_coverage_config"
    command_lint = "pylint some_module"
    call(command_lint, shell=True)
    return call(command_nose, shell=True) == 0
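Note that pylint's exit status is a bit mask (1 = fatal, 2 = error, 4 = warning, 8 = refactor, 16 = convention, 32 = usage error), so a stricter variant could also fail the run on lint errors. A sketch using the same imports as the scent.py above; zimmer_tests_strict is a hypothetical name:

@runnable
def zimmer_tests_strict(*args):
    lint_status = call("pylint some_module", shell=True)
    nose_ok = call("nosetests some_module/test --nologcapture", shell=True) == 0
    # Fail if nose failed, or if pylint reported fatal (bit 1) or error (bit 2) messages.
    return nose_ok and (lint_status & 3) == 0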

How to debug unittests with pudb debugger?

I am having some trouble trying to debug some unit tests through the pudb debugger.
The tests run fine with python, but I have had no luck running them with pudb.
I isolated the problem, getting to the following sample code:
class Math:
    def pow(self, x, y):
        return x ** y

import unittest

class MathTest(unittest.TestCase):
    def testPow23(self):
        self.assertEquals(8, Math().pow(2, 3))

    def testPow24(self):
        self.assertEquals(16, Math().pow(2, 4))

if __name__ == '__main__':
    unittest.main()
The tests run fine:
$ python amodule.py
.
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
But if running through pudb, it gives me the output:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I've tried running with pudb amodule.py and also with python -m pudb.run amodule.py, but it makes no difference: no tests are run either way.
Should I be doing something different to debug unit tests using pudb?
Try placing a breakpoint on a useful line in your code:
from pudb import set_trace; set_trace()
The ways you tried to launch it might interfere with test discovery and/or not run your script with a __name__ of '__main__'.
Since this is a popular question, I feel I should also mention that most test-running tools require you to pass a switch to prevent them from capturing standard output and input (usually it's -s).
So remember to run pytest -s when using pytest, nosetests -s for Nose, or python manage.py test -s for Django tests, or check the documentation for your test-running tool.
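Applied to the sample in the question, the breakpoint sits inside the test body (a sketch; the set_trace line is the only addition):

import unittest

class Math:
    def pow(self, x, y):
        return x ** y

class MathTest(unittest.TestCase):
    def testPow23(self):
        from pudb import set_trace; set_trace()  # pudb opens here when the test runs
        self.assertEquals(8, Math().pow(2, 3))

if __name__ == '__main__':
    unittest.main()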
You can set a breakpoint even more easily with:
import pudb; pu.db

Debugging pytest post mortem exceptions in pycharm/pydev

I would like to use the built-in pytest runner of PyCharm together with the debugger, without pre-configuring breakpoints.
The problem is that exceptions in my test are caught by pytest, so PyCharm's post-mortem debugger cannot handle the exception.
I know using a breakpoint works but I would prefer not to run my test twice.
I found a way to do this with unittest; I would like to know if something like this exists for pytest:
Is there a way to catch unittest exceptions with PyCharm?
Are you using the pytest-pycharm plugin? It works for me. Create a virtualenv, pip install pytest pytest-pycharm, select this virtualenv in PyCharm under Edit Configuration -> Python Interpreter, and then run with Debug ... Example:
import pytest

def test_me():
    assert None

if __name__ == '__main__':
    pytest.main(args=[__file__])
The PyCharm debugger stops at the assert None point, with (<class '_pytest.assertion.reinterpret.AssertionError'>, AssertionError(u'assert None',), None)
EDIT
Set Preferences > Tools > Python Integration Tools > Default test runner to py.test. Then Run > Debug 'py.test in test_me.py'

PyCharm, Django: zero code coverage

PyCharm has a "Run with Coverage" action for Django test targets. This runs the tests, but shows zero test coverage (0% files, not covered in the project pane, and all red in the editor). Checking or unchecking "Use bundled coverage.py" makes no difference.
Running the same tests from the CLI gives the expected results:
$ coverage --version
Coverage.py, version 3.5.1. http://nedbatchelder.com/code/coverage
$ coverage run ./manage.py test blackbox
Creating test database for alias 'default'...
....
----------------------------------------------------------------------
Ran 4 tests in 0.002s
OK
Destroying test database for alias 'default'...
$ coverage report
Name                      Stmts   Miss  Cover
---------------------------------------------
__init__                      0      0   100%
blackbox/__init__             0      0   100%
blackbox/models               5      0   100%
blackbox/rules/__init__       1      0   100%
blackbox/rules/board         62     19    69%
blackbox/tests               49      6    88%
manage                       11      4    64%
settings                     24      0   100%
---------------------------------------------
TOTAL                       152     29    81%
What could cause this?
If you access your project via any symlink in the path, the coverage display will fail.
Try opening the same project through the real path, and you will get the correct behavior.
https://youtrack.jetbrains.com/issue/PY-17616
PS: Refreshing this old question, as the bug still has not been fixed.
I had a similar issue using the PyCharm-bundled coverage.py.
The tests were running fine, but the coverage results were not loaded: "0%" or "not covered" everywhere.
There was, however, an error logged in the PyCharm console, following the output of the tests, that was coverage.py-related:
/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python
"/Applications/PyCharm 2.5 EAP.app/helpers/run_coverage.py"
run "--omit=/Applications/PyCharm 2.5 EAP.app/helpers" bin/test
Creating test database for alias 'default'...
................................
----------------------------------------------------------------------
Ran xx tests in xxs
OK
No source for code: '/path/file.py' (<- error)
Process finished with exit code 0
My solution was to run the bundled coverage.py with the option to ignore errors: "-i".
I edited the bundled run_coverage.py file (you can see its location in the console output) and added the "-i" option in the last line, changing:
main(["xml", "-o", coverage_file + ".xml"])
to:
main(["xml", "-i", "-o", coverage_file + ".xml"])
This worked for me: the error is ignored and all the coverage data is now loaded in the UI.
If using "-i" solves the issue on your side, a better solution would be to fix the underlying errors, but until then, you'll at least see coverage results.
I've also been trying to solve this issue on Ubuntu.
So far I have tried both the apt-get Python and the Enthought Canopy stack, with no success. On Windows, however, it does work (using Canopy).
I've used the following code:
# in a.py
class A(object):
    def p(self, a):
        return a

# in test_a.py
from unittest import TestCase, main
from a import A

class TestA(TestCase):
    def test_p(self):
        inst = A()
        val = inst.p("a")
        self.assertEqual("a", val)

if __name__ == "__main__":
    main()
I had a similar issue. I ended up generating XML data using nosetests --cover-xml, but you can also generate XML from a previous coverage.py run with coverage xml.
That report can then be conveniently loaded in PyCharm/IDEA via Analyze -> Show Coverage Data… -> the + button, and selecting the XML file.
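If you prefer to stay in Python, coverage.py also exposes an API that can produce the same XML report; a sketch, assuming a recent coverage.py (older 3.x releases spell the class coverage.coverage) and the test_a module from the answer above:

import unittest
import coverage

cov = coverage.Coverage()  # coverage.coverage() on old 3.x releases
cov.start()

# Run the test suite under measurement.
unittest.main(module="test_a", exit=False)

cov.stop()
cov.save()
# Write coverage.xml, then load it via Analyze -> Show Coverage Data...
cov.xml_report(outfile="coverage.xml")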
