Solving "Empty suite" in Selenium WebDriver - Python

Here's my code, but when I run it, PyCharm displays this:
"C:\Program Files\Python310\python.exe" "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1\plugins\python-ce\helpers\pycharm\_jb_unittest_runner.py" --target Logintests
Testing started at 1:55 PM ...
Launching unittests with arguments python -m unittest Logintests in E:\$LEARN$\$QA$\Automation\page-object-python-selenium-CMSpanel\Tests
Ran 0 tests in 0.000s
OK
Process finished with exit code 0
Empty suite
Besides searching Stack Overflow, I found some suggested fixes, for example:
The unittest library has a naming convention: every test method must start with 'test'; this is what tells the test runner which methods represent tests.
You have to rename your 'Test' method to 'test'.
but I didn't understand how to apply this to my code or my directory layout.
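Concretely, the rename looks like this; a minimal sketch, assuming a Selenium login test class (the class name LoginTests and the URL are hypothetical, since the original test file isn't shown):

import unittest
from selenium import webdriver

class LoginTests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()

    # A method named "Login" or "TestLogin" is never discovered,
    # which is exactly what produces "Ran 0 tests" / "Empty suite".
    def test_login(self):  # names must start with "test"
        self.driver.get("https://example.com/login")  # hypothetical URL

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()

The same convention applies at the file level when using discovery: python -m unittest discover looks for files matching test*.py by default.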

Related

No tests collected by pytest

I have a problem importing pytest while writing Python code: "import pytest" is grayed out.
Python is 3.8.3, PyCharm Community Edition.
pytest 5.4.2 is successfully installed and can be seen in the project interpreter in PyCharm, and I can see pytest's installed path in the Python directory.
When running the py.test command from the console, the test run starts, shows "collected 0 items", and ends with "NO TESTS RAN IN 0.05s".
If anyone has run into similar problems with other packages, kindly let me know.
TIA...
You simply run pytest from the command line; there is no need to import pytest into a script. Take this Python script as an example:
def inc(x):
    return x + 1

def test_answer():
    assert inc(3) == 4
To run pytest on it, from the terminal (after changing to the right directory):
$ pytest
You will then see the test outcome on the command line, as pytest automatically picks up Python scripts named test_*.py, where * is any name, e.g. test_increment.py. To have a test from your own script run, name the script with a test_ prefix as well.
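pytest's discovery conventions also cover classes and functions, not just filenames. A minimal sketch (the module name test_math_ops and its contents are hypothetical):

# test_math_ops.py -- the file must match test_*.py (or *_test.py)

def multiply(x, y):
    return x * y

class TestMultiply:               # classes must be named Test* and have no __init__
    def test_identity(self):      # methods must be named test_*
        assert multiply(5, 1) == 5

def test_zero():                  # plain functions named test_* are collected too
    assert multiply(5, 0) == 0

If a file, class, or function doesn't follow these patterns, pytest silently skips it, which is the usual cause of "collected 0 items".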
Running pytest in the terminal is one option. In addition, PyCharm has an integrated test runner with automatic discovery and collection of tests; you can use the hotkey Ctrl+Shift+F10 to run the tests in the current file directly.

Pylint with sniffer not using updated source files

I am using Pylint and Nose along with sniffer to lint and test my Python application on every save. This is sniffer, in case you are unaware: https://pypi.python.org/pypi/sniffer
Here is the runnable responsible for running nosetests and pylint, from the scent.py file for sniffer:
from sniffer.api import *  # import the really small API
from pylint import lint
from subprocess import call
import os
import nose

@runnable
def zimmer_tests(*args):
    command = "nosetests some_module/test --nologcapture --with-coverage --cover-config-file=zimmer_coverage_config"
    lint.Run(['some_module/app'], exit=False)
    return call(command, shell=True) == 0
Here, lint.Run() first runs Pylint on the app; then nosetests is executed on the app using call().
The problem is that after I save a file, nosetests runs on the updated version of the file, but Pylint keeps using the old version. I have to restart sniffer every time for Pylint to pick up the new version of the files.
I assume this is not a problem with sniffer's configuration, since nosetests gets the new version of the file every time; still, I am not sure.
My pylintrc file is almost the same as the one produced by pylint --generate-rcfile > .pylintrc, with some minor application-specific tweaks.
As pointed out by @KlausD. in the comments, lint.Run() was using files from a cache because it was being called from a still-running process. Calling pylint from the command line worked as expected.
Here is the modified code:
@runnable
def zimmer_tests(*args):
    command_nose = "nosetests some_module/test --nologcapture --with-coverage --cover-config-file=zimmer_coverage_config"
    command_lint = "pylint some_module"
    call(command_lint, shell=True)
    return call(command_nose, shell=True) == 0
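As written, the lint result is ignored and only nosetests decides the outcome. If you also want lint failures to fail the run, you can check both exit codes; a sketch under the same setup (note that pylint exits non-zero whenever it emits any message, so you may want a looser threshold):

@runnable
def zimmer_tests(*args):
    command_nose = "nosetests some_module/test --nologcapture --with-coverage --cover-config-file=zimmer_coverage_config"
    command_lint = "pylint some_module"
    # call() returns the subprocess exit code; require both to be 0
    lint_ok = call(command_lint, shell=True) == 0
    nose_ok = call(command_nose, shell=True) == 0
    return lint_ok and nose_ok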

How to debug unittests with pudb debugger?

I am having trouble debugging some unit tests through the pudb debugger.
The tests run fine with python, but I had no luck running them with pudb.
I isolated the problem, getting to the following sample code:
class Math:
    def pow(self, x, y):
        return x ** y

import unittest

class MathTest(unittest.TestCase):
    def testPow23(self):
        self.assertEquals(8, Math().pow(2, 3))

    def testPow24(self):
        self.assertEquals(16, Math().pow(2, 4))

if __name__ == '__main__':
    unittest.main()
The tests run fine:
$ python amodule.py
.
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
But if running through pudb, it gives me the output:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I've tried running with pudb amodule.py and also with python -m pudb.run amodule.py, but it makes no difference: no tests are run either way.
Should I be doing something different to debug unit tests using pudb?
Try placing a breakpoint on a useful line in your code:
from pudb import set_trace; set_trace()
The ways you tried to launch it might interfere with test discovery and/or not run your script with a __name__ of '__main__'.
Since this is a popular question, I feel I should also mention that most test running tools will require you to pass in a switch to prevent it from capturing the standard output and input (usually it's -s).
So, remember to run pytest -s when using Pytest, or nosetests -s for Nose, python manage.py test -s for Django tests, or check the documentation for your test running tool.
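For the Math example above, that means dropping the breakpoint line inside the test you want to step through and then running the file directly; a sketch:

class MathTest(unittest.TestCase):
    def testPow23(self):
        from pudb import set_trace; set_trace()  # pudb opens here when the test runs
        self.assertEquals(8, Math().pow(2, 3))

Then run python amodule.py (or pytest -s amodule.py) and pudb takes over at the breakpoint.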
You can set a breakpoint even more easily with:
import pudb; pu.db

PyCharm, Django: zero code coverage

PyCharm has a "Run with Coverage" action for Django test targets. This runs the tests, but shows zero test coverage (0% files, not covered in the project pane, and all red in the editor). Checking or unchecking "Use bundled coverage.py" makes no difference.
Running the same tests from the CLI gives the expected results:
$ coverage --version
Coverage.py, version 3.5.1. http://nedbatchelder.com/code/coverage
$ coverage run ./manage.py test blackbox
Creating test database for alias 'default'...
....
----------------------------------------------------------------------
Ran 4 tests in 0.002s
OK
Destroying test database for alias 'default'...
$ coverage report
Name                      Stmts   Miss  Cover
---------------------------------------------
__init__                      0      0   100%
blackbox/__init__             0      0   100%
blackbox/models               5      0   100%
blackbox/rules/__init__       1      0   100%
blackbox/rules/board         62     19    69%
blackbox/tests               49      6    88%
manage                       11      4    64%
settings                     24      0   100%
---------------------------------------------
TOTAL                       152     29    81%
What could cause this?
If you access your project via a symlink anywhere in its path, the coverage display will fail.
Open the same project through its real path and you will get the correct behavior.
https://youtrack.jetbrains.com/issue/PY-17616
PS: Refreshing old question, as bug still has not been fixed.
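To find the symlink-free path to open instead, you can resolve it from a shell; a sketch using the standard realpath utility (the paths shown are hypothetical):

$ realpath ~/projects/myproject        # a symlink
/home/user/real/location/myproject     # open this path in PyCharm instead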
I had a similar issue using the PyCharm-bundled coverage.py.
The tests were running fine, but the coverage results were not loaded: "0%" or "not covered" everywhere.
There was, however, a coverage.py-related error logged in the PyCharm console, following the output of the tests:
/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python
"/Applications/PyCharm 2.5 EAP.app/helpers/run_coverage.py"
run "--omit=/Applications/PyCharm 2.5 EAP.app/helpers" bin/test
Creating test database for alias 'default'...
................................
----------------------------------------------------------------------
Ran xx tests in xxs
OK
No source for code: '/path/file.py' (<- error)
Process finished with exit code 0
My solution was to run the bundled coverage.py with the option to ignore errors: -i.
I edited the bundled run_coverage.py file (you can see its location in the console output) and added the -i option to the last line, changing:
main(["xml", "-o", coverage_file + ".xml"])
to:
main(["xml", "-i", "-o", coverage_file + ".xml"])
This worked for me: the error is ignored and all the coverage data is now loaded in the UI.
If -i solves the issue on your side, the better long-term fix is to resolve the underlying errors, but until then you'll at least see coverage results.
I've also been trying to solve this issue on Ubuntu.
So far I've tried both the apt-get Python and the Enthought Canopy stack, with no success. On Windows, however, it does work (using Canopy).
I've used the following code:
# in a.py
class A(object):
    def p(self, a):
        return a

# in test_a.py
from unittest import TestCase, main
from a import A

class TestA(TestCase):
    def test_p(self):
        inst = A()
        val = inst.p("a")
        self.assertEqual("a", val)

if __name__ == "__main__":
    main()
I had a similar issue. I ended up generating XML data using nosetests --cover-xml, but you can also generate XML from a previous coverage.py run with coverage xml.
That report can then be conveniently loaded in PyCharm/IDEA via Analyze -> Show Coverage Data… -> the + button, selecting the XML file.
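For reference, a typical command sequence for the coverage.py route (a sketch; the app label blackbox comes from the question above, and the -o filename is arbitrary):

$ coverage run ./manage.py test blackbox   # run the tests under coverage
$ coverage xml -o coverage.xml             # write the XML report to load in PyCharm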

Python: Conditional variables based on whether nosetest is running

I'm running nosetests with a setup function that needs to load a different database than the production database. The ORM I'm using is peewee, which requires that the database for a model be set in its definition.
So I need to set a variable conditionally, but I don't know what condition to use to check whether nosetests is running the file.
I read on Stack Overflow that you can check for nose in sys.modules, but I was wondering if there is a more exact way to check whether nose is running.
Perhaps examining sys.argv[0] to see what command is running?
Examining sys.argv might work, but you can execute nose either with nosetests or with python -m nose, which will obviously give you different results.
I think the more robust way is to inspect the stack and see whether the code is being called through a package called nose.
Example code:
import inspect
import unittest

def is_called_by_nose():
    stack = inspect.stack()
    return any(x[0].f_globals['__name__'].startswith('nose.') for x in stack)

class TestFoo(unittest.TestCase):
    def test_foo(self):
        self.assertTrue(is_called_by_nose())
Example usage:
$ python -m nose test_caller
.
----------------------------------------------------------------------
Ran 1 test in 0.009s
OK
$ nosetests test_caller
.
----------------------------------------------------------------------
Ran 1 test in 0.009s
OK
$ python -m unittest test_caller
F
======================================================================
FAIL: test_foo (test_caller.TestFoo)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_caller.py", line 14, in test_foo
self.assertTrue(is_called_by_nose())
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.004s
FAILED (failures=1)
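Applied to the peewee setup from the question, the check can pick the database at import time; a minimal sketch (the model and database names are hypothetical):

import peewee

# Use an in-memory SQLite database under nose, the real one otherwise.
db = (peewee.SqliteDatabase(':memory:') if is_called_by_nose()
      else peewee.SqliteDatabase('production.db'))

class BaseModel(peewee.Model):
    class Meta:
        database = db  # peewee binds models to a database via Meta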
