How can I query the enable status of a plugin in nose? - python

I'm currently trying to see if a nose plugin is enabled from within my test harness. The specific thing I'm trying to do is propagate the enable status of the coverage module to subprocess executions. Essentially when --with-coverage is used, I want to execute the subprocesses under the coverage tool directly (or propagate a flag down).
Can this be done?

This is one of those cases where you need to work around nose. See Measuring subprocesses in the coverage documentation for ways to ensure that subprocesses automatically invoke coverage.
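For reference, the mechanism described there boils down to starting coverage in every Python subprocess via coverage.process_startup() and pointing those processes at your coverage configuration with the COVERAGE_PROCESS_START environment variable. A rough sketch (the file names and the my_tool.py script are just examples, not part of the original question):

# sitecustomize.py (or a .pth file) importable by the subprocess's Python --
# starts coverage in any process that has COVERAGE_PROCESS_START set.
import coverage
coverage.process_startup()

# In the test harness, launch the subprocess with the variable set;
# the .coveragerc it points at should contain "parallel = True" under [run]
# so each process writes its own data file.
import os
import subprocess

env = dict(os.environ, COVERAGE_PROCESS_START=".coveragerc")
subprocess.check_call(["python", "my_tool.py"], env=env)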

Related

Identifying which test case is running on pytest-xdist processes

I'm executing Python unit tests in parallel with pytest-forked (pytest -r sxX -n auto --cache-clear --max-worker-restart=4 --forked) and there is one test case which takes quite some time and which runs at the end while the other test runners/CPU cores are idle (presumably because only this one test case is left to complete).
I'd like to know which test case that is (to maybe run it at the beginning or disable it). Note that this is not a matter of finding the longest-running test case, as that may not be the culprit. I'm explicitly looking for a way to know which test case is assigned to which pytest runner Python process. Calling ps shows something like python -u -c import sys; exec(eval(sys.stdin.readline())) (one per CPU core on the machine), which isn't particularly helpful.
Is there a way to set the name of the test case to the process and retrieve it with system tools such as ps? I'm running those test cases on Linux, in case that's relevant.
Since pytest-xdist 2.4, there's a way to show which test case is running. It requires the additional package setproctitle.
Identifying workers from the system environment
New in version 2.4
If the setproctitle package is installed, pytest-xdist will use it to update the process title (command line) on its workers to show their current state. The titles used are [pytest-xdist running] file.py/node::id and [pytest-xdist idle], visible in standard tools like ps and top on Linux, Mac OS X and BSD systems. For Windows, please follow setproctitle’s pointer regarding the Process Explorer tool.
This is intended purely as a UX enhancement, e.g. to track down issues with long-running or CPU-intensive tests. Errors in changing the title are ignored silently. Please try not to rely on the title format or title changes in external scripts.
https://pypi.org/project/pytest-xdist/#identifying-workers-from-the-system-environment
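If you are on an older pytest-xdist that lacks this feature, a rough hand-rolled equivalent (assuming the setproctitle package is installed; the title strings here are arbitrary) is to update the process title from conftest.py hooks:

# conftest.py -- approximate what newer pytest-xdist does natively:
# stamp each worker's process title with the test it is running.
try:
    from setproctitle import setproctitle
except ImportError:  # setproctitle is optional
    setproctitle = None

def pytest_runtest_setup(item):
    if setproctitle is not None:
        setproctitle("[pytest worker] running %s" % item.nodeid)

def pytest_runtest_teardown(item):
    if setproctitle is not None:
        setproctitle("[pytest worker] idle")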
Here is a way to see which test is running while pytest-xdist is processing tests.
Link to docs: https://docs.pytest.org/en/7.1.x/reference/reference.html#_pytest.hookspec.pytest_report_teststatus
Add the following function to your conftest.py file.
# conftest.py
def pytest_report_teststatus(report):
    # Print the id of the test this report refers to.
    print(report.nodeid)
Example command to start pytest:
python -m pytest -n 3 -s --show-capture=no --disable-pytest-warnings
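If you also want to see which xdist worker a test was assigned to, pytest-xdist sets the PYTEST_XDIST_WORKER environment variable (e.g. gw0) inside each worker process, so an autouse fixture can print both together. A small sketch along the same lines (the fixture name is arbitrary):

# conftest.py
import os
import pytest

@pytest.fixture(autouse=True)
def _log_worker_and_test(request):
    # PYTEST_XDIST_WORKER is only set inside xdist worker processes.
    # The print output is visible when pytest is run with -s.
    worker = os.environ.get("PYTEST_XDIST_WORKER", "main")
    print("%s -> %s" % (worker, request.node.nodeid))
    yield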

Coverage testing with pytest-cov and subprocess.Popen

How to test an application with multiple processes, collecting coverage of them all, using Popen?
The pytest-cov documentation only covers the multiprocessing module, not subprocess.
https://pytest-cov.readthedocs.io/en/latest/subprocess-support.html
My application uses Popen to start new copies of itself. All children are SIGTERMed (which is handled so that they exit normally) and then waited for by their parents. However, the coverage reports show only some lines of execution in the first child, up to where it calls Popen (the rest shown in red), and only some lines in the grandchildren. I suspect that the coverage data files may be getting overwritten by the multiple processes. No simple test case, sorry.
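(For reference, when several processes share one data file, the usual coverage.py remedy is to let every process write its own file and combine them afterwards, which pytest-cov does automatically. A sketch of the relevant configuration; the sigterm option only exists in newer coverage.py releases, so treat it as an assumption to check against your version:)

# .coveragerc
[run]
# Each process writes its own data file (.coverage.<host>.<pid>.<random>),
# which "coverage combine" / pytest-cov merges afterwards.
parallel = True
# Flush coverage data when a process receives SIGTERM (newer coverage.py only).
sigterm = True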

Using pytest, is it possible for a unit test to know that it is being run with code coverage monitoring on?

I am currently developing some tests using python py.test / unittest that, via subprocess, invoke another python application (so that I can exercise the command line options, and confirm that the tool is installed correctly).
I would like to be able to run the tests in such a way that I can get code coverage metrics (using coverage.py) for the target application via pytest-cov. By default this does not work because the coverage instrumentation does not apply to code invoked with subprocess.
Code coverage does work if I update the tests to invoke the entry class of the target application directly (rather than running it via the command line).
Ideally I want a single set of test code which can be run in two ways:
If code coverage monitoring is not enabled, invoke the tool via the command line.
Otherwise, execute the main class of the target application directly.
Which leads to my question(s):
Is it possible for a Python unit test to determine whether it is being run with code coverage enabled?
If not: is there an easy way to pass a command-line flag from the pytest invocation that can be used to set the mode within the code?
Coverage.py has a facility to automatically measure coverage in sub-processes that are spawned: http://coverage.readthedocs.io/en/latest/subprocess.html
pytest-cov sets three environment variables when it is active: COV_CORE_SOURCE, COV_CORE_CONFIG and COV_CORE_DATAFILE.
So you can use a simple if-statement to verify whether the current test is being run with coverage enabled:
import os

if "COV_CORE_SOURCE" in os.environ:
    # do what you need to do when coverage is enabled
    pass
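Applied to the question above, the check can choose the invocation path. A sketch assuming a hypothetical mytool package whose entry point is mytool.main (neither name comes from the question):

import os
import subprocess
import sys

def run_tool(args):
    if "COV_CORE_SOURCE" in os.environ:
        # pytest-cov is active: call the entry point in-process so the
        # executed lines are credited to this instrumented process.
        from mytool import main
        return main(args)
    # No coverage: exercise the real command line.
    return subprocess.call([sys.executable, "-m", "mytool"] + list(args))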

Python coverage - skip or mock input method

Context
I have a Python application that I'm unit testing. Half the application is done and it has very high test coverage.
The application requires one-time user input for installation purposes.
This means that, if you run the code, there has to be interaction with a user.
Problem
Coverage.py is a Python tool for measuring code coverage and producing coverage reports. I use it with this command:
coverage run application.py
Coverage runs my application, goes through my tests, and delivers a coverage report.
The problem is that this command executes my application, so I have to provide input. That's not that big of a deal, but I cannot do that on my CI server using Jenkins (or can I?).
Question
I want to run the coverage tool without user input. In my tests, the input function is mocked out. Running all my tests without coverage works fine. How can I prevent coverage from requiring user input?
You should probably have 2 different code paths, one for running the tests, and one for running the app:
coverage run tests.py
with tests.py importing application.py, mocking methods as necessary, then running the actual application.
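A minimal sketch of that first approach, assuming the installation prompt lives in a function such as application.install() that calls input() (both names are hypothetical):

# tests.py -- run with: coverage run tests.py
import unittest
from unittest import mock

import application

class InstallTests(unittest.TestCase):
    def test_install_uses_mocked_input(self):
        # Replace the built-in input() so no interaction is needed.
        with mock.patch("builtins.input", return_value="alice"):
            result = application.install()
        self.assertEqual(result, "alice")

if __name__ == "__main__":
    unittest.main()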
Or you could allow user input via command line arguments:
coverage run application.py --user=input --other="etc."
Finally, if there truly are portions of your app that cannot be tested or reasonably mocked (it happens, say you're calling out into a third party exception tracking library/service that you can't load in your tests), you can instruct coverage to ignore those lines for the purposes of computing coverage, by adding # pragma: no cover at the end of the instruction that you won't be fully testing:
my = "code"
goes = "here"
if debug: # pragma: no cover
call_untestable(code=True)
this_portion(ignored_for_coverage=True)
covered_code = "yes, again!"
See more here:
http://coverage.readthedocs.io/en/coverage-4.2/excluding.html

Restart the process where the error has been detected

In a Django project, it is possible to create unit tests to verify what we have done so far. The principle is simple: we run the command python3 manage.py test in the shell, and when an error is detected the shell displays it and stops the process. However, this procedure has a drawback: if we have several errors, we have to correct one and then restart the whole run, which can take several minutes depending on the program. Is there a way to restart the process from where the error was detected instead of restarting the whole procedure?
EDIT :
In fact, another problem I have is retaining the test database instead of recreating it. How could I do such a thing?
If you want to automatically run only the failing tests, you need to use a third-party test runner like Nose or write your own. But it's probably not worth it, because ...
You can specify particular tests to run by supplying any number of "test labels" to ./manage.py test. Each test label can be a full Python dotted path to a package, module, TestCase subclass, or test method. For instance:
# Run just one test method
$ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak
Source: https://docs.djangoproject.com/en/1.10/topics/testing/overview/
This approach can be used to re-run only the tests that have failed.
Please note that third-party test runners will probably recreate the database every time you run the tests - even for only the failing ones. On the other hand, the default Django test runner has the --keepdb option, which allows the database to be reused. For more details see: https://stackoverflow.com/a/37100979/267540
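Putting the two together, re-running just the failing test while keeping the test database would look like this (the test label is the example one from the Django docs):
$ ./manage.py test --keepdb animals.tests.AnimalTestCase.test_animals_can_speak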
