Python coverage - skip or mock input method

Context
I have a Python application that I'm unit testing. Half the application is working and I have very high test coverage.
The application requires one-time user input for installation purposes.
This means that, if you run the code, there has to be interaction with a user.
Problem
Coverage.py is a Python tool for measuring code coverage and producing coverage reports. I use coverage with this command:
coverage run application.py
Coverage runs my application, goes through my tests, and delivers a coverage report.
The problem is that this command executes my application, so I have to provide input. That's not a big deal locally, but I cannot do that on my CI server using Jenkins (or can I?).
Question
I want to run the coverage tool without user input. In my tests, the input function is mocked out. Running all my tests without coverage works fine. How can I prevent coverage from requiring user input?

You should probably have two different code paths: one for running the tests, and one for running the app:
coverage run tests.py
with tests.py importing application.py, mocking methods as necessary, then running the actual application.
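For example, a minimal tests.py sketch along these lines, mocking the built-in input call before exercising the app (the module name application and its install() function are assumptions about your code, not a definitive layout):
import unittest
from unittest import mock

import application  # the module under test; the name is assumed


class TestInstall(unittest.TestCase):
    @mock.patch("builtins.input", return_value="some answer")
    def test_install_does_not_block_on_input(self, mock_input):
        # application.install() stands in for whatever code normally
        # prompts the user during installation.
        application.install()
        mock_input.assert_called()


if __name__ == "__main__":
    unittest.main()
Running coverage run tests.py then measures application.py through the tests without ever prompting for real input.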
Or you could allow user input via command line arguments:
coverage run application.py --user=input --other="etc."
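For instance, application.py could read the one-time installation values from argparse and fall back to interactive input only when the flags are missing. A small sketch; the --user flag and the install() function are assumptions, not your actual interface:
import argparse


def install(user):
    print(f"installing for {user}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--user", default=None)
    args = parser.parse_args()
    # Prompt only when the value was not supplied on the command line.
    user = args.user if args.user is not None else input("User name: ")
    install(user)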
Finally, if there truly are portions of your app that cannot be tested or reasonably mocked (it happens; say you're calling out to a third-party exception-tracking library or service that you can't load in your tests), you can instruct coverage to ignore those lines when computing coverage by adding # pragma: no cover at the end of the statement you won't be fully testing:
my = "code"
goes = "here"
if debug:  # pragma: no cover
    call_untestable(code=True)
    this_portion(ignored_for_coverage=True)
covered_code = "yes, again!"
See more here:
http://coverage.readthedocs.io/en/coverage-4.2/excluding.html

Related

PyCharm duplicated py.test tests assertions

Everything works fine with PyCharm and pytest, except that when I have failing tests, the error output is duplicated:
One of the actual failures is shown in red and the other in white. This is really annoying, and I haven't found any way to disable this behaviour.
There is an option to disable logging via py.test, however that would disable all logging.
Note: everything works as expected if I run python -m pytest test.py.
I think that is a feature, not a bug. The top-level output is emitted while the tests are running, which allows you to review a failure before the run is complete. The second copy of the results is the summary, which effectively strips out the text that was showing test progress.
You can easily view just part of the test output by clicking on the test hierarchy:
The duplicated output can be eliminated by running pytest with the -q or --quiet parameter.
You can configure the parameter to be applied to all PyCharm pytest tests by setting it in Edit Configurations --> Templates --> Python Tests --> pytest --> Additional Arguments.
This will then apply those arguments to all new run configurations. If you have a bunch of existing test run configurations, deleting all of them and then re-generating them by running a test or tests using the gutter icon is the quickest way to reset the output.

Run single test using Python Unittest in Visual Studio Code Debug mode

I use the pylint extension in Visual Studio Code, within the Debug view (see image), to run my tests using the Python unittest library.
Whenever I run my tests, I execute my test.py file in Debug and it runs all of the tests in the entire file. I have my tests broken up logically by classes with several tests per class.
To reduce the time it takes to evaluate the result of a single test I am actively working on, is there a way to execute just that one test, rather than waiting for all the tests in the test.py file to run, while in VS Code Debug mode?
For example:
test.py
import unittest


class test_TestClass(unittest.TestCase):
    def test_Test1(self):
        x = 1
        self.assertEqual(x, 1)

    def test_Test2(self):
        y = 2
        self.assertEqual(y, 3)
If I want to only execute test_Test2() to make sure that it fails, how do I do that without running the entire file (i.e., test_Test1 and test_Test2)?
When your tests have been discovered via Python: Run All Unit Tests (or by clicking the Run Tests button in the status bar and choosing Discover Unit Tests), a code lens is provided to run individual test functions, methods, and classes either normally or under the debugger.
Clicking on Debug Test will open the debugger and run the unit test under it. Otherwise you can modify your launch.json to add a debug configuration and specify the arguments to your test runner to run just the test(s) you're interested in.
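Outside of the code lens, unittest itself can select a single test by name, which is handy for a quick terminal run or for the args list of a launch.json debug configuration. A minimal sketch, assuming the file above is saved as test.py:
# at the bottom of test.py
if __name__ == "__main__":
    # unittest.main() accepts test ids as command-line arguments, e.g.:
    #   python test.py test_TestClass.test_Test2
    unittest.main()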

Using pytest, is it possible for a unit test to know that it is being run with code coverage monitoring on?

I am currently developing some tests using python py.test / unittest that, via subprocess, invoke another python application (so that I can exercise the command line options, and confirm that the tool is installed correctly).
I would like to be able to run the tests in such a way that I can get a view of the code coverage metrics (using coverage.py) for the target application using pytest_cov. By default this does not work as the code coverage instrumentation does not apply to code invoked with subprocess.
Code Coverage of the code does work if I update the tests to directly invoke the entry class for the target application (rather than running via the command line).
Ideally I want to have a single set of code which can be run in two ways:
If code coverage monitoring is not enabled then use command line
Otherwise execute the main class of the target application.
Which leads to my question(s):
Is it possible for a python unit test to determine if it is being run with code coverage enabled?
Otherwise: is there an easy way to pass a command-line flag from the pytest invocation that can be used to set the mode within the code?
Coverage.py has a facility to automatically measure coverage in sub-processes that are spawned: http://coverage.readthedocs.io/en/latest/subprocess.html
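As that page describes, subprocess measurement needs two pieces: the COVERAGE_PROCESS_START environment variable pointing at your coverage configuration file, and a call to coverage.process_startup() very early in the child interpreter, for example from a sitecustomize.py that is importable by the spawned process. A minimal sketch, assuming a .coveragerc exists in the project root:
# sitecustomize.py (must be on the path of the spawned Python process)
import coverage

# Starts coverage in this process if COVERAGE_PROCESS_START is set in the
# environment; otherwise this call does nothing.
coverage.process_startup()
Then set COVERAGE_PROCESS_START to the path of your .coveragerc before invoking the tests.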
pytest-cov sets three environment variables when it runs the tests with coverage enabled: COV_CORE_SOURCE, COV_CORE_CONFIG and COV_CORE_DATAFILE.
So you can use a simple if-statement to verify whether the current test is being run with coverage enabled:
import os

if "COV_CORE_SOURCE" in os.environ:
    # do what you need to do when coverage is enabled
    ...
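For the second part of the question, passing a flag from the pytest invocation, pytest lets you register your own command-line option in conftest.py and read it from a fixture. A sketch under the assumption that a flag named --exercise-cli selects the subprocess mode (the flag name is made up for illustration):
# conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--exercise-cli",
        action="store_true",
        default=False,
        help="invoke the target application via the command line instead of importing it",
    )


@pytest.fixture
def exercise_cli(request):
    return request.config.getoption("--exercise-cli")
A test can then accept the exercise_cli fixture and branch between invoking the tool with subprocess and calling its entry class directly.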

Restart the process where the error has been detected

In a Django project, it is possible to create unit tests to verify what we have done so far. The principle is simple: we execute the command python3 manage.py test in the shell. When an error is detected in the program, the shell displays it and stops the process. However, the procedure has a small drawback: if we have several errors, we have to correct them and restart the whole process, which can take several minutes depending on the size of the program. Is there a way to restart the process from where the error was detected, instead of restarting the whole procedure?
EDIT :
In fact, another problem I have is retaining the test database instead of recreating it every time. How could I do such a thing?
If you want to automatically run only the failing tests, you need to use a third-party test driver like Nose or create your own. But it's not worth it, because ...
You can specify particular tests to run by supplying any number of "test labels" to ./manage.py test. Each test label can be a full Python dotted path to a package, module, TestCase subclass, or test method. For instance:
# Run just one test method
$ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak
Source: https://docs.djangoproject.com/en/1.10/topics/testing/overview/
This approach can be used to re-run only the tests that have failed.
Please note that third-party test runners will probably recreate the database every time you run the tests - even when running only the failing ones. On the other hand, the default Django test runner has the --keepdb option, which allows the database to be reused. For more details see: https://stackoverflow.com/a/37100979/267540
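For example, combining a single test label with --keepdb (the labels mirror the documentation example above):
$ ./manage.py test --keepdb animals.tests.AnimalTestCase.test_animals_can_speak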

Pyunit run tests and build report

I have a collection of tests under one file test_file.py. I can run it normally from the console like this:
python -m unittest test_file
This outputs a short traceback when a test case fails. What I need to do, exactly, is:
Run the tests periodically, let's say via crontab (I know how to do this).
Send an email report after every run. To do this I need to know whether all the tests passed and, in case some of them failed, which ones failed and what the error was, just like the normal PyUnit output.
As I said above, I know how to do the cron part and I know how to run the tests, but what do I need, or what can I do, to accomplish item 2?
Maybe a script that manually runs every test, collects the results, and then sends the email?
Thank you very much!
If you intend on building out tests in the future, you should consider Jenkins: http://jenkins-ci.org/content/about-jenkins-ci. It can run your tests on a cron schedule, report results (with an xUnit plugin) per build over time, and conditionally send out an email based on the test results.
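If you prefer the lightweight script approach suggested in the question, here is a minimal sketch; the module name test_file, the addresses, and the local SMTP server are assumptions to adapt:
# run_tests_and_report.py
import io
import smtplib
import unittest
from email.message import EmailMessage

# Load every test from test_file.py and run it, capturing the usual
# unittest output into a string buffer.
suite = unittest.defaultTestLoader.loadTestsFromName("test_file")
stream = io.StringIO()
result = unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)

status = "PASSED" if result.wasSuccessful() else "FAILED"
msg = EmailMessage()
msg["Subject"] = (f"Test run {status}: {result.testsRun} tests, "
                  f"{len(result.failures)} failures, {len(result.errors)} errors")
msg["From"] = "tests@example.com"   # assumption
msg["To"] = "you@example.com"       # assumption
msg.set_content(stream.getvalue())  # the full traceback output, as on the console

with smtplib.SMTP("localhost") as smtp:  # assumes a local SMTP relay
    smtp.send_message(msg)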
