Restart the process where the error has been detected - python

In a Django project, it is possible to create unit tests to verify what we have done so far. The principle is simple: we execute the command python3 manage.py test in the shell. When an error is detected in the program, the shell displays it and stops the process. However, the procedure has a drawback: if we have several errors, we have to correct them and restart the whole process, which can take several minutes depending on the program. Is there a way to restart the process where the error was detected instead of restarting the whole procedure?
EDIT :
In fact, another problem I have is keeping the database instead of recreating it. How can I do that?

If you want to automatically re-run only the failing tests, you need to use a third-party test runner like Nose or write your own. But it's probably not worth it, because ...
You can specify particular tests to run by supplying any number of
“test labels” to ./manage.py test. Each test label can be a full
Python dotted path to a package, module, TestCase subclass, or test
method. For instance:
# Run just one test method
$ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak
Source: https://docs.djangoproject.com/en/1.10/topics/testing/overview/
This approach can be used to re-run only the tests that have failed.
Please note that third-party test runners will probably recreate the database every time you run the tests - even when re-running only the failing ones. The default Django test runner, on the other hand, has the --keepdb option, which allows the database to be reused between runs. For more details see: https://stackoverflow.com/a/37100979/267540
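For example, a typical re-run cycle might look like this (a sketch; the test label is the one from the docs above, and --keepdb requires Django 1.8 or later):
# Re-run only the failing test and reuse the test database
$ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak --keepdb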

Related

Checking that PYTHONIOENCODING is always "utf8"

I know unit tests and write them daily.
They get executed during development and CI.
Now I have a test that I would like to enforce on the production system:
PYTHONIOENCODING must be "utf8"
Above I used the word "test"; what I mean is that I want to check a state. This question is not about how to do the check itself.
AFAIK the unittest framework can't help me here, since it only gets executed during development and CI.
How can I solve this in the Python world without re-inventing the wheel?
Above is only an example. There are several other things next to PYTHONIOENCODING which I would like to check.
Next use case for these checks: some days ago we had an issue on the production server. The command-line tool convert gets used, and some versions of it are broken and produce wrong results. I would like to write a simple check to ensure that the convert tool on the production server is not broken.
Straightforward approach (Checking)
Put this near the start of the code:
import os
if os.environ.get('PYTHONIOENCODING', '').lower() not in {'utf-8', 'utf8'}:
    raise EnvironmentError("Environment variable $PYTHONIOENCODING must be set to 'utf8'")
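The same pattern covers the convert example from the question: run the tool once at startup and fail fast if it is unusable. A minimal sketch, assuming ImageMagick's convert is the binary in question (deciding what counts as "broken" for your specific versions would need a more targeted probe):
import subprocess
# Hypothetical startup check: fail fast if `convert` is missing or exits non-zero
try:
    subprocess.run(['convert', '-version'], check=True, stdout=subprocess.PIPE)
except (OSError, subprocess.CalledProcessError) as exc:
    raise EnvironmentError('convert is missing or broken: {}'.format(exc))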
Alternative solution (Ensuring)
In one of the projects I code for, there's a "startup script", so instead of running python3 main.py, we run this in production:
bash main.sh
whose content is rather simple:
#!/bin/bash
export PYTHONIOENCODING=utf8
exec /usr/bin/env python3 main.py
testinfra
If you want to write and run tests against the deployment infrastructure, you can use the testinfra plugin for pytest. For example, a test for the simple requirement of validating an environment variable on the target machine could look like this:
def test_env_var(host):
    assert host.run_expect((0,), 'test "$PYTHONIOENCODING" == "utf8"')
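Invoking it could look like this (the host name is an assumption; testinfra's pytest plugin provides the --hosts option and an ssh:// backend):
# Run the infra tests against a remote machine over SSH
$ py.test --hosts=ssh://app-server test_env.py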
This infrastructure test suite can be developed in a separate project and invoked before the actual deployment takes place (for example, we invoke the infra tests right after the Docker image is built; if the tests fail, the image is not uploaded to our private image repository or deployed to prod, etc.).

Python coverage - skip or mock input method

Context
I have a Python application that I'm unit testing. Half the application is working and I have very high test coverage.
The application requires one-time user input for installation purposes.
This means that, if you run the code, there has to be interaction with a user.
Problem
Coverage.py is a Python tool that measures code coverage and produces coverage reports. I use it with this command:
coverage run application.py
Coverage runs my application, goes through my tests, and delivers a coverage report.
The problem is that the command that runs those tests also executes my application, and I have to provide input. That's not that big of a deal, but I cannot do that on my CI server using Jenkins (or can I?).
Question
I want to run the coverage tool without user input. In my tests, the input function is mocked out. Running all my tests without coverage works fine. How can I prevent coverage from requiring user input?
You should probably have two different code paths, one for running the tests and one for running the app:
coverage run tests.py
with tests.py importing application.py, mocking methods as necessary, then running the actual application.
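A minimal sketch of that tests.py (assuming application.py exposes a main() function and prompts via the built-in input(); both names are assumptions, not the asker's actual code):
# tests.py
import unittest
from unittest import mock

import application

class InstallTest(unittest.TestCase):
    def test_main_runs_without_prompting(self):
        # Replace input() so the run never blocks waiting for a user
        with mock.patch('builtins.input', return_value='yes'):
            application.main()

if __name__ == '__main__':
    unittest.main()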
Or you could allow user input via command line arguments:
coverage run application.py --user=input --other="etc."
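Inside application.py that could look roughly like this (the --user flag and the fallback prompt are illustrative assumptions):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--user')
parser.add_argument('--other')
args = parser.parse_args()

# Prompt only when the value was not supplied on the command line,
# so a CI run with --user=... never blocks
user = args.user if args.user is not None else input('User: ')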
Finally, if there truly are portions of your app that cannot be tested or reasonably mocked (it happens, say you're calling out into a third party exception tracking library/service that you can't load in your tests), you can instruct coverage to ignore those lines for the purposes of computing coverage, by adding # pragma: no cover at the end of the instruction that you won't be fully testing:
my = "code"
goes = "here"
if debug: # pragma: no cover
call_untestable(code=True)
this_portion(ignored_for_coverage=True)
covered_code = "yes, again!"
See more here:
http://coverage.readthedocs.io/en/coverage-4.2/excluding.html
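The same exclusion can also be configured once in a .coveragerc file instead of repeating the comment; note that setting exclude_lines replaces the built-in default, so the pragma pattern has to be listed again (the "if debug:" pattern is an illustrative addition that excludes all such branches):
# .coveragerc
[report]
exclude_lines =
    pragma: no cover
    if debug: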

How to run a particular test in PyCharm

I have 60 functional tests in one file. I wrote them in Notepad++ and used py.test as the test framework. Today I decided to swap Notepad++ for PyCharm. I opened my file of functional tests in PyCharm and ran the tests from PyCharm, as you can see in the picture:
Now, after confirming that I could run all of the tests, I tried to run an individual test, for example test_login_with_extantUser_using_email. Logically, I right-clicked on the test, expecting a "run test" button or something similar to appear. But no such thing appeared. In fact, it appears that there is no way to run an individual test by simply right-clicking on it.
So my question is, how can I run an individual test? Must I set up a configuration for each one in the Edit Configurations menu? That would take a very long time, considering that I have 60 tests.
I have found the problem. It's a bug in PyCharm that needs to be fixed. I will explain the bug.
Right-clicking an individual test will not display the option to run the test if the test is not a member of a class that inherits from unittest.TestCase. This is true even if you are not using unittest, as in my case, in which I am using py.test.
When I made my py.test test classes inherit from unittest.TestCase, I got the option to run tests individually when I right-clicked on a test.
I have reported the bug to PyCharm. Time will tell if they fix it.
https://youtrack.jetbrains.com/issue/PY-26754
Based on the py.test docs, you can select particular test cases to run with -k (which stands for keywords). In your case it would be pytest -k "test_login_with_extantUser_using_email" if you are running from the CLI.
If you want to run a particular test with PyCharm, you have to fill in the 'Keywords' field in the Run/Debug Configurations.
Best Regards.

Idea run/debug py.test single test not the whole suite

I'm creating a Python test suite (using py.test). I'm writing the tests in IDEA and I don't know how to debug a single test.
This is my debugger configuration. It runs the whole test suite, so I have to run all the tests before it gets to the one I'm trying to debug.
In your configuration, set:
Target to the relative path of one of your test files, e.g. testsuite/psa/test_psa_integration.py
Keywords to a keyword that identifies the test you are trying to run specifically. If tests are part of a class, Keywords should be something like: TestPsaIntegration and test_psa_integration_example
I don't use IntelliJ, but in PyCharm, you can easily debug tests without going through this tedious process of adding a Run/Debug configuration each time.
To do this with PyCharm, go to:
Preferences (or Settings) > Tools > Python Integrated Tools and set Default test runner to py.test.
Then, back in your file (i.e. test_psa_integration.py), you could just right-click anywhere within the code of a test, and select either Run 'py.test in ...' or Debug 'py.test in...' which will automatically create a new Run/Debug configuration as explained previously.
An alternative is to add --no-cov --capture=no to Additional Arguments. To make this automatic for other test files, add them to the Template part as well.

Pyunit run tests and build report

I have a collection of tests under one file test_file.py. I can run it normally from the console like this:
python -m unittest test_file
This outputs a small traceback when a test case fails. What I need to do is:
Run the tests periodically, say via crontab (I know how to do this)
Send an email report after every run. To do this, I need to know whether all the tests passed and, if some failed, which ones failed and what the errors were, just like the normal pyunit output.
As I said above, I know how to do the cron part and I know how to run the tests, but what do I need, or what can I do, to accomplish item 2?
Maybe a script that manually runs every test and collects the results and then send the email ?
Thank you very much!
If you intend on building out tests in the future, you should consider Jenkins: http://jenkins-ci.org/content/about-jenkins-ci. It can run your tests on a cron schedule, report results per build over time (with an xUnit plugin), and conditionally send out an email based on the test results.
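If you would rather keep the plain-script approach suggested in the question, the standard library is enough. A minimal sketch, assuming the tests live in test_file.py, a local SMTP server, and placeholder addresses:
# run_and_mail.py
import io
import smtplib
import unittest
from email.message import EmailMessage

# Run the suite while capturing the usual unittest output
stream = io.StringIO()
suite = unittest.defaultTestLoader.discover('.', pattern='test_file.py')
result = unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)

# Subject reflects overall status; body carries the per-test output
msg = EmailMessage()
msg['Subject'] = 'Test run: %s' % ('OK' if result.wasSuccessful() else 'FAILED')
msg['From'] = 'ci@example.com'
msg['To'] = 'team@example.com'
msg.set_content(stream.getvalue())

with smtplib.SMTP('localhost') as server:
    server.send_message(msg)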
