How to run the python unittest N number of times

I have a python unittest like the one below, and I want to run this whole test N number of times:

class Test(TestCase):

    def test_0(self):
        .........
        .........
        .........

Test.Run(name=__name__)

Any suggestions?

You can use parameterized tests. There are different modules to do that. I use nose to run my unittests (it is more powerful than the default unittest module), and there is a package called nose-parameterized that allows you to write a factory test and run it a number of times with different values for the variables you want.
If you don't want to use nose, there are several other options for running parameterized tests.
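For example, a minimal sketch with the standalone parameterized package (an assumption here: it is the maintained successor of nose-parameterized and must be installed separately; it works with plain unittest as well):

import unittest

from parameterized import parameterized


class Test(unittest.TestCase):
    @parameterized.expand([(i,) for i in range(5)])
    def test_0(self, i):
        # each tuple becomes its own generated test case
        self.assertIsInstance(i, int)


if __name__ == '__main__':
    unittest.main()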
Alternatively, you can execute any number of test conditions inside a single test (as soon as one assertion fails, the test reports an error). In your particular case this may make more sense than parameterized tests, because in reality it is only one test; it just needs a large number of runs of the function to reach some level of confidence that it is working properly. So you can do:
import random
import unittest


class Test(unittest.TestCase):
    def test_myfunc(self):
        for _ in range(100):
            value = random.random()
            # placeholder assertion -- myfunc and expected stand in for your real check
            self.assertEqual(myfunc(value), expected(value))


if __name__ == '__main__':
    unittest.main()

Why? Because the test_0 method contains a random element: each time it runs it selects a random configuration and tests against that configuration, so I am not testing the same thing multiple times.
Randomness in a test makes it non-reproducible. One day you might get 1 failure out of 100, and when you run it again, it’s already gone.
Use a modern testing tool to parametrize your test with a sequential number, then use random.seed to have a random but reproducible test case for each number in a sequence.
portusato suggests nose, but pytest is a more modern and popular tool:
import random, pytest


@pytest.mark.parametrize('i', range(100))
def test_random(i):
    orig_state = random.getstate()
    try:
        random.seed(i)
        data = generate_random_data()
        assert my_algorithm(data) == works
    finally:
        random.setstate(orig_state)
@pytest.mark.parametrize "explodes" your single test_random into 100 individual tests, test_random[0] through test_random[99]:
$ pytest -q test.py
....................................................................................................
100 passed in 0.14 seconds
Each of these tests generates different, random, but reproducible input data to your algorithm. If test_random[56] fails, it will fail every time, so you will then be able to debug it.
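To re-run just that one case, you can pass its node id to pytest (quoting it so the shell does not expand the brackets), for example:
$ pytest -q "test.py::test_random[56]"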

If you don't want your test to stop after the first failure, you can use subTest.
import unittest


class Test(unittest.TestCase):
    def test_0(self):
        for i in [1, 2, 3]:
            with self.subTest(i=i):
                self.assertEqual(squared(i), i**2)
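Run it with the standard runner (for example python -m unittest); each failing value of i is reported as its own sub-test failure while the loop continues over the remaining values.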
Docs


How to customize the passed test message in a dynamic way

I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages, that are set up before tests run. That's not what I need.
I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is supposed to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze


def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]

Seeding the random generator for tests

I made it work using factory-boy's get_random_state/set_random_state, although it wasn't easy. The biggest downside is that the values are big, so the thing that comes to mind is to write the state to a file. But then, if I accidentally run the tests without telling them to seed from the file, the value is lost. Now that I think about it, I could display the value too (think tee), but I'd still like to reduce it to 4-5 digits.
My idea is as follows. Normally when you run tests it somewhere says, "seed: 4215." Then, to reproduce the same result, I do SEED=4215 ./manage.py test or something similar.
I did some experiments with factory-boy, but then I realized that I can't achieve this even with the random module itself. I tried different ideas, and all of them have failed so far. The simplest is this:
import random
import os

if os.getenv('A'):
    random.seed(os.getenv('A'))
else:
    seed = random.randint(0, 1000)
    random.seed(seed)
    print('seed: {}'.format(seed))

print(random.random())
print(random.random())
/app $ A= python a.py
seed: 62
0.9279915658776743
0.17302689004804395
/app $ A=62 python a.py
0.461603098412836
0.7402019819205794
Why do the results differ? And how to make them equal?
Currently your types are different:
if os.getenv('A'):
    random.seed(os.getenv('A'))
else:
    seed = random.randint(0, 1000)
    random.seed(seed)
    print('seed: {}'.format(seed))
In the first case you have a str, and in the second an int. You can fix this by casting to an int in the first case:
random.seed(int(os.getenv("A")))
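For reference, a minimal sketch of the corrected script from the question with that cast applied (still reading the seed from the A environment variable):

import os
import random

env_seed = os.getenv('A')
if env_seed:
    seed = int(env_seed)  # cast so both branches seed with the same type
else:
    seed = random.randint(0, 1000)
    print('seed: {}'.format(seed))
random.seed(seed)

print(random.random())
print(random.random())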
I'm also not entirely following your need to seed random directly; I think with Factory Boy you can use factory.random.reseed_random (source).

How to collect values from a parametrized test function in a single pytest-xdist run

Using pytest-xdist I run a parametrized test function that has a variable I am interested in. I would like to store the value of that variable in a separate object and use that object later on to compose a test report.
I tried to use a session-scoped fixture that was supposed to run once, right after all tests had finished and the resulting object had become available. At first glance it worked fine, yet I found out that pytest-xdist has no built-in support for ensuring that a session-scoped fixture is executed exactly once. This means that my storage object was most probably overwritten from scratch N times, where N is the number of pytest-xdist workers that shared my tests. This is not the desired behavior.
A short snippet to illustrate the question
# test_x.py
from typing import List, Tuple

import pytest


@pytest.mark.parametrize('x', list(range(4)))
def test_x(x):
    result = (str(x), x ** 2)  # <-- here I have a dummy result, say a tuple


def write_report(results: List[Tuple[str, int]]):
    # here I would like to have ALL the results, i.e. `results` is expected to be:
    # [('0', 0), ('1', 1), ('2', 4), ('3', 9)]
    # considering the fact that test_x was parallelized by pytest-xdist to N > 1 workers
    ...
I run it with pytest -n auto test_x.py
Is there another way to collect all the result values in such a multiprocessing test invocation? I would appreciate any help on the matter.
Edit:
Yesterday I found a promising package, pytest_harvest, but I haven't made it work yet. Everything goes nicely as long as no pytest-xdist is involved. The wonderful results_bag fixture works as expected, storing the value I need and returning it in the session results hook pytest_sessionfinish when everything stops. But when you add xdist workers, suddenly there are no session results at all (it does return an empty dict, though).
# conftest.py
from pytest_harvest import is_main_process
from pytest_harvest import get_session_results_dct


def pytest_sessionfinish(session):
    dct = get_session_results_dct(session)
    if is_main_process(session):
        print(dct)  # prints an empty OrderedDict...


# test_parallel.py
import pytest


@pytest.mark.parametrize('x', list(range(3)))
def test_parallel(x, results_bag):
    results_bag.x = x
pytest -sv test_parallel.py -> OK
pytest -sv -n auto test_parallel.py -> an empty OrderedDict
Any thoughts on how to make it work decently?

Improving performance of behave tests

We run behave BDD tests in our pipeline, inside a docker container as part of the Jenkins pipeline. Currently it takes ~10 minutes to run all the tests. We are adding a lot of tests, and in a few months it might go up to 30 minutes. The run outputs a lot of information, and I believe that if I reduce the amount of information it outputs, I can get the tests to run faster. Is there a way to control how much information behave outputs? I want to print the information only if something fails.
I took a look at behave-parallel. It looks like it is Python 2.7, and we are on Python 3.
I was looking at various options behave provides.
behave -verbose=false folderName (I assumed that it will not output all the steps)
behave --logging-level=ERROR TQXYQ (I assumed it will print only if there is an error)
behave --logging-filter="Test Step" TQXYQ (I assumed it will print only the tests that have "Test Step" in them)
None of the above worked.
The current output looks like this
Scenario Outline: IsError is populated correctly based on Test Id -- #1.7 # TestName/Test.feature:187
Given the test file folder is set to /TestName/steps/ # common/common_steps.py:22 0.000s
And Service is running # common/common_steps.py:10 0.000s
Given request used is current.json # common/common_steps.py:26 0.000s
And request is modified to set X to q of type str # common/common_steps.py:111 0.000s
And request is modified to set Y to null of type str # common/common_steps.py:111 0.000s
And request is modified to set Z to USD of type str # common/common_steps.py:111 0.000s
When make a modification request # common/common_steps.py:37 0.203s
Then it returns 200 status code # common/common_steps.py:47 0.000s
And transformed result has IsError with 0 of type int # common/common_steps.py:92 0.000s
And transformed result has ErrorMessages contain [] # common/common_steps.py:52 0.000s
I want to print all of this only if there is an error. If everything is passing, I don't want to display this information.
I think the default log level INFO will not impact the performance of your tests.
I am also using a docker container to run the regression suite, and it takes about 2 hours to run 2300 test scenarios. It used to take nearly a day; here is what I did:
1. Run the whole test suite in parallel.
This is the most important change and reduces the execution time the most. We spent a lot of effort making the regression suite parallelizable:
- Make tests atomic, autonomous and independent, so that you can run all of them in parallel effectively.
- Create a parallel runner to run tests in multiple processes. I am using the multiprocessing and subprocess libraries to do this. I would not recommend behave-parallel because it is no longer actively supported. You can refer to this link: http://blog.crevise.com/2018/02/executing-parallel-tests-using-behave.html?m=1
- Use Docker Swarm to add more nodes to the Selenium Grid. You can scale up to add more nodes, and the maximum number of nodes depends on the number of CPUs; the best practice is number of nodes = number of CPUs. I have 4 PCs, each with 4 cores, so I can scale up to 1 hub and 15 nodes.
2. Optimize synchronization in your framework.
- Remove time.sleep().
- Remove implicit waits; use explicit waits instead (see the sketch below).
Hope it helps.
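As an illustration of the last point, a minimal Selenium sketch of replacing a fixed sleep with an explicit wait (the driver setup, URL and locator are only placeholders, not part of the original answer):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()  # in a grid setup this would be a webdriver.Remote(...)
driver.get('https://example.com')  # placeholder URL

# instead of time.sleep(10), block only until the condition is actually met
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, 'result'))  # placeholder locator
)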
Well, I have solved this in a traditional way, but I'm not sure how effective it is. I just started on this yesterday and am now trying to build reports out of it. The approach is below; suggestions are welcome.
This also handles parallel execution for example-driven scenarios (Scenario Outlines).
parallel_behave.py
Run command (mimics all the params of behave command)
py parallel_behave.py -t -d -f ......
import json
import os
import subprocess
import time

initial_command = 'behave -d -t <tags>'
# the above command returns the eligible cases; it may not be the right approach,
# but it works well for me

r = subprocess.Popen(initial_command.split(' '), stdout=subprocess.PIPE, bufsize=0)
finalsclist = []
_tmpstr = ''
for out in r.stdout:
    out = out.decode('utf-8')
    # print(out)
    if out.startswith('['):
        _tmpstr += out
    if out.startswith('{'):
        _tmpstr += out
    if out.startswith(']'):
        _tmpstr += out
        break

scenarionamedt = json.loads(_tmpstr)
for sc in scenarionamedt:
    for s in sc['elements']:
        finalsclist.append(s['name'])
Now finalsclist contains the scenario names.
from multiprocessing import Pool

ts = int(time.time())  # stand-in for the original's undefined timestamp value

def foo(scenarioname):
    cmd = "behave -n '{}' -o ./report/output{}.json".format(scenarioname, ts)
    subprocess.call(cmd, shell=True)  # run each scenario in its own behave process

pool = Pool(os.cpu_count())  # derive based on the power of the processor (here: CPU count)
pool.map(foo, finalsclist)
This will create that many processes of individual behave calls and generate the JSON output under the report folder.
*** There was a reference from https://github.com/hugeinc/behave-parallel, but that works at the feature level. I just extended it to scenarios and examples. ***

py.test unit testing over a fixed range of parameter values with failure records

I've written the following type of unit tests for a tool:
import numpy as np

param_one = np.random.randint(100, 1000)
param_two = np.random.randint(20, 200)

data1 = generate_random_data(param_one, param_two)
data2 = generate_random_data(param_one, param_two)


def test_one(data1, data2):
    assert something


def test_two(data1, data2):
    assert something
I know the tests can fail for certain combinations of these parameters, so I would like to repeat the py.test run over specified ranges of the two parameters and record which combinations fail.
Even better would be if I could:
- save the random data under test when a certain test fails, and
- repeat each test (for each combination of these parameters) 10 times and record the frequency of success/failure.
How can I achieve this under py.test or unittest? Thanks a lot.
I looked up documentation on py.test website and some previous answers here, but the terms are all confusing and it is not easy to follow them.
Obviously, I could do this outside the testing framework, but I need it inside the unit testing mechanism so I can set up continuous integration better.
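One possible way to express this directly in py.test, sketched here under the assumption that generate_random_data and the assertions stay as in the question, is to stack parametrize decorators over the two ranges plus a repeat counter; every failing combination then appears as its own test id in the report:

import pytest


@pytest.mark.parametrize('repeat', range(10))  # repeat every combination 10 times
@pytest.mark.parametrize('param_two', range(20, 200, 20))
@pytest.mark.parametrize('param_one', range(100, 1000, 100))
def test_tool(param_one, param_two, repeat):
    data1 = generate_random_data(param_one, param_two)  # placeholders from the question
    data2 = generate_random_data(param_one, param_two)
    assert something(data1, data2)  # placeholder assertion

Each failing case is reported with an id that contains the parameter values, e.g. test_tool[100-20-0], so the failing combinations (and the repeat index) are recorded automatically, and pytest --last-failed can re-run only those.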
