How to customize the passed test message in a dynamic way - python

I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages, that are set up before tests run. That's not what I need.

"I might be guilty of using pytest in a way I'm not supposed to"
Not at all - this is exactly the kind of use case the pytest plugin system is meant to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]

Related

CANoe: How to select and start test cases from XML Test Module from Python using CANoe COM interface?

Currently I am able to:
start CANoe application
load a CANoe configuration file
load a test setup file
def load_test_setup(self, canoe_test_setup_file: str = None) -> None:
    logger.info(
        f'Loading CANoe test setup file <{canoe_test_setup_file}>.')
    if self.measurement.Running:
        logger.info(
            'Simulation is currently running, so a new test setup cannot be loaded!')
        return
    self.test_setup.TestEnvironments.Add(canoe_test_setup_file)
    test_environment = self.test_setup.TestEnvironments.Item(1)
    logger.info(f'Loaded test environment is <{test_environment.Name}>.')
How can I access the XML Test Module loaded with the test setup (tse) file and select tests to be executed?
The second-to-last line in your snippet is most probably causing the issue.
I have been trying to fix this issue for quite some time now and finally found the solution.
Somehow, when you execute the line self.test_setup.TestEnvironments.Item(1), win32com creates an object of type TestSetupItem, which doesn't have the necessary properties or methods to access the test cases. Instead, we want to access objects of the collection types TestSetupFolders or TestModules. win32com creates an object of TestSetupItem type even though I have a single XML Test Module (called AutomationTestSeq) in the Test Environment, as you can see here.
There are three possible solutions that I found.
1. Manually clearing the generated cache before each run.
Using win32com.client.DispatchWithEvents or win32com.client.gencache.EnsureDispatch generates a bunch of Python files that describe CANoe's object model.
If you have used either of those before, TestEnvironments.Item(1) will always return TestSetupItem instead of objects of the more appropriate types.
To remove the cache you need to delete the C:\Users\{username}\AppData\Local\Temp\gen_py\{python version} folder.
Doing this every time is of course not very practical.
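If you do want to automate that cleanup, here is a small sketch, assuming your pywin32 version exposes the cache location as win32com.__gen_path__:

# Sketch: delete the win32com gen_py cache from Python instead of removing the folder by hand.
# Assumption: win32com.__gen_path__ points at the generated-cache directory in your pywin32 release.
import shutil
import win32com

shutil.rmtree(win32com.__gen_path__, ignore_errors=True)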
2. Force win32com to always use dynamic dispatch.
You can do this by using:
canoe = win32com.client.dynamic.Dispatch("CANoe.Application")
Any objects you create using canoe from now on will be dynamically dispatched.
Forcing dynamic dispatch is easier than manually clearing the cache folder every time, and it has always given me good results. But doing this will not give you any insight into the objects: you won't be able to see the acceptable properties and methods for the objects.
3. Typecast TestSetupItem to TestSetupFolders or TestModules.
This carries the risk that an incorrect typecast will give you unexpected results, but it has worked well for me so far.
In short: win32.CastTo(test_env, "ITestEnvironment2"). This will ensure that you are using the recommended object hierarchy as per the CANoe technical reference.
Note that you will also have to typecast TestSequenceItem to TestCase to be able to access the test case verdict and to enable/disable test cases.
Below is a decent example script.
"""Execute XML Test Cases without a pass verdict"""
import sys
from time import sleep
import win32com.client as win32

CANoe = win32.DispatchWithEvents("CANoe.Application")
CANoe.Open("canoe.cfg")

test_env = CANoe.Configuration.TestSetup.TestEnvironments.Item('Test Environment')
# Cast required since test_env is originally of type <ITestEnvironment>
test_env = win32.CastTo(test_env, "ITestEnvironment2")

# Get the XML TestModule (type <TSTestModule>) in the test setup
test_module = test_env.TestModules.Item('AutomationTestSeq')

# The {.Sequence} property returns a collection of <TestCases> or <TestGroup>
# or <TestSequenceItem>, which is more generic
seq = test_module.Sequence
for i in range(1, seq.Count + 1):
    # Cast from <ITestSequenceItem> to <ITestCase> to access {.Verdict}
    # and the {.Enabled} property
    tc = win32.CastTo(seq.Item(i), "ITestCase")
    if tc.Verdict != 1:  # Verdict 1 is pass
        tc.Enabled = True
        print(f"Enabling Test Case {tc.Ident} with verdict {tc.Verdict}")
    else:
        tc.Enabled = False
        print(f"Disabling Test Case {tc.Ident} since it has already passed")

CANoe.Measurement.Start()
sleep(5)  # Sleep because measurement start is not instantaneous

test_module.Start()
sleep(1)
Just continue what you have done.
The TestEnvironment contains the TestModules. Each TestModule contains a TestSequence which in turn contains the TestCases.
Keep in mind that you cannot start individual TestCases, only the TestModule. But you can enable and disable individual TestCases before execution by using the COM API.
(typing this from the top of my head, might not work 100%)
test_module = test_environment.TestModules.Item(1)  # or 2 or whatever
test_sequence = test_module.Sequence

for i in range(1, test_sequence.Count + 1):
    test_case = test_sequence.Item(i)
    if ...:
        test_case.Enabled = False  # or True

test_module.Start()
You have to keep in mind that a TestSequence can also contain other TestSequences (i.e. a TestGroup). This depends on how your TestModule is set up. If so, you have to take care of that in your loop and descend into these TestGroups while searching for your TestCase of interest.
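A rough recursive sketch of that descent, under the assumption that a TestGroup exposes its children through a Sequence collection just like the TestModule does (the ITestCase interface name follows the CastTo examples above; check the object browser of your CANoe version):

import win32com.client as win32

def walk_sequence(sequence, on_test_case):
    # Visit every item; descend into nested groups, apply on_test_case to leaf test cases.
    for i in range(1, sequence.Count + 1):
        item = sequence.Item(i)
        try:
            # Assumption: a TestGroup exposes a nested .Sequence collection.
            nested = item.Sequence
        except AttributeError:
            nested = None
        if nested is not None:
            walk_sequence(nested, on_test_case)
        else:
            on_test_case(win32.CastTo(item, "ITestCase"))

# Example: disable every test case found anywhere in the module.
# walk_sequence(test_module.Sequence, lambda tc: setattr(tc, "Enabled", False))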

write pytest test function return value to file with pytest.hookimpl

I am looking for a way to access the return value of a test function in order to include that value in a test report file (similar to http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures).
Code example that I would like to use:
# modified example code from http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures
import pytest
import os.path

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.passed:
        mode = "a" if os.path.exists("return_values.txt") else "w"
        with open("return_values.txt", mode) as f:
            # THE FOLLOWING LINE IS THE ONE I CANNOT FIGURE OUT
            # HOW DO I ACCESS THE TEST FUNCTION RETURN VALUE?
            return_value = item.return_value
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")
I expect the return value to be written to the file "return_values.txt". Instead, I get an AttributeError.
Background (in case you can recommend a totally different approach):
I have a Python library for data analysis on a given problem. I have a standard set of test data on which I routinely run my analysis to produce various "benchmark" metrics for the quality of the analysis algorithms. For example, one such metric is the trace of a normalized confusion matrix produced by the analysis code (which I would like to be as close to 1 as possible). Another metric is the CPU time to produce an analysis result.
I am looking for a nice way to include these benchmark results into a CI framework (currently Jenkins), such that it becomes easy to see whether a commit improves or degrades the analysis performance. Since I am already running pytest in the CI sequence, and since I would like to use various features of pytest for my benchmarks (fixtures, marks, skipping, cleanup) I thought about simply adding a post-processing hook in pytest (see http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures) that collects test function run time and return values and reports them (or only those which are marked as benchmarks) into a file, which will be collected and archived as a test artifact by my CI framework.
I am open to other ways to solve this problem, but my google search conclusion is that pytest is the framework closest to already providing what I need.
Sharing the same problem, here is a different solution I came up with.
Use the record_property fixture in the test:
def test_mytest(record_property):
    record_property("key", 42)
Then, in conftest.py, we can use the pytest_runtest_teardown hook:
# conftest.py

def pytest_runtest_teardown(item, nextitem):
    results = dict(item.user_properties)
    if not results:
        return
    with open(f'{item.name}_return_values.txt', 'a') as f:
        for key, value in results.items():
            f.write(f'{key} = {value}\n')
The content of test_mytest_return_values.txt is then:
key = 42
Two important notes:
This code will be executed even if the test failed. I couldn't find a way to get the outcome of the test.
This can be combined with heofling's answer, using results = dict(item.user_properties) to obtain the keys and values that were added in the test, instead of adding a dict to config and then accessing it in the test.
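Regarding the first note: the outcome can be made available by stashing each phase's report on the item from a pytest_runtest_makereport hookwrapper (a pattern shown on the same pytest docs page linked in the question) and checking it in the teardown hook; a minimal sketch:

# conftest.py (sketch)
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # attach a report for each phase ("setup", "call", "teardown") to the item
    setattr(item, "rep_" + rep.when, rep)

def pytest_runtest_teardown(item, nextitem):
    rep = getattr(item, "rep_call", None)
    if rep is None or not rep.passed:
        return  # skip tests that failed (or never ran their call phase)
    # ... write item.user_properties to the file as above ...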
pytest ignores test functions' return values, as can be seen in the code:
@hookimpl(trylast=True)
def pytest_pyfunc_call(pyfuncitem):
    testfunction = pyfuncitem.obj
    ...
    testfunction(**testargs)
    return True
You can, however, store anything you need in the test function; I usually use the config object for that. Example: put the following snippet in your conftest.py:
import pathlib
import pytest

def pytest_configure(config):
    # create the dict to store custom data
    config._test_results = dict()

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.passed:
        # get the custom data
        return_value = item.config._test_results.get(item.nodeid, None)
        # write to file
        report = pathlib.Path('return_values.txt')
        with report.open('a') as f:
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")
Now store the data in tests:
def test_fizz(request):
    request.config._test_results[request.node.nodeid] = 'mydata'

How to run the python unittest N number of times

I have a Python unittest like the one below, and I want to run this whole test N number of times.
class Test(TestCase):
    def test_0(self):
        .........
        .........
        .........

Test.Run(name=__name__)
Any Suggestions?
You can use parameterized tests. There are different modules to do that. I use nose to run my unittests (it is more powerful than the default unittest module), and there's a package called nose-parameterized that allows you to write a factory test and run it a number of times with different values for the variables you want.
If you don't want to use nose, there are several other options for running parameterized tests.
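For example, with the parameterized package (the successor to nose-parameterized, which works with plain unittest and nose alike), a factory test could look roughly like this; myfunc and the expected values are placeholders, not code from the question:

import unittest
from parameterized import parameterized  # pip install parameterized

class TestMyFunc(unittest.TestCase):
    # the same test body runs once per (argument, expected) pair
    @parameterized.expand([(0, 0), (2, 4), (3, 9)])
    def test_myfunc(self, argument, expected):
        self.assertEqual(myfunc(argument), expected)  # myfunc is a placeholder

if __name__ == '__main__':
    unittest.main()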
Alternatively, you can execute any number of test conditions in a single test (as soon as one fails, the test will report an error). In your particular case this may make more sense than parameterized tests, because in reality it is only one test; it just needs a large number of runs of the function to reach some level of confidence that it is working properly. So you can do:
import random

class Test(TestCase):
    def test_myfunc(self):
        for _ in range(100):
            input = random.random()
            self.assertEquals(input, input + 2)

Test.Run(name=__name__)
Why? Because the test_0 method contains a random option, so each time it runs it selects a random configuration and tests against it, so I am not testing the same thing multiple times.
Randomness in a test makes it non-reproducible. One day you might get 1 failure out of 100, and when you run it again, it’s already gone.
Use a modern testing tool to parametrize your test with a sequential number, then use random.seed to have a random but reproducible test case for each number in a sequence.
portusato suggests nose, but pytest is a more modern and popular tool:
import random
import pytest

@pytest.mark.parametrize('i', range(100))
def test_random(i):
    orig_state = random.getstate()
    try:
        random.seed(i)
        data = generate_random_data()
        assert my_algorithm(data) == works
    finally:
        random.setstate(orig_state)
pytest.mark.parametrize “explodes” your single test_random into 100 individual tests — test_random[0] through test_random[99]:
$ pytest -q test.py
....................................................................................................
100 passed in 0.14 seconds
Each of these tests generates different, random, but reproducible input data to your algorithm. If test_random[56] fails, it will fail every time, so you will then be able to debug it.
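To rerun just that one case while debugging, you can select it by its test id on the command line (the brackets usually need quoting in the shell):
$ pytest -q "test.py::test_random[56]"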
If you don't want your test to stop after the first failure, you can use subTest.
class Test(TestCase):
    def test_0(self):
        for i in [1, 2, 3]:
            with self.subTest(i=i):
                self.assertEqual(squared(i), i**2)
Docs

py.test unit testing over a fixed range of parameter values with failure records

I've written the following type of unit tests for a tool:
import numpy as np

param_one = np.random.randint(100, 1000)
param_two = np.random.randint(20, 200)
data1 = generate_random_data(param_one, param_two)
data2 = generate_random_data(param_one, param_two)

def test_one(data1, data2):
    assert something

def test_two(data1, data2):
    assert something
I know the tests can fail for certain combinations of these parameters, so I would like to repeat the py.test run for specified ranges of the two parameters and record which combinations are failing.
Even better would be if I could:
1. save the random data under test when a certain test fails, and
2. repeat each test (for each combination of these parameters 10 times) and record the frequency of success/failure.
How can I achieve this under py.test or unittest? Thanks a lot.
I looked up the documentation on the py.test website and some previous answers here, but the terms are all confusing and it is not easy to follow them.
Obviously, I could do this outside the testing framework, but I need it inside the unit testing mechanism so I can set up continuous integration better.
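One possible sketch for the parameter-sweep part, building on the pytest.mark.parametrize approach from the previous question (generate_random_data and the assertion are placeholders taken from the question, not working code): parametrizing over the product of the two ranges turns every combination into its own test, so the report directly shows which combinations fail.

import itertools
import pytest

# every (param_one, param_two) combination becomes a separate test with its own id,
# e.g. test_one[100-20], so failing combinations are visible in the report
PARAM_GRID = list(itertools.product(range(100, 1000, 100), range(20, 200, 20)))

@pytest.mark.parametrize("param_one,param_two", PARAM_GRID)
def test_one(param_one, param_two):
    data1 = generate_random_data(param_one, param_two)  # placeholder from the question
    data2 = generate_random_data(param_one, param_two)
    assert something(data1, data2)  # placeholder assertion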

Python: Using logging info in nose/unittest?

I have a test function that manipulates the internal state of an object. The object logs the following using logging.info().
INFO:root:_change: test light red
INFO:root:_change: test light green
INFO:root:_change: test light yellow
How can I incorporate this into a nose or unittest function so that I can have a test similar to this?
def test_thing():
    expected_log_output = "INFO:root:_change: test light red\n" + \
                          "INFO:root:_change: test light green\n" + \
                          "INFO:root:_change: test light yellow\n"
    run_thing()
    assert actual_log_output matches expected_log_output
When it comes to testing my logging, what I usually do is mock out my logger and ensure it is called with the appropriate params. I typically do something like this:
class TestBackupInstantiation(TestCase):
    @patch('core.backup.log')
    def test_exception_raised_when_instantiating_class(self, m_log):
        with self.assertRaises(IOError) as exc:
            Backup(AFakeFactory())
        assert_equal(m_log.error.call_count, 1)
        assert_that(exc.exception, is_(IOError))
So you can even assert exactly what the logger was called with, to validate the message.
I believe you can do something like:
m_log.error.assert_called_with("foo")
I might also add that when it comes to this kind of testing, I love using test frameworks like flexmock and mock.
Also, when it comes to validating matchers, py-hamcrest is awesome.
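If you would rather assert on the formatted log lines themselves, as in the question, the standard library's assertLogs context manager (Python 3.4+) captures records in exactly that LEVEL:logger:message format; a minimal sketch, assuming run_thing() emits the INFO records shown above on the root logger:

import unittest

class TestThing(unittest.TestCase):
    def test_thing(self):
        # captures all INFO-and-above records emitted inside the block
        with self.assertLogs(level='INFO') as captured:
            run_thing()  # assumed to log via logging.info() as in the question
        self.assertEqual(captured.output, [
            'INFO:root:_change: test light red',
            'INFO:root:_change: test light green',
            'INFO:root:_change: test light yellow',
        ])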
