I am writing some Python unit tests using the "unittest" framework and run them in PyCharm. Some of the tests compare a long generated string to a reference value read from a file. If this comparison fails, I would like to see the diff of the two compared strings using PyCharms diff viewer.
So the code is like this:
actual = open("actual.csv").read()
expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
self.assertMultiLineEqual(actual, expected)
And PyCharm nicely identifies the test as a failure and provides a link in the results window to click which opens the diff viewer. However, due to how unittest shortens the results, I get results such as this in the diff viewer:
Left side:
'time[57 chars]ercent
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;[110 chars]32
'
Right side:
'time[57 chars]ercen
0;1;1;1;1;1;1;1
0;2;1;3;4;2;3;1
0;3;2[109 chars]32
'
Now, I would like to get rid of all the [X chars] parts and just see the whole file(s) and the actual diff fully visualized by PyCharm.
I tried to look into unittest code but could not find a configuration option to print full results. There are some variables such as maxDiff and _diffThreshold but they have no impact on this print.
Also, I tried running this under py.test, but there PyCharm's support was even weaker (not even links to the failed tests).
Is there some trick using difflib with unittest, or maybe some trick with another Python test framework, to do this?
The TestCase.maxDiff=None answers given in many places only make sure that the diff shown in the unittest output is of full length. In order to also get the full diff in the <Click to see difference> link, you also have to set unittest.util._MAX_LENGTH.
import unittest
# Show full diff in unittest
unittest.util._MAX_LENGTH = 2000
Source: https://stackoverflow.com/a/23617918/1878199
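For example, a minimal sketch combining both settings (the class name and file paths here are placeholders):

import unittest
import unittest.util

unittest.util._MAX_LENGTH = 20000  # private API: disables the [X chars] shortening

class CsvComparisonTest(unittest.TestCase):
    maxDiff = None  # full-length diff in the unittest failure message

    def test_csv_matches_reference(self):
        actual = open("actual.csv").read()      # placeholder paths
        expected = open("expected.csv").read()
        self.assertMultiLineEqual(actual, expected)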
Well, I managed to hack myself around this for my test purposes. Instead of using the assertEqual method from unittest, I wrote my own and use that inside the unittest test cases. On failure, it gives me the full texts and the PyCharm diff viewer also shows the full diff correctly.
My assert statement is in a module of its own (t_assert.py), and looks like this
def equal(expected, actual):
    msg = "'" + actual + "' != '" + expected + "'"
    assert expected == actual, msg
In my test I then call it like this
def test_example(self):
    actual = open("actual.csv").read()
    expected = pkg_resources.resource_string('my_package', 'expected.csv').decode('utf8')
    t_assert.equal(expected, actual)
    #self.assertEqual(expected, actual)
Seems to work so far..
A related problem here is that unittest.TestCase.assertMultiLineEqual is implemented with difflib.ndiff(). This generates really big diffs that contain all shared content along with the differences. If you monkey patch to use difflib.unified_diff() instead, you get much smaller diffs that are less often truncated. This often avoids the need to set maxDiff.
import unittest
from unittest.case import _common_shorten_repr
import difflib

def assertMultiLineEqual(self, first, second, msg=None):
    """Assert that two multi-line strings are equal."""
    self.assertIsInstance(first, str, 'First argument is not a string')
    self.assertIsInstance(second, str, 'Second argument is not a string')

    if first != second:
        firstlines = first.splitlines(keepends=True)
        secondlines = second.splitlines(keepends=True)
        if len(firstlines) == 1 and first.strip('\r\n') == first:
            firstlines = [first + '\n']
            secondlines = [second + '\n']
        standardMsg = '%s != %s' % _common_shorten_repr(first, second)
        diff = '\n' + ''.join(difflib.unified_diff(firstlines, secondlines))
        standardMsg = self._truncateMessage(standardMsg, diff)
        self.fail(self._formatMessage(msg, standardMsg))

unittest.TestCase.assertMultiLineEqual = assertMultiLineEqual
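Once the patch is applied (for example from a shared helper module imported before the test cases are defined), a failing comparison produces a compact unified diff; a hypothetical example:

class PatchedDiffTest(unittest.TestCase):
    def test_shows_unified_diff(self):
        self.assertMultiLineEqual("a\nb\nc\n", "a\nX\nc\n")

# The failure message now contains only the changed lines plus context,
# e.g. "@@ -1,3 +1,3 @@" followed by " a", "-b", "+X", " c".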
I'm the author of pythonizer and I'm trying to convert Perl-style function templates to Python. When I generate what I think is the equivalent code, the value of the loop variable is its last value instead of the value it had when the function template came into existence. Any ideas on code to capture the proper loop variable values? For example:
# test function templates per the perlref documentation
use Carp::Assert;

sub _colors {
    return qw(red blue green yellow orange purple white black);
}

for my $name (_colors()) {
    no strict 'refs';
    *$name = sub { "<FONT COLOR='$name'>@_</FONT>" };
}

assert(red("careful") eq "<FONT COLOR='red'>careful</FONT>");
assert(green("light") eq "<FONT COLOR='green'>light</FONT>");
print "$0 - test passed!\n";
Gets translated to:
#!/usr/bin/env python3
# Generated by "pythonizer -v0 test_function_templates.pl" v0.978 run by snoopyjc on Thu May 19 10:49:12 2022
# Implied pythonizer options: -m
# test function templates per the perlref documentation
import builtins, perllib, sys

_str = lambda s: "" if s is None else str(s)
perllib.init_package("main")
# SKIPPED: use Carp::Assert;

def _colors(*_args):
    return "red blue green yellow orange purple white black".split()

_args = perllib.Array()
builtins.__PACKAGE__ = "main"
for name in _colors():
    pass  # SKIPPED: no strict 'refs';
    def _f10(*_args):
        #nonlocal name
        return f"<FONT COLOR='{name}'>{perllib.LIST_SEPARATOR.join(map(_str,_args))}</FONT>"
    globals()[name] = _f10

print(red("careful"))
assert _str(red("careful")) == "<FONT COLOR='red'>careful</FONT>"
assert _str(green("light")) == "<FONT COLOR='green'>light</FONT>"
perllib.perl_print(f"{sys.argv[0]} - test passed!")
(I commented out the nonlocal because Python complains it's a syntax error, and added the print statement.) The added print statement writes out <FONT COLOR='black'>careful</FONT> instead of the proper <FONT COLOR='red'>careful</FONT>.
How do I get it to capture the red value of the loop counter when the function red is generated?
The function _f10 does not bind the parameter name correctly.
The name used inside the function therefore depends on the last loop iteration: name is in global scope when the function is eventually called, so you still get a result, just not the one you expected.
You should bind name into the function by adding a name argument to it and partially applying the function, like this:
from functools import partial

def _f10(name, *_args):
    return f"<FONT COLOR='{name}'>{perllib.LIST_SEPARATOR.join(map(_str,_args))}</FONT>"

globals()[name] = partial(_f10, name)
So each global is bound to a slightly different function (with its first argument pre-bound).
⇒ Finding the identifiers to bind into the function from the Perl code could be difficult… You could still try to bind all local variables using locals() in some way, but that would be a bit tricky.
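To illustrate the idea outside of the generated perllib code, here is a minimal standalone sketch of the late-binding problem and the partial() fix (the function and variable names are made up):

from functools import partial

def _font_tag(name, text):
    return f"<FONT COLOR='{name}'>{text}</FONT>"

tags = {}
for name in ("red", "green"):
    tags[name] = partial(_font_tag, name)  # freezes the current value of name

assert tags["red"]("careful") == "<FONT COLOR='red'>careful</FONT>"
assert tags["green"]("light") == "<FONT COLOR='green'>light</FONT>"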
As mentioned in other answers, the problem is that name remains a reference to the variable rather than becoming a string literal as intended.
Another way to achieve this is to use a templated string as code and then execute the resulting string. A benefit of this approach is that a future reader can validate exactly what's being executed by printing the resulting templated string.
Taking a step back and considering the domain of the problem, I have created two solutions. The first is what I think it would look like if I were to manually translate the code; the second takes your literal example and tries to make it work.
Manually Transcribed
Here I attempt to manually transcribe the Perl code to Python (knowing very little about Perl). I think this illustrates the closest 1:1 behavior to the original while attempting to maintain the spirit of how that's being accomplished in Perl.
This is my recommended solution, as it results in a very elegant 1:1 ratio: each line of Python accomplishes exactly the same work as the corresponding line of the original Perl code (if you can excuse what would typically be seen as poor style in the Python paradigm).
import sys
from string import Template

def _colors(*_args):
    return "red blue green yellow orange purple white black".split()

for name in _colors():
    pass  # SKIPPED: no strict 'refs';
    eval(compile(Template('''global $name\n$name = lambda x: f"<FONT COLOR='$name'>{x}</FONT>"''').substitute({'name': name}), '<string>', 'exec'))

assert red("careful") == "<FONT COLOR='red'>careful</FONT>"
assert green("light") == "<FONT COLOR='green'>light</FONT>"
print(f"{sys.argv[0]} - test passed!")
Updated OP Code
Here I attempt to replicate the literal code provided by the OP, making it work with as little modification as possible. (I prefer the manually transcribed version.)
Note that I'm unable to test this as I do not have perllib installed.
#!/usr/bin/env python3
# Generated by "pythonizer -v0 test_function_templates.pl" v0.978 run by snoopyjc on Thu May 19 10:49:12 2022
# Implied pythonizer options: -m
# test function templates per the perlref documentation
import builtins, perllib, sys
from string import Template

_str = lambda s: "" if s is None else str(s)
perllib.init_package("main")
# SKIPPED: use Carp::Assert;

def _colors(*_args):
    return "red blue green yellow orange purple white black".split()

_args = perllib.Array()
builtins.__PACKAGE__ = "main"
for name in _colors():
    pass  # SKIPPED: no strict 'refs';
    eval(compile(Template('''
def _f10(*_args):
    return f"<FONT COLOR='$name'>{perllib.LIST_SEPARATOR.join(map(_str,_args))}</FONT>"
globals()['$name'] = _f10
''').substitute({'name': name}), '<string>', 'exec'))

print(red("careful"))
assert _str(red("careful")) == "<FONT COLOR='red'>careful</FONT>"
assert _str(green("light")) == "<FONT COLOR='green'>light</FONT>"
perllib.perl_print(f"{sys.argv[0]} - test passed!")
Additional Considerations
Security - Remote Code Execution
Typically the use of eval/exec/compile should be done with great caution, as any input value (in this case, the colors) could be an arbitrary code block. That's very bad if the input values can be controlled by the end user in any way. That said, this is presumably also true for Perl, and it holds no matter which solution you choose.
So if for any reason the input data is untrusted, you would want to do some more source validation, etc. Normally I would be highly concerned, but IMO code execution may be an acceptable risk when translating code from one language to another. You are probably already executing the original code to validate its functionality, so I'm assuming the source is 100% trusted.
Clobbering
I'm sure you may be aware, but it is worth noting that there are some serious problems with auto-generating global objects like this. You should probably test what happens when you attempt to define a global whose name collides with an existing keyword. My expectation is that in Python it will generate an error, while in Perl it will work the way a monkeypatch works in Python. You may want to consider adding a prefix to all global variables defined this way, or decide whether this kind of redefinition of keywords/builtins/existing names is allowable.
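If you decide to guard against that, a hedged sketch of such a check might look like this (the helper name and the policy are up to you):

import builtins
import keyword

def safe_bind(name, value, namespace):
    # refuse to overwrite keywords, builtins, or already-defined names
    if keyword.iskeyword(name) or hasattr(builtins, name) or name in namespace:
        raise NameError("refusing to clobber existing name: %s" % name)
    namespace[name] = value

# usage inside the loop: safe_bind(name, _f10, globals())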
Ok, this took some doing, but I finally figured out that template functions need to use a nested def in order to capture the template value at the time of templating. Here is the code generated by the latest pythonizer, which solves this issue:
#!/usr/bin/env python3
# Generated by "pythonizer stack_72307337.pl" v0.995 run by snoopyjc on Fri Oct 7 23:28:01 2022
# Implied pythonizer options: -m
# test function templates per the perlref documentation
import perllib, sys, builtins

_str = lambda s: "" if s is None else str(s)
perllib.init_package("main")
# SKIPPED: use Carp::Assert;

def _colors(*_args):
    return "red blue green yellow orange purple white black".split()

_args = perllib.Array()
builtins.__PACKAGE__ = "main"
for name in _colors():
    pass  # SKIPPED: no strict 'refs';
    def _f10(name):
        def _f10template(*_args):
            nonlocal name
            return f"<FONT COLOR='{name}'>{perllib.LIST_SEPARATOR.join(map(_str,_args))}</FONT>"
        return _f10template
    globals()[name] = _f10(name)

assert _str(red("careful")) == "<FONT COLOR='red'>careful</FONT>"
assert _str(green("light")) == "<FONT COLOR='green'>light</FONT>"
perllib.perl_print(f"{sys.argv[0]} - test passed!")
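For reference, the same capture can be achieved without the extra factory function by binding the loop variable as a default argument value, which is evaluated at definition time. A sketch, reusing _str and perllib from the generated code above:

for name in _colors():
    def _f10(*_args, name=name):  # the default freezes the current name
        return f"<FONT COLOR='{name}'>{perllib.LIST_SEPARATOR.join(map(_str,_args))}</FONT>"
    globals()[name] = _f10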
I am looking for a way to access the return value of a test function in order to include that value in a test report file (similar to http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures).
Code example that I would like to use:
# modified example code from http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures
import pytest
import os.path

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.passed:
        mode = "a" if os.path.exists("return_values.txt") else "w"
        with open("return_values.txt", mode) as f:
            # THE FOLLOWING LINE IS THE ONE I CANNOT FIGURE OUT:
            # HOW DO I ACCESS THE TEST FUNCTION RETURN VALUE?
            return_value = item.return_value
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")
I expect the return value to be written to the file "return_values.txt". Instead, I get an AttributeError.
Background (in case you can recommend a totally different approach):
I have a Python library for data analysis on a given problem. I have a standard set of test data on which I routinely run my analysis to produce various "benchmark" metrics on the quality of the analysis algorithms. For example, one such metric is the trace of a normalized confusion matrix produced by the analysis code (which I would like to be as close to 1 as possible). Another metric is the CPU time to produce an analysis result.
I am looking for a nice way to include these benchmark results into a CI framework (currently Jenkins), such that it becomes easy to see whether a commit improves or degrades the analysis performance. Since I am already running pytest in the CI sequence, and since I would like to use various features of pytest for my benchmarks (fixtures, marks, skipping, cleanup) I thought about simply adding a post-processing hook in pytest (see http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures) that collects test function run time and return values and reports them (or only those which are marked as benchmarks) into a file, which will be collected and archived as a test artifact by my CI framework.
I am open to other ways to solve this problem, but my google search conclusion is that pytest is the framework closest to already providing what I need.
Sharing the same problem, here is a different solution I came up with.
Use the record_property fixture in the test:
def test_mytest(record_property):
    record_property("key", 42)
and then in conftest.py we can use the pytest_runtest_teardown hook:
# conftest.py
def pytest_runtest_teardown(item, nextitem):
    results = dict(item.user_properties)
    if not results:
        return
    with open(f'{item.name}_return_values.txt', 'a') as f:
        for key, value in results.items():
            f.write(f'{key} = {value}\n')
and then the content of test_mytest_return_values.txt:
key = 42
Two important notes:
This code will be executed even if the test failed. I couldn't find a way to get the outcome of the test (a possible workaround is sketched after this list).
This can be combined with heofling's answer, using results = dict(item.user_properties) to obtain the keys and values that were added in the test, instead of adding a dict to config and then accessing it in the test.
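Regarding the first note: the pytest docs' recipe for making test result information available in fixtures suggests a way to get at the outcome — stash the call-phase report on the item from a hookwrapper, then check it in teardown. A sketch replacing the teardown hook above:

# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)  # rep_setup / rep_call / rep_teardown

def pytest_runtest_teardown(item, nextitem):
    rep = getattr(item, "rep_call", None)
    if rep is None or not rep.passed:
        return  # skip tests that failed or never reached their call phase
    results = dict(item.user_properties)
    if results:
        with open(f'{item.name}_return_values.txt', 'a') as f:
            for key, value in results.items():
                f.write(f'{key} = {value}\n')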
pytest ignores test functions' return values, as can be seen in the code:
@hookimpl(trylast=True)
def pytest_pyfunc_call(pyfuncitem):
    testfunction = pyfuncitem.obj
    ...
    testfunction(**testargs)
    return True
You can, however, store anything you need in the test function; I usually use the config object for that. Example: put the following snippet in your conftest.py:
import pathlib
import pytest

def pytest_configure(config):
    # create the dict to store custom data
    config._test_results = dict()

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.passed:
        # get the custom data
        return_value = item.config._test_results.get(item.nodeid, None)
        # write to file
        report = pathlib.Path('return_values.txt')
        with report.open('a') as f:
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")
Now store the data in tests:
def test_fizz(request):
    request.config._test_results[request.node.nodeid] = 'mydata'
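A hypothetical benchmark-style test then stores its metric under its own node id, and after the run return_values.txt contains one line per passed test:

def test_confusion_trace(request):
    trace = 0.97  # placeholder for the real metric computation
    request.config._test_results[request.node.nodeid] = trace

# return_values.txt afterwards contains something like:
# tests/test_benchmarks.py::test_confusion_trace returned 0.97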
I have to edit a Python file such that after every if condition, I need to add a line which says:
if condition_check:
    if self.debug == 1: print "COVERAGE CONDITION #8.3 True (condition_check)"
    # some other code
else:
    if self.debug == 1: print "COVERAGE CONDITION #8.4 False (condition_check)"
    # some other code
The numbers 8.3 and 8.4 (generally y.x) refer to the fact that this if condition is in function number 8 (y); the functions are just numbered sequentially, and there is nothing special about 8. x means it is the xth if condition in the yth function.
And of course, the added line has to be inserted with the proper indentation. The condition_check is the condition being checked.
For example:
if (self.order_in_cb):
    self.ccu_process_crossing_buffer_order()
becomes:
if (self.order_in_cb):
    if self.debug == 1: print "COVERAGE CONDITION #8.2 TRUE (self.order_in_cb)"
    self.ccu_process_crossing_buffer_order()
How do I achieve this?
EXTRA BACKGROUND:
I have about 1200 lines of Python code with about 180 if conditions, and I need to see whether every if condition is hit during the execution of 47 test cases.
In other words, I need to do code coverage. The complication is that I am working with cocotb stimulus for RTL verification. As a result, there is no direct way to drive the stimulus, so I don't see an easy way to use the standard coverage.py approach to measure coverage.
Is there some other way to check the coverage? I feel I am missing something.
If you truly can't use coverage.py, then I would write a helper function that uses inspect.stack to find the caller, then linecache to read the line of source, and log that way. Then you only have to change if something: to if condition(something): throughout your file, which should be fairly easy.
Here's a proof of concept:
import inspect
import linecache
import re

debug = True

def condition(label, cond):
    if debug:
        caller = inspect.stack()[1]
        line = linecache.getline(caller.filename, caller.lineno)
        condcode = re.search(r"if condition\(.*?,(.*)\):", line).group(1)
        print("CONDITION {}: {}".format(label, condcode))
    return cond

x = 1
y = 1
if condition(1.1, x + y == 2):
    print("it's two!")
This prints:
CONDITION 1.1: x + y == 2
it's two!
I have about 1200 lines of Python code with about 180 if conditions - I need to see if every if condition is hit during the execution of 47 test cases. In other words, I need to do code coverage. The complication is - I am working with cocotb stimulus for RTL verification.
Cocotb has support for coverage built in (docs)
export COVERAGE=1
# run cocotb however you currently invoke it
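Once the run has produced coverage data, coverage.py itself can report which lines were hit; a sketch, assuming the run left a .coverage data file in the working directory:

import coverage

cov = coverage.Coverage()
cov.load()                     # read the .coverage data file
cov.report(show_missing=True)  # per-file summary listing the missed line numbers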
I used the following code, based on the information given on help.autodesk.com, for executing MAXScript in Python:
import MaxPlus

test = MaxPlus.FPValue()
# The target node has only one morpher and I want to retrieve it using .morpher[1]
bool = MaxPlus.Core.EvalMAXScript("WM3_MC_BuildFromNode $node.morpher[1] 1 $target", test)
print bool
If I print the boolean, it always prints "false". However, the following code works (i.e. the print statement returns true):
import MaxPlus
test = MaxPlus.FPValue()
#The target node has only one morpher
bool = MaxPlus.Core.EvalMAXScript("WM3_MC_BuildFromNode $node.morpher 1 $target", test)
print bool
However I cannot use the latter code since it must be possible in my code that a node has multiple morphers.
Is there a better way using the Python API for MAXScript (I didn't find a method), or can anyone suggest how the first code can be improved?
Thank you
The solution to my problem is:
MaxPlus.Core.EvalMAXScript("WM3_MC_BuildFromNode (for mod in $node.modifiers where isKindOf mod Morpher collect mod)[1] 3 $target", test)
This solution was found by Swordslayer on the Autodesk forum for 3ds Max.
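A hypothetical generalization for nodes with several Morpher modifiers, substituting the desired (1-based) morpher index into the MAXScript string (untested sketch, Python 2 as used inside 3ds Max):

import MaxPlus

def build_from_node(index, target_channel):
    script = ("WM3_MC_BuildFromNode "
              "(for mod in $node.modifiers where isKindOf mod Morpher collect mod)[%d] "
              "%d $target" % (index, target_channel))
    result = MaxPlus.FPValue()
    return MaxPlus.Core.EvalMAXScript(script, result)

print build_from_node(1, 3)  # first morpher, target channel 3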
I'm mildly confused. I'm testing a Django application with Python's unittest library. All of a sudden, after my tests had been running with 100% success for some minutes, an error appeared. Ok, I thought, I must have just added some stupid syntax error. I started looking at the test and then at my code; I then tried to print out the results that are being compared with assertEqual, before they are compared. Suddenly, if I do that, the test passes!!! :o
Why is this? Has anyone experienced this before? I swear, the only change I made was adding a print statement inside my test function. I'll post the function before and after.
Before (Fails)
def test_swap_conditionals(self):
    """
    Test conditional template keys
    """
    testStr = "My email is: {?email}"
    swapStr = self.t.swap(testStr)

    # With email
    self.assertEqual(swapStr, "My email is: john@baconfactory.com")

    # Without email
    self.t.template_values = {"phone" : "00458493"}
    swapStr = self.t.swap(testStr)
    self.assertEqual(swapStr, "My email is: ")
After (Success)
def test_swap_conditionals(self):
    """
    Test conditional template keys
    """
    testStr = "My email is: {?email}"
    swapStr = self.t.swap(testStr)
    print(swapStr)  # diff here

    # With email
    self.assertEqual(swapStr, "My email is: john@baconfactory.com")

    # Without email
    self.t.template_values = {"phone" : "00458493"}
    swapStr = self.t.swap(testStr)
    self.assertEqual(swapStr, "My email is: ")
It looks like there is some external reason.
What you can check:
Rerun the test several times under the same conditions. Is it always failing, or always passing? Or is it a 'flipper' test? That might be caused by timing issues (although this is unlikely).
Put the test in its own class, so there are no side effects from other unit tests.
If the test passes when run in its own test class, the reason is a side effect (see the sketch after this list) from:
other unit tests
startup/teardown functionality
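As a hypothetical illustration of the side-effect case: any mutable object shared between tests, for example via a module-level variable or a class attribute, makes the outcome depend on execution order.

import unittest

shared_template_values = {"email": "john@example.com"}  # module-level state

class TemplateTests(unittest.TestCase):
    def test_a(self):
        shared_template_values.pop("email")  # mutates the shared dict

    def test_b(self):
        # passes or fails depending on whether test_a ran first
        self.assertIn("email", shared_template_values)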
Ok, well, embarrassing, but this was completely my fault. The swap function was looking up every conditional template variable on the line and then iterating over that list one conditional template variable at a time, so it either missed keys it already had, or it got lucky and happened to hit the right key.
Example
line: "This is my {?email}, and this is my {?phone}"
finds:
[{?email}, {?phone}]
iterates over [{?email}, {?phone}]
1. {?email}
key being compared = phone : '00549684'
It has phone as a key but it completely disregards it and does not swap it out because
it's just holding {?email} so it simply returns "".
I'm sincerely sorry to have wasted your time here. Thanks for the good answers. I am refactoring the code now for the better, and definitely taking a coffee break :D