I am aware of the ansible -vvv option, but I don't want to see verbose output for all commands; I am only interested in seeing details when a task fails.
How can I achieve this?
PS. Please provide a solution that scales; having to edit each task would not make sense.
I think there is only one way: edit the default callback plugin (or write your own callback plugin), which you will find here (by default):
site-packages/ansible/plugins/callback/default.py
See line 40 in
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/callback/default.py
and just change the if condition accordingly.
For example, replace lines 40-47 with:
msg = "An exception occurred during task execution. The full traceback is:\n" + result._result['exception']
self._display.display(msg, color=C.COLOR_ERROR)
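Alternatively, if you would rather not patch the installed file, a custom stdout callback that subclasses the default one can do the same thing. Here is a minimal sketch, assuming a callback_plugins/ directory next to your playbook and stdout_callback = verbose_on_fail in ansible.cfg (the plugin name and file layout are just examples):

# callback_plugins/verbose_on_fail.py
import json

from ansible.plugins.callback.default import CallbackModule as DefaultCallback


class CallbackModule(DefaultCallback):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'verbose_on_fail'

    def v2_runner_on_failed(self, result, ignore_errors=False):
        # Dump the full task result, but only for failed tasks.
        self._display.display(json.dumps(result._result, indent=2))
        super(CallbackModule, self).v2_runner_on_failed(result, ignore_errors=ignore_errors)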
When talking about pytest, we know two things:
When a test passes, no output is given in principle.
Sometimes assertion failures can have very cryptic messages.
I took a course that solved this by using print to clarify the desired outputs and calling pytest as pytest -v -s. I think it is a great solution.
Another developer in my company thinks that test code should be as free of "side effects" as possible (and considers prints a side effect). He suggests outputting to a file, which I think is not good practice (to me, that is itself an undesirable side effect).
So I would like to hear about this from other developers.
How do you solve the two points given in the beginning and do you use prints in your tests?
As someone already pointed out, you can provide your own assert message:
def test_something():
    i = 2
    assert i == 1, "i should be equal to one"
There is really no difference between using assert messages and prints, except that only the assert message appears in the failure summary of the pytest report, while stdout calls show up separately as captured output:
In this case 0-9 would be printed in the pytest report:
def test_something():
    i = 2
    for i in range(10):
        print(i)
    assert i == 1
Logging everything to a file would definitely make working with pytest harder, and would be a pain to debug if your tests fail in CI.
If you need descriptive messages I would prefer using assert messages and, maybe, prints for debug information.
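For instance, the debug detail can live in the assert message itself, so it only shows up when the test fails (the data here is made up for illustration):

def test_user_count():
    users = ["alice", "bob"]  # hypothetical data under test
    assert len(users) == 3, f"expected 3 users, got {len(users)}: {users}"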
Using print() in your tests is not a good solution; you need to see the data in the CLI or in a pipeline.
For assertions, you can provide custom messages, whether the assertion passes or fails, or even when raising an exception.
Here is a basic tutorial for this:
https://docs.pytest.org/en/7.1.x/how-to/assert.html
For general test steps, the best way to get the information is logging, at different levels:
import logging

# Use a named logger rather than the root logging module directly.
logger = logging.getLogger(__name__)

logger.info('what info you want to share')
logger.error('what info you want to share')
logger.debug('what info you want to share')
For more info you can check this:
https://docs.python.org/3/howto/logging.html
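pytest captures log records for you, so the messages are shown for failing tests (or live, with --log-cli-level). A minimal sketch using the built-in caplog fixture; login() here is a made-up function standing in for whatever you are testing:

import logging

logger = logging.getLogger(__name__)


def login(user):
    # Stand-in for the code under test.
    logger.info("logging in %s", user)
    return True


def test_login_logs(caplog):
    with caplog.at_level(logging.INFO):
        assert login("alice"), "login should succeed for a known user"
    assert "logging in alice" in caplog.text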
I am currently testing a Click CLI application and get result.exit_code == 2. Why does that happen?
This appears to indicate a usage error:
An internal exception that signals a usage error. This typically aborts any further handling.
This is consistent with Click's own tests, e.g.
https://github.com/pallets/click/blob/123dd717439d8620d8d6be5574d2c9f007952326/tests/test_arguments.py#L82
https://github.com/pallets/click/blob/123dd717439d8620d8d6be5574d2c9f007952326/tests/test_arguments.py#L190
https://github.com/pallets/click/blob/123dd717439d8620d8d6be5574d2c9f007952326/tests/test_arguments.py#L201
https://github.com/pallets/click/blob/123dd717439d8620d8d6be5574d2c9f007952326/tests/test_formatting.py#L157
https://github.com/pallets/click/blob/123dd717439d8620d8d6be5574d2c9f007952326/tests/test_formatting.py#L177
https://github.com/pallets/click/blob/123dd717439d8620d8d6be5574d2c9f007952326/tests/test_formatting.py#L193
I ran
result = runner.invoke(cli, ['sync'])
instead of
result = runner.invoke(cli, ['--debug', 'sync'])
So if you use @click.option, you need to specify the flag as it would be entered on the CLI, not just pass the parameters consumed by the function.
Additionally, I made a typo in one of the flags.
How to debug
Look at the parameters you pass to runner.invoke (simplest: print them)
Execute it via CLI (e.g. cli(['--debug', 'sync']))
In my case this gave me the message
Error: no such option: --sync Did you mean --syncs?
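To make the pattern concrete, here is a minimal sketch (the group, command, and option names are illustrative, not the asker's actual code):

import click
from click.testing import CliRunner


@click.group()
@click.option('--debug', is_flag=True, help="Enable debug output.")
def cli(debug):
    pass


@cli.command()
def sync():
    click.echo("syncing")


def test_sync_with_flag():
    runner = CliRunner()
    # The flag must be spelled exactly as it would be on the command line.
    result = runner.invoke(cli, ['--debug', 'sync'])
    assert result.exit_code == 0, result.output


def test_mistyped_flag_is_a_usage_error():
    runner = CliRunner()
    result = runner.invoke(cli, ['--bebug', 'sync'])  # deliberate typo
    assert result.exit_code == 2  # Click reports usage errors with exit code 2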
In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies.
I'm not sure how to achieve this. I need expert advice to move ahead.
Until a SKIP status is implemented, you can use exitonfailure to stop further execution once a critical test fails, and then change the output.xml (and the generated report/log) to show those tests as "NOT_RUN" (gray) rather than "FAILED" (red).
Here's an example (Tested on RobotFramework 3.1.1 and Python 3.6):
First create a new class that extends the abstract class ResultVisitor:
import os

from robot.api import ExecutionResult, ResultVisitor, ResultWriter, logger


class ResultSkippedAfterCritical(ResultVisitor):
    def visit_suite(self, suite):
        suite.set_criticality(critical_tags='Critical')
        for test in suite.tests:
            if test.status == 'FAIL' and "Critical failure occurred" in test.message:
                test.status = 'NOT_RUN'
                test.message = 'Skipping test execution after critical failure.'
Assuming you've already created the suite (for example with TestSuiteBuilder()), run it without creating report.html and log.html:
outputDir = suite.name.replace(" ", "_")
outputFile = "output.xml"
logger.info(f"Running Test Suite: {suite.name}", also_console=True)
result = suite.run(output=outputFile, outputdir=outputDir,
                   report=None, log=None, critical='Critical', exitonfailure=True)
Notice that I've used "Critical" as the identifying tag for critical tests, together with the exitonfailure option.
Then, revisit the output.xml, and create report.html and log.html from it:
revisitOutputFile = os.path.join(outputDir, outputFile)
logger.info(f"Checking skipped tests in {revisitOutputFile} due to critical failures", also_console=True)
result = ExecutionResult(revisitOutputFile)
result.visit(ResultSkippedAfterCritical())
result.save(revisitOutputFile)
reportFile = 'report.html'
logFile = 'log.html'
logger.info(f"Generating {reportFile} and {logFile}", also_console=True)
writer = ResultWriter(result)
writer.write_results(outputdir=outputDir, report=reportFile, log=logFile)
It should display all the tests after the critical failure with a grayed-out status of "NOT_RUN".
There is nothing you can do; Robot only supports two values for the test status: pass and fail. You can mark a test as non-critical so it won't break the build, but it will still show up in the logs and reports as having been run.
The robot core team has said they will not support this feature. See issue 1732 for more information.
Even though robot doesn't support the notion of skipped tests, you have the option to write a script that scans output.xml and removes tests that you somehow marked as skipped (perhaps by adding a tag to the test). You will also have to adjust the counts of the failed tests in the xml. Once you've modified the output.xml file, you can use rebot to regenerate the log and report files.
If you only need the change to be made in your log/report files, look at implementing a SuiteVisitor for the --prerebotmodifier option (a rough sketch follows below). As stated by Bryan Oakley, this might skew your pass/fail counts if you don't keep that in mind.
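Something along these lines, assuming the tests you want to gray out carry a tag such as depends-on-failed (the module, class, and tag names are only examples):

# prerebot_skip.py
from robot.api import SuiteVisitor


class MarkSkipped(SuiteVisitor):
    def visit_test(self, test):
        # Runs over the result model before logs/reports are written.
        if test.status == 'FAIL' and 'depends-on-failed' in test.tags:
            test.status = 'NOT_RUN'
            test.message = 'Not executed because a dependency failed.'

Run it with rebot --prerebotmodifier prerebot_skip.MarkSkipped output.xml (the same option also works with robot itself).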
Currently it doesn't seem to be possible to actually alter the test status before output.xml is created, but there are plans to implement that in RF 3.0, and there is a discussion about a skip status.
Another, more complex, solution would be to implement a listener for the --listener option that creates an output file that fits your needs (possibly alongside the original output.xml).
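A bare-bones illustration of that idea, using listener API version 2 (the file name and tag name are again just placeholders):

# dependency_listener.py
class DependencyListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.not_executed = []

    def end_test(self, name, attrs):
        # attrs['status'] and attrs['tags'] come from the listener interface.
        if attrs['status'] == 'FAIL' and 'depends-on-failed' in attrs['tags']:
            self.not_executed.append(name)

    def close(self):
        # Write a small side report next to the normal output files.
        with open('not_executed.txt', 'w') as f:
            f.write('\n'.join(self.not_executed))

Run it with robot --listener dependency_listener.DependencyListener tests/.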
There is also the possibility to set tags during test execution, but I'm not familiar with that yet, so I can't really say much about it at the moment. That might be another way to account for those dependency failures, as there are options to ignore certain tagged keywords in the log/report generation.
I solved it this way:
Run Keyword If    ${blabla}==${True}    do-this-task    ELSE    Log To Console    ${PREV_TEST_STATUS}${yellow}| NRUN |
The test is not executed and is marked as NRUN.
Actually, you can set tags to run whatever tests you like (for sanity testing, regression testing, ...).
Just go to your test script configuration and set tags.
Then, whenever you want to run, go to the Run tab and select the checkbox Only run tests with these tags / Skip tests with these tags.
And click the Start button :) Robot Framework will select any test that matches and run it.
Does MicroFocus Cobol or any other, have a feature equivalent to Python's sys.settrace()?
The function passed as a parameter to such a tracing facility would be called after the execution of each line of source code.
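For reference, this is the kind of per-line hook sys.settrace provides in Python (a minimal sketch):

import sys


def trace(frame, event, arg):
    if event == 'line':
        print(f"executing {frame.f_code.co_filename}:{frame.f_lineno}")
    return trace  # returning the function keeps per-line tracing active


def work():
    x = 1
    x += 1
    return x


sys.settrace(trace)
work()
sys.settrace(None)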
It's not an exact equivalent, but you can use READY TRACE for debugging. Enable it with the TRACE compiler directive.
OpenCOBOL supports
-ftrace        Generate trace code
               - Executed SECTION/PARAGRAPH
-ftraceall     Generate trace code
               - Executed SECTION/PARAGRAPH/STATEMENTS
               - Turned on by -debug
as cobc command line options. This isn't quite the same as the Python point of view, but it outputs a trace line on entry to sections, paragraphs and sentences when enabled. No doubt other compilers will have something equivalent. Along with READY TRACE, there are other debugging features such as >>D debugging lines and those allowed with DECLARATIVES. http://opencobol.add1tocobol.com/#declaratives
procedure division.
declaratives.
handle-errors section.
use after standard error procedure on filename-1.
handle-error.
display "Something bad happened with " filename-1 end-display.
.
helpful-debug section.
use for debugging on main-file.
help-me.
display "Just touched " main-file end-display.
.
end declaratives.
I am using the following call for executing the 'aspell' command on some strings in Python:
r, w, e = popen2.popen3("echo " + str(m[i]) + " | aspell -l")
I want to test the success of the command by looking at the stdout file object r. If there is no output, the command was successful.
What is the best way to test that in Python?
Thanks in advance.
Best is to use the subprocess module of the standard Python library; see its documentation -- popen2 is old and not recommended.
Anyway, in your code, if r.read(1): is a fast way to test if there's any content in r (if you don't care about what that content might specifically be).
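For instance, a minimal sketch with subprocess, assuming the intent of aspell -l is to list the misspelled words read from stdin ("aspell list" is the spelled-out form of that mode; Python 3):

import subprocess


def has_spelling_errors(text):
    # Feed the text on stdin instead of building a shell pipeline with echo.
    proc = subprocess.run(
        ["aspell", "list"],
        input=text,
        capture_output=True,
        text=True,
    )
    # "aspell list" prints only the misspelled words, so empty output means success.
    return bool(proc.stdout.strip())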
Why don't you use aspell -a?
You could use subprocess as indicated by Alex, but keep the pipe open. Follow the directions for using the pipe API of aspell, and it should be pretty efficient.
The upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.
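A rough sketch of that approach, keeping one long-lived aspell -a process open (the protocol details follow the ispell/aspell pipe interface; double-check them against your aspell version):

import subprocess

# Start one aspell process in pipe ("ispell -a compatible") mode and keep it open.
aspell = subprocess.Popen(
    ["aspell", "-a"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,                      # line-buffered
)
aspell.stdout.readline()            # consume the version banner printed at startup


def check_word(word):
    aspell.stdin.write(word + "\n")
    aspell.stdin.flush()
    lines = []
    while True:
        line = aspell.stdout.readline().rstrip("\n")
        if not line:                # a blank line ends the response for this input
            break
        lines.append(line)
    # "*" means the word is correct; "&" or "#" lines describe misspellings.
    return lines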