WARN Test results in Robot Framework - python

I have multiple test cases that need to be run. With every release I get some new test cases as well as some old ones.
My problem concerns the old test cases that fail: for those I have already created a problem ticket in JIRA, and in the next release that ticket number is added to the [Documentation] field of the .robot file.
What I want is that on the next release, if the bug has already been raised in JIRA (meaning the [Documentation] section of the test contains the ticket number) and the test fails again, it is labelled as WARN in yellow instead of being marked as a failure.
I have searched a lot and found this thread (a GitHub issue), but according to it this can't be done. Is there any other way?

You will need to process the Documentation of each test to see whether it contains a JIRA issue number and, if so, add a tag to it (for example TagName). When launching the tests with Robot Framework 4.0, pass the option --skiponfailure TagName, and those tests will be marked as SKIPPED instead of FAILED.
The Documentation parsing would need to happen before the actual tests run (in a helper test run).
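One way to sketch that preprocessing step without a separate helper run is a pre-run modifier; this is only an illustration, and the file name, class name and ticket pattern below are assumptions, with TagName reused from above.

# prerun_tagger.py: a sketch of a pre-run modifier that tags tests whose
# [Documentation] mentions a JIRA-style ticket number (e.g. PROJ-1234).
# Run with something like:
#   robot --prerunmodifier prerun_tagger.TagKnownIssues --skiponfailure TagName tests/
import re

from robot.api import SuiteVisitor


class TagKnownIssues(SuiteVisitor):

    def visit_test(self, test):
        # Add the tag only when the documentation contains a ticket reference.
        if re.search(r'\b[A-Z][A-Z0-9]+-\d+\b', test.doc or ''):
            test.tags.add('TagName')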

Related

pytest-testrail rerun publishing

I am using pytest-testrail to publish some Python test cases to TestRail. I have a couple of test cases that are flaky and use @pytest.mark.flaky in order to rerun the test cases that fail. After the rerun, some test cases will pass (meaning the case failed once and passed on the rerun), but pytest-testrail will publish the failed run on top (meaning the test case will be marked as failed). I can only think of two ways to fix this: either find a way to publish the passed run first, or find a way to publish only the passed run. I do not know how I could do either of these. Overall, I would like the test case to be published as passed if one of the reruns succeeds.
TestRail supports two relevant methods for working with test results via the API:
get_results_for_case (to read the existing, failed result for a case)
add_result_for_case (to add a new, passed result for the same case)
I hope this helps.
Reference:
setup for Python: http://docs.gurock.com/testrail-api2/bindings-python
API: http://docs.gurock.com/testrail-api2/reference-results
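With the standard Python binding from those docs, the flow roughly looks like the sketch below; the URL, credentials and ids are placeholders, and status_id 1 is TestRail's built-in Passed status.

# Sketch only: placeholders everywhere, using the official testrail.py binding.
from testrail import APIClient

client = APIClient('https://yourcompany.testrail.io')
client.user = 'user@example.com'
client.password = 'your_api_key'

run_id, case_id = 1, 42  # placeholder ids

# Inspect what is currently recorded for the case in this run.
results = client.send_get('get_results_for_case/{}/{}'.format(run_id, case_id))

# If the rerun passed, record a passing result on top (status_id 1 = Passed).
client.send_post('add_result_for_case/{}/{}'.format(run_id, case_id),
                 {'status_id': 1, 'comment': 'Passed on rerun'})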

Is there a way to change the color of a keyword run in the log file, even if that keyword was successful?

So, for my robotframework project, I have been working on a way to know if a given action in my test case has taken "longer than it should"; this is to say that, bearing in mind results of previous executions, I can have an idea of how long an action normally takes, and, in case of an action surpassing that time, I want that fact to be pointed out in my log file.
To do this, I have used the DateTime Library to compare times, and then, if the time difference between the beginning of the test case and the end of an action is longer than X seconds, a WARN-level log is printed, both to the console and to the log file.
However, in that log file, every single keyword that was executed successfully appears in green, as in the screenshot.
My question is: if that soft timeout is exceeded in a given keyword, is it possible to have that particular keyword appear in any colour other than green, so as to:
make soft timeouts easier to spot
simplify the process of sharing these results with non-technical management staff?
I don't want to use the [Timeout] setting that is native to Robot Framework, as I don't want my tests to fail because of this kind of timeout. I just want a simple way of identifying potential performance or temporary network issues using my Robot Framework scripts.
In Robot Framework there are only two statuses, PASS and FAIL, and these are what drive the red/green colors. This is in line with the nature of the tool: it validates situations which in general should either pass or fail.
As soon as the performance of a keyword or test case becomes interesting, I think it should no longer be part of the test suite itself. It's not difficult to compare the timing of a single keyword or test case against a baseline. But what if you want to compare results over a longer period of time, or compare one environment against another? Then this setup won't work. For this reason I'd recommend loading the results into a reporting or BI tool and creating performance reports there.
If that is not possible, then I would probably use test case tags and test case messages to mark the tests that exceeded their baseline. In the example below there are two test cases; the first one passes but exceeds the set baseline. Through the report you can then filter on the tag and use the test message to present a meaningful error.
*** Settings ***
Library           DateTime
Test Setup        Setup Metrics
Test Teardown     Check Metrics

*** Variables ***
&{baseline}       TC - Pass=1    TC - Fail=2

*** Test Cases ***
TC - Pass
    Sleep    2s
    No Operation

TC - Fail
    Fail

*** Keywords ***
Setup Metrics
    ${tc_start}    Get Current Date    result_format=epoch
    Set Test Variable    ${tc_start}    ${tc_start}

Check Metrics
    ${tc_end}    Get Current Date    result_format=epoch
    ${tc_duration}    Subtract Time From Time    ${tc_end}    ${tc_start}
    Run Keyword If    ${tc_duration} > ${baseline['${TEST NAME}']}
    ...    Mark Test    ${tc_duration}    ${baseline['${TEST NAME}']}

Mark Test
    [Arguments]    ${duration}    ${baseline}
    Set Tags    Time Out
    ${difference}    Subtract Time From Time    ${duration}    ${baseline}
    Set Test Message    The duration of ${duration} sec exceeded the baseline by ${difference} sec    append=True
Not in the way you describe. The keyword that is green is Run Keyword If, and it has been executed successfully.
A possible workaround:
Use the keyword Run Keyword And Continue On Failure or Run Keyword And Ignore Error (I can't say which one exactly) in the following way:
Run Keyword And Continue On Failure    Should Be True    ${timeDiff} <= ${maxTimeout}
So, if the expression ${timeDiff} <= ${maxTimeout} is not true, then Should Be True fails and will be marked red in the report. However, you'll need to expand the report down to this keyword because it is not at the top level.

automation of re-enabling a pytest

In my testing platform, if a test starts to fail we create a bug report for it and mark the test as xfail or skipif using pytest decorators. In the reason parameter we always include the bug report number. Once a new release comes out that is supposed to fix one of the bugs, we have to manually go in and re-enable the test. Manual work doesn't get done 100% of the time, though.
Is there any way to automate the removal of a pytest xfail or skipif marker, assuming that with each release I will get a list of bug report numbers that are "fixed"?
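One possible approach is a conftest.py hook that drops those markers at collection time; this is only a rough sketch, and the FIXED_BUGS set, the ticket ids and the assumption that the bug id appears in the reason string are all illustrative.

# conftest.py: a rough sketch, not an established pytest feature.
import pytest

FIXED_BUGS = {"BUG-123", "BUG-456"}  # hypothetical list supplied per release


def pytest_collection_modifyitems(config, items):
    for item in items:
        for marker_name in ("xfail", "skipif"):
            marker = item.get_closest_marker(marker_name)
            if marker and any(bug in marker.kwargs.get("reason", "")
                              for bug in FIXED_BUGS):
                # Drop the marker so the test runs and is reported normally again.
                item.own_markers = [m for m in item.own_markers if m is not marker]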

Get the results of previous tests

Is it possible to get the results of previous tests in pytest? Or to skip a test if certain other tests have failed?
for example:
@pytest.mark.skipif(test_failed("test1", "test2"), reason="Reason")
def test_the_unknown():
    pass
There is nothing built-in for that kind of behavior at the moment, but it's possible to create tools for that as plugins. If you are up to it, please get in touch on GitHub, the mailing list, or the IRC channels.
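As an illustration of the plugin idea, a conftest.py can record failures during the session and skip dependent tests; the depends_on marker name and the module-level set are inventions for this sketch, and it only works when the prerequisite tests run first in the same session.

# conftest.py: a sketch of a tiny "skip if a prerequisite failed" plugin.
import pytest

_failed = set()


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        _failed.add(item.name)  # remember which tests failed in this session


def pytest_runtest_setup(item):
    marker = item.get_closest_marker("depends_on")
    if marker and _failed.intersection(marker.args):
        pytest.skip("a prerequisite test failed: " + ", ".join(marker.args))

A test would then be written with @pytest.mark.depends_on("test1", "test2") instead of the skipif call from the question.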

why would a django test fail only when the full test suite is run?

I have a test in Django 1.5 that passes in these conditions:
when run by itself in isolation
when the full TestCase is run
when all of my app's tests are run
But it fails when the full test suite is run with python manage.py test. Why might this be happening?
The aberrant test uses django.test.Client to POST some data to an endpoint, and then the test checks that an object was successfully updated. Could some other app be modifying the test client or the data itself?
I have tried some print debugging and I see all of the data being sent and received as expected. The specific failure is a does-not-exist exception that is raised when I try to fetch the to-be-updated object from the db. Strangely, in the exception handler itself, I can query for all objects of that type and see that the target object does in fact exist.
Edit:
My issue was resolved when I found that I was querying for the target object by id and User and not id and UserProfile, but it's still confusing to me that this would work in some cases but fail in others.
I also found that the test would fail with python manage.py test auth <myapp>
It sounds like your problem does not involve mocks, but I just spent all day debugging an issue with similar symptoms, and your question is the first one that came up when I was searching for a solution, so I wanted to share my solution here, just in case it will prove helpful for others. In my case, the issue was as follows.
I had a single test that would pass in isolation, but fail when run as part of my full test suite. In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch
...

def test_stuff(self):
    ...
    with patch('django.core.mail.send_mail') as mocked_send_mail:
        ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when run with other tests in the suite. The reason this fails is that when it runs as part of the suite other views are called beforehand, causing the views.py file to be loaded, causing send_mail to be imported before I get the chance to patch it. So when send_mail gets called in my view, it is the actual send_mail that gets called, not my patched version. When I run the test alone, the function gets mocked before it is imported, so the patched version ends up getting imported when views.py is loaded. This situation is described in the mock documentation, which I had read a few times before, but now understand quite well after learning the hard way...
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
    ...
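Put together, a self-contained version of that kind of test could look like the sketch below; the URL, form field and class name are made up, and it assumes myapp/views.py does "from django.core.mail import send_mail" and calls it inside the view.

# Sketch only: names are illustrative, not from the original post.
from mock import patch

from django.test import TestCase


class ContactViewTests(TestCase):

    def test_view_sends_mail(self):
        # Patch the reference already imported into myapp.views, not the
        # original django.core.mail.send_mail.
        with patch('myapp.views.send_mail') as mocked_send_mail:
            self.client.post('/contact/', {'message': 'hello'})
        self.assertTrue(mocked_send_mail.called)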
Try this to help you debug:
./manage.py test --reverse
In my case I realised that one test was updating certain data which would cause the following test to fail.
Another possibility is that you've disconnected signals in the setUp of a test class and did not re-connect in the tearDown. This explained my issue.
There is a lot of nondeterminism that can come from tests that involve the database.
For instance, most databases don't offer deterministic selects unless you do an ORDER BY. This leads to strange behavior where, when the stars align, the database returns things in a different order than you might expect, and tests that look like
result = pull_stuff_from_database()
assert result[0] == 1
assert result[1] == 2
will fail because result[0] == 2 and result[1] == 1.
Another source of strange nondeterministic behavior is id autoincrement combined with sorting of some kind.
Let's say each test creates two items and you sort by item name before you do assertions. When you run it by itself, "Item 1" and "Item 2" work fine and pass the test. However, when you run the entire suite, one of the tests generates "Item 9" and "Item 10". "Item 10" sorts ahead of "Item 9", so your test fails because the order is flipped.
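In both cases the usual fix is to make the expected order explicit; a minimal sketch with a hypothetical Item model:

# Sketch only: Item is a made-up model with a name field.
from django.test import TestCase

from myapp.models import Item


class OrderingTests(TestCase):

    def test_items_come_back_in_a_known_order(self):
        first = Item.objects.create(name="Item 1")
        second = Item.objects.create(name="Item 2")
        # Explicit ordering instead of relying on the database default.
        items = Item.objects.order_by("id")
        self.assertEqual(list(items), [first, second])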
So I first read elethan's answer and went "well, this is certainly not my problem, I'm not patching anything". But it turned out I was indeed patching a method in a different test suite, which stayed permanently patched for the rest of the test run.
I had something of this sort going on:
send_comment_published_signal_mock = MagicMock()
comment_published_signal.send = send_comment_published_signal_mock
You can see why this would be a problem if some things are not cleaned up after running the test suite. The solution in my case was to use a with statement to restrict the scope of the patch.
signal = 'authors.apps.comments.signals.comment_published_signal.send'
with patch(signal) as comment_published_signal_mock:
    do_your_test_stuff()
That is the easy part, though, once you know where to look. The guilty test could come from anywhere. The way to find it is to run the failing test together with other tests until you find the cause, progressively narrowing it down module by module.
Something like:
./manage.py test A C.TestModel.test_problem
./manage.py test B C.TestModel.test_problem
./manage.py test D C.TestModel.test_problem
Then recurse; if, for example, B is the problem child:
./manage.py test B.TestModel1 C.TestModel.test_problem
./manage.py test B.TestModel2 C.TestModel.test_problem
./manage.py test B.TestModel3 C.TestModel.test_problem
This answer gives a good explanation of all this.
It is in the context of Django, but really applies to any Python tests.
Good luck.
This was happening to me too.
When running the tests individually they passed, but running all tests with ./manage.py test it failed.
The problem in my case was that I had some tests inheriting from unittest.TestCase instead of django.test.TestCase, so some tests were failing because there were records left in the database from previous tests.
After making all tests inherit from django.test.TestCase this problem has gone away.
I found the answer on https://stackoverflow.com/a/436795/6490637
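For illustration, the difference between the two base classes boils down to the sketch below (the model and class names are made up): django.test.TestCase wraps each test in a transaction that is rolled back afterwards, so data created in one test cannot leak into the next.

# Sketch only: Item is a hypothetical model.
import unittest

from django.test import TestCase

from myapp.models import Item


class LeakyTests(unittest.TestCase):

    def test_creates_item(self):
        # This row stays in the test database and can break later tests.
        Item.objects.create(name="left behind")


class IsolatedTests(TestCase):

    def test_creates_item(self):
        # The surrounding transaction is rolled back after the test,
        # so this row never leaks into other tests.
        Item.objects.create(name="cleaned up")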
