In Robot Framework, the execution status of each test case can be either PASS or FAIL. However, I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies.
I'm not sure how to achieve this. I need expert advice to move ahead.
Until a SKIP status is implemented, you can use exitonfailure to stop further execution when a critical test fails, and then post-process output.xml (and the generated report/log) so those tests show as "NOT_RUN" (gray) rather than "FAILED" (red).
Here's an example (Tested on RobotFramework 3.1.1 and Python 3.6):
First, create a new class that extends the abstract class ResultVisitor:

from robot.api import ResultVisitor

class ResultSkippedAfterCritical(ResultVisitor):
    def visit_suite(self, suite):
        suite.set_criticality(critical_tags='Critical')
        for test in suite.tests:
            if test.status == 'FAIL' and "Critical failure occurred" in test.message:
                test.status = 'NOT_RUN'
                test.message = 'Skipping test execution after critical failure.'
Assuming you've already created the suite (for example with TestSuiteBuilder()), run it without creating report.html and log.html:
from robot.api import logger  # suite was built earlier, e.g. with robot.api.TestSuiteBuilder

outputDir = suite.name.replace(" ", "_")
outputFile = "output.xml"
logger.info(f"Running Test Suite: {suite.name}", also_console=True)
result = suite.run(output=outputFile, outputdir=outputDir,
                   report=None, log=None, critical='Critical', exitonfailure=True)
Notice that I've used "Critical" as the identifying tag for critical tests, and the exitonfailure option.
Then, revisit the output.xml, and create report.html and log.html from it:
import os

from robot.api import ExecutionResult
from robot.reporting import ResultWriter

revisitOutputFile = os.path.join(outputDir, outputFile)
logger.info(f"Checking skipped tests in {revisitOutputFile} due to critical failures", also_console=True)
result = ExecutionResult(revisitOutputFile)
result.visit(ResultSkippedAfterCritical())
result.save(revisitOutputFile)

reportFile = 'report.html'
logFile = 'log.html'
logger.info(f"Generating {reportFile} and {logFile}", also_console=True)
writer = ResultWriter(result)
writer.write_results(outputdir=outputDir, report=reportFile, log=logFile)
It should display all the tests after the critical failure with a grayed-out "NOT_RUN" status.
There is nothing you can do: Robot Framework only supports two values for the test status, PASS and FAIL. You can mark a test as non-critical so it won't break the build, but it will still show up in logs and reports as having been run.
The robot core team has said they will not support this feature. See issue 1732 for more information.
Even though robot doesn't support the notion of skipped tests, you have the option to write a script that scans output.xml and removes tests that you somehow marked as skipped (perhaps by adding a tag to the test). You will also have to adjust the counts of the failed tests in the xml. Once you've modified the output.xml file, you can use rebot to regenerate the log and report files.
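As a rough illustration of that approach (a sketch only: the 'skipped' tag is a made-up marker for tests you want to drop, and rebot recomputes the report statistics from whatever tests remain):

import subprocess

from robot.api import ExecutionResult, ResultVisitor

class RemoveSkippedTests(ResultVisitor):
    # Drop every test carrying the (hypothetical) 'skipped' tag from the result model
    def end_suite(self, suite):
        suite.tests = [t for t in suite.tests if 'skipped' not in t.tags]

result = ExecutionResult('output.xml')
result.visit(RemoveSkippedTests())
result.save('output_cleaned.xml')

# Regenerate log and report from the modified output with rebot
subprocess.run(['rebot', '--output', 'NONE', '--log', 'log.html',
                '--report', 'report.html', 'output_cleaned.xml'], check=True)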
If you only need the change to be made in your log/report files, you should take a look here for implementing a SuiteVisitor for the --prerebotmodifier option. As stated by Bryan Oakley, this might screw up your pass/fail count if you don't keep that in mind.
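For example, a minimal --prerebotmodifier could look like the sketch below (the 'not_ready' tag and the decision to flip such failures to PASS are assumptions; flipping the status is exactly what distorts the pass/fail count mentioned above):

from robot.api import SuiteVisitor

class MarkNotExecuted(SuiteVisitor):
    # Pre-rebot modifier: relabel failures tagged with a hypothetical 'not_ready' tag
    def start_test(self, test):
        if test.status == 'FAIL' and 'not_ready' in test.tags:
            test.status = 'PASS'
            test.message = 'NOT EXECUTED (dependency failed)'

It would be used as rebot --prerebotmodifier MarkNotExecuted.py output.xml (a modifier can be given as a file path when the class name matches the file name).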
Currently it doesn't seem to be possible to alter the test status before output.xml is created, but there are plans to implement this in RF 3.0, and there is an ongoing discussion about a skip status.
Another, more complex, solution would be to create your own output file by implementing a listener for the --listener option, one that writes an output file that fits your needs (possibly alongside the original output.xml).
There is also the possibility to set tags during test execution, but I'm not familiar with that yet, so I can't say much about it at the moment. It might be another way to account for those dependency failures, as there are options to exclude certain tags from log/report generation.
I solved it this way:
Run Keyword If    ${blabla}==${True}    do-this-task    ELSE    Log To Console    ${PREV_TEST_STATUS}${yellow}| NRUN |
The test is not executed and is marked as NRUN.
Actually, you can set tags to run whichever tests you like (for sanity testing, regression testing, ...).
Just go to your test script configuration and set tags.
Whenever you want to run, go to the Run tab and select the checkbox Only run tests with these tags / Skip tests with these tags.
Then click the Start button :) Robot Framework will select any test that matches and run it.
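If you run from the command line rather than from the RIDE GUI, the equivalent selection is done with the --include / --exclude options (the tag names here are only examples):

robot --include Sanity --exclude NotReady tests/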
Hi everyone, it's my first post here. I'm learning test automation with Python and I have a problem.
I have a test class, something like this:
test_scenario_1.py
class TestClass:
    @pytest.mark.run(order=1)
    @data(*getdatafromcsvfile("Logins.csv"))
    @unpack
    def test_1(self, logins):
        self.login(logins)

    def test_2(self):
        self.display_name()
Logins.csv
Logins
user_A
user_B
user_C
I'm using pytest, and when I run test_scenario_1.py from the terminal, test_1 is executed 3 times using all the rows from the CSV file, and only then test_2 runs. Is there any possibility to run the second test after each execution of test_1, rather than only after all three runs?
In simple words, I want to log in as each of the users (from the csv file) and display their names one by one.
Do you have any ideas about what I should do in this case?
Best practice is to always keep tests isolated and run them from a clean browser instance. Each test should launch a new browser, run the test, and then close the browser. That ensures that each test starts clean. What if one of your previous tests runs into an error or puts your site into a bad state? Then all the later tests run on that browser instance fail or are otherwise affected by the bad state and may give false positives or false negatives.
Just add the login to the second test. Without any details on what is in self.login(), it might not be worth having that as a test any more. Just have a single test that is...
def test_1(self, logins):
    self.login(logins)
    self.display_name()
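If the ddt-style decorators are not a hard requirement, the same interleaved flow can be had with pytest's own parametrization. This is a minimal sketch, assuming the Logins.csv layout shown above and a hypothetical logins_page fixture that exposes the login() and display_name() helpers from the original class:

import csv
import pytest

def read_logins(path="Logins.csv"):
    # Read the csv shown above, skipping the "Logins" header row
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)
        return [row[0] for row in reader]

@pytest.mark.parametrize("login_name", read_logins())
def test_login_and_display(login_name, logins_page):
    # Each parametrized run logs in as one user and immediately displays the name,
    # so login and display are interleaved per user instead of grouped per test.
    logins_page.login(login_name)
    logins_page.display_name()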
Please bear with me while I try to explain my predicament; I'm still a Python novice, so my terminology may not be correct. Also, I'm sorry for the inevitable long-windedness of this post, but I'll try to explain in as much relevant detail as possible.
A quick rundown:
I'm currently developing a suite of Selenium tests for a set of websites that are essentially the same in functionality, using py.test
Tests results are uploaded to TestRail, using the pytest plugin pytest-testrail.
Tests are tagged with the decorator @pytestrail.case(id) with a unique case ID
A typical test of mine looks like this:
@pytestrail.case('C100123')  # associates the function with the relevant TR case
@pytest.mark.usefixtures()
def test_login():
    # test code goes here
As I mentioned before, I'm aiming to create one set of code that handles a number of our websites with (virtually) identical functionality, so a hardcoded decorator in the example above won't work.
I tried a data driven approach with a csv and a list of the tests and their case IDs in TestRail.
Example:
website1.csv:
Case ID | Test name
C100123 | test_login
website2.csv:
Case ID | Test name
C222123 | test_login
The code I wrote would use the inspect module to find the name of the test running, find the relevant test ID and put that into a variable called test_id:
import csv
import inspect

class trp(object):
    def __init__(self):
        pass

    with open(testcsv) as f:  # testcsv could be website1.csv or website2.csv
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]

    def gettestcase(self):
        self.current_test = inspect.stack()[3][3]
        for row in trp.tests:
            if self.current_test == row[1]:  # row[1] is the test name column
                self.test_id = row[0]
                print(self.test_id)
                return self.test_id, self.current_test

    def gettestid(self):
        self.gettestcase()
The idea was that the decorator would change dynamically based on the csv that I was using at the time.
@pytestrail.case(test_id)  # now a variable
@pytest.mark.usefixtures()
def test_login():
    trp.gettestid()
    # test code goes here
So if I ran test_login for website1, the decorator would look like:
@pytestrail.case('C100123')
and if I ran test_login for website2 the decorator would be:
@pytestrail.case('C222123')
I felt mighty proud of coming up with this solution by myself and tried it out... it didn't work. While the code does work by itself, I would get an exception because test_id is undefined (I understand why: gettestcase is executed after the decorator, so of course it crashes).
The only other way I can handle this is to apply the csv and testIDs before any test code is executed. My question is - how would I know how to associate the tests with their test IDs? What would an elegant, minimal solution to this be?
Sorry for the long winded question. I'll be watching closely to answer any questions if you need more explanation.
pytest is very good at all kinds of metaprogramming for tests. If I understand your question correctly, the code below will do the dynamic test marking with the pytestrail.case marker. In the project root dir, create a file named conftest.py and place this code in it:
import csv

from pytest_testrail.plugin import pytestrail

with open('website1.csv') as f:
    reader = csv.reader(f)
    next(reader)
    tests = [r for r in reader]

def pytest_collection_modifyitems(items):
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))
Now you don't need to mark the test with @pytestrail.case() at all - just write the rest of the code and pytest will take care of the marking:
def test_login():
    assert True
When pytest starts, the code above will read website1.csv and store the test IDs and names just as you did in your code. Before the test run starts, the pytest_collection_modifyitems hook will execute, analyzing the collected tests - if a test has the same name as one in the csv file, pytest will add the pytestrail.case marker with the corresponding test ID to it.
I believe the reason this isn't working as you would expect has to do with how python reads and executes files. When python starts executing, it reads in the linked python file(s) and executes each line one by one, in turn. Things at the 'root' indentation level (function/class definitions, decorators, variable assignments, etc.) get run exactly once as they are loaded, and never again. In your case, the python interpreter reads in the pytest-testrail decorator, then the pytest decorator, and finally the function definition, executing each one once, ever.
(Side note, this is why you should never use mutable objects as function argument defaults: Common Gotchas)
Given that you want to first deduce the current test name, then associate that with a test case ID, and finally use that ID with the decorator, I'm not sure that is possible with pytest-testrail's current functionality. At least, not possible without some esoteric and difficult to debug/maintain hack using descriptors or the like.
I think you realistically have one option: use a different TestRail client and update your pytest structure to use the new client. Two python clients I can recommend are testrail-python and TRAW (TestRail Api Wrapper)(*)
It will take more work on your part to create the fixtures for starting a run, updating results, and closing the run, but I think in the end you will have a more portable suite of tests and better results reporting.
(*) full disclosure: I am the creator/maintainer of TRAW, and also made significant contributions to testrail-python
We run unit tests in Python that have previously been hard coded with information such as which server we want tests to run on. Instead, I'd like to pass that information to the test via command line argument. The problem is that using the Python unit testing framework, I'm stuck calling my custom parameters as a single parameter which is then caught by utrunner.py which assumes that the parameter is about which tests to run (regarding test discovery).
So running from IDEA I send out this command to start up the test suite:
C:\Users\glenp\AppData\Local\Programs\Python\Python36-32\python.exe C:\Users\glenp\.IntelliJIdea2016.3\config\plugins\python\helpers\pycharm\utrunner.py C:\Root\svn\trunk\src\test\python\test.py "server=deathStar language=klingon" true
This is the parameters that get read back to me from print(sys.argv):
['C:\\Users\\glenp\\.IntelliJIdea2016.3\\config\\plugins\\python\\helpers\\pycharm\\utrunner.py', 'C:\\Root\\svn\\trunk\\src\\test\\python\\schedulePollTest.py', 'server=deathStar language=klingon', 'true']
Note, I'm not actually calling my own test, I'm calling the utrunner.py with my test as one of the arguments to it.
I get a FileNotFound error: FileNotFoundError: [Errno 2] No such file or directory: 'server=deathStar language=klingon' which kills the test before I get to run it.
I think I need to modify either this:
if __name__ == "__main__":
    unittest.main()
or this:
class testThatWontRun(unittest.TestCase):
I COULD modify imp.py, which is throwing the error, but I happen to be on a team and modifying core Python functionality isn't going to scale well at all. (And everyone on the team will be sad)
So, is there a way to phrase my arguments in a way that utrunner.py (and imp.py) will ignore those parameters?
Yes, there is a way to get the utrunner.py to ignore the parameters: put a -- in front of the parameter you want it to ignore.
So server=deathStar becomes --server=deathStar.
Thank you rubber ducky :)
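For completeness, one way to consume such --key=value options inside the test module (a sketch only, not part of utrunner.py; it assumes the runner leaves the extra arguments in sys.argv):

import sys
import unittest

def get_option(name, default=None):
    # Look for a --name=value argument anywhere on the command line
    prefix = "--{}=".format(name)
    for arg in sys.argv:
        if arg.startswith(prefix):
            return arg[len(prefix):]
    return default

class TestServerOption(unittest.TestCase):
    def test_reads_server(self):
        # With --server=deathStar on the command line, this returns 'deathStar'
        server = get_option("server", default="localhost")
        self.assertIsNotNone(server)

if __name__ == "__main__":
    unittest.main()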
For example, I have file1.robot and file2.robot, and each has ${var} as a variable. Can I pass 2 different values to this same ${var} on the command line? Something like pabot -v var:one:two file1.robot file2.robot, where -v var:one:two would follow the order of the robot files; not by name, but by how they were introduced on the command line?
This solution is not 100% what you've asked for, but maybe you can make it work.
The pabot readme mentions a shared set of variables and acquiring a value set for each running process. The documentation was a bit unclear to me, but if you try the following example, you'll see for yourself. It's basically a pool of variable sets: each process can acquire a set from it, and when it's done, it can return the set back to the pool.
Create your value set valueset.dat
[Set1]
USERNAME=user1
PASSWORD=password1
[Set2]
USERNAME=user2
PASSWORD=password2
Create suite1.robot and suite2.robot. I've created 2 suites that are exactly the same; I just wanted to try to run 2 suites in parallel.
*** Settings ***
Library    pabot.PabotLib

*** Test Cases ***
Foobar
    ${valuesetname}=    Acquire Value Set
    Log    ${valuesetname}
    ${username}=    Get Value From Set    username
    Log    ${username}
    # Release Value Set
And then run the command pabot --pabotlib --resourcefile valueset.dat tests. If you check the HTML report, you'll see that one suite used Set1 and the other used Set2.
Hope this helps.
Cheers!
Another way is to use multiple argument files. One containing the first value for ${var} and the other containing the other.
This will execute the same test suite for both argument files.
pabot --argumentfile1 varone.args --argumentfile2 vartwo.args file.robot
=>
file.robot executed with varone.args
file.robot executed with vartwo.args
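For reference, the argument files themselves would just set the variable differently (file names taken from the command above; the values are only examples):

varone.args:
--variable var:one

vartwo.args:
--variable var:two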
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins, instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
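In context, that looks something like this (a minimal sketch; the test name and body are made up):

import pytest

def test_feature_with_known_bug():
    pytest.skip("Bug 1234: This does not work")
    # code that would exercise the feature goes here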
I had a similar problem except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but you could not tell which test, or which module, the skipped test was in.
To solve this (OK, hack around it), I went back to using the decorator on my test and added a dummy test (so there is 1 test that runs and 1 test that gets skipped):
@pytest.mark.skip(reason='SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if suite will display in jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.