We run unit tests in Python that have previously been hard-coded with information such as which server the tests should run on. Instead, I'd like to pass that information to the tests via command-line arguments. The problem is that, using the Python unit testing framework, I'm stuck passing my custom parameters as a single positional parameter, which is then caught by utrunner.py, which assumes the parameter says which tests to run (test discovery).
So running from IDEA I send out this command to start up the test suite:
C:\Users\glenp\AppData\Local\Programs\Python\Python36-32\python.exe C:\Users\glenp\.IntelliJIdea2016.3\config\plugins\python\helpers\pycharm\utrunner.py C:\Root\svn\trunk\src\test\python\test.py "server=deathStar language=klingon" true
These are the parameters that get read back to me from print(sys.argv):
['C:\\Users\\glenp\\.IntelliJIdea2016.3\\config\\plugins\\python\\helpers\\pycharm\\utrunner.py', 'C:\\Root\\svn\\trunk\\src\\test\\python\\schedulePollTest.py', 'server=deathStar language=klingon', 'true']
Note, I'm not actually calling my own test; I'm calling utrunner.py with my test as one of its arguments.
I get FileNotFoundError: [Errno 2] No such file or directory: 'server=deathStar language=klingon', which kills the run before my tests even start.
I think I need to modify either this:
if __name__ == "__main__":
    unittest.main()
or this:
class testThatWontRun(unittest.TestCase):
I COULD modify imp.py, which is throwing the error, but I happen to be on a team, and modifying core Python functionality isn't going to scale well at all. (And everyone on the team will be sad.)
So, is there a way to phrase my arguments so that utrunner.py (and imp.py) will ignore them?
Yes, there is a way to get utrunner.py to ignore the parameters: put a -- in front of each parameter you want it to ignore.
So server=deathStar becomes --server=deathStar.
Thank you rubber ducky :)
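As a follow-up, here is a minimal sketch of how the test module itself could pick those values back up, assuming it reads the --key=value flags out of sys.argv (the CONFIG dict and its defaults are invented for this example):

import sys
import unittest

# Hypothetical defaults, overridden by --key=value command-line arguments.
CONFIG = {'server': 'localhost', 'language': 'english'}

# Parse at import time so this also works when a runner such as
# utrunner.py imports this module rather than executing it directly.
remaining = [a for a in sys.argv if not (a.startswith('--') and '=' in a)]
for arg in sys.argv:
    if arg.startswith('--') and '=' in arg:
        key, _, value = arg[2:].partition('=')
        CONFIG[key] = value

class ServerTest(unittest.TestCase):
    def test_knows_which_server(self):
        # The real tests would use CONFIG['server'] to pick a host.
        self.assertIn('server', CONFIG)

if __name__ == "__main__":
    # Hand unittest only the arguments it understands.
    unittest.main(argv=remaining)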
I want to write test functions for my code and decided to use pytest. I had a look at this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, also written by me, so I made an example that reproduces the same problem but does not rely on my other code.
@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10

test_value(example)
When I run my script with this toy example, the print shows a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example_chunks" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help me, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run your script file with pytest file.py.
Fixtures will be resolved automatically by pytest.
In your example you run the code directly, so the fixture is just a plain function.
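Put together, the corrected toy example looks like this (run it with pytest -s file.py if you want to see the print output):

import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10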
I have a large Python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
1. Add a test.
2. Run the tests (one fails because I haven't written the code to make it pass).
3. Implement the behaviour.
4. Run only the test that failed last time.
5. Fix the silly error I made when implementing the code.
6. Run only the failing test, which passes this time.
7. Run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: find all occurrences of "test_" and replace them with "_test_", then find the one test that failed and remove its leading underscore.
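For instance, the vim substitution could look like this (afterwards, remove the underscore from the one test you want to keep active):

:%s/def test_/def _test_/g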
Just run the tests with the --last-failed option (this requires pytest rather than plain unittest).
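For example, after a failing run:

pytest --last-failed
pytest --lf             # short form of the same option

There is also pytest --ff (--failed-first), which runs the previously failing tests first and then everything else, which matches the final "run all the tests to find out what I broke" step nicely.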
For example, I have file1.robot and file2.robot, and each has ${var} as a variable. Can I pass two different values to this same ${var} on the command line? Something like pabot -v var:one:two file1.robot file2.robot, where -v var:one:two would follow the order of the robot files; not by name, but by the order in which they were introduced on the command line?
This solution is not 100% what you've asked for, but maybe you can make it work.
The pabot README mentions something about a shared set of variables and acquiring a set for each running process. The documentation was a bit unclear to me, but if you try the following example, you'll see for yourself. It's basically a pool of variable sets: each process can acquire a set from the pool and, when it's done with it, return the set to the pool.
Create your value set valueset.dat:
[Set1]
USERNAME=user1
PASSWORD=password1
[Set2]
USERNAME=user2
PASSWORD=password2
Create suite1.robot and suite2.robot. I've created two suites that are exactly the same; I just wanted to try running two suites in parallel.
*** Settings ***
Library    pabot.PabotLib

*** Test Cases ***
Foobar
    ${valuesetname}=    Acquire Value Set
    Log    ${valuesetname}
    ${username}=    Get Value From Set    username
    Log    ${username}
    # Release Value Set
Then run the command pabot --pabotlib --resourcefile valueset.dat tests. If you check the HTML report, you'll see that one suite used Set1 and the other used Set2.
Hope this helps.
Cheers!
Another way is to use multiple argument files: one containing the first value for ${var}, the other containing the second.
This will execute the same test suite for both argument files.
pabot --argumentfile1 varone.args --argumentfile2 vartwo.args file.robot
=>
file.robot executed with varone.args
file.robot executed with vartwo.args
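A minimal sketch of what the two argument files might contain (the values are placeholders):

# varone.args
--variable var:one

# vartwo.args
--variable var:two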
In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies.
I'm not sure how to achieve this. I need an expert's advice to move ahead.
Until a SKIP status is implemented, you can use exitonfailure to stop further execution after a critical test fails, and then modify output.xml (and the resulting report and log) to show those tests as "NOT_RUN" (gray) rather than "FAILED" (red).
Here's an example (tested on Robot Framework 3.1.1 and Python 3.6):
First create a new class that extends the abstract class ResultVisitor:
from robot.api import ResultVisitor

class ResultSkippedAfterCritical(ResultVisitor):
    def visit_suite(self, suite):
        suite.set_criticality(critical_tags='Critical')
        for test in suite.tests:
            if test.status == 'FAIL' and "Critical failure occurred" in test.message:
                test.status = 'NOT_RUN'
                test.message = 'Skipping test execution after critical failure.'
Assuming you've already created the suite (for example with TestSuiteBuilder()), run it without creating report.html and log.html:
from robot.api import logger

outputDir = suite.name.replace(" ", "_")
outputFile = "output.xml"
logger.info(f"Running Test Suite: {suite.name}", also_console=True)
result = suite.run(output=outputFile, outputdir=outputDir,
                   report=None, log=None, critical='Critical', exitonfailure=True)
Notice that I've used "Critical" as the identifying tag for the critical tests, together with the exitonfailure option.
Then, revisit the output.xml, and create report.html and log.html from it:
import os
from robot.api import ExecutionResult
from robot.reporting import ResultWriter

revisitOutputFile = os.path.join(outputDir, outputFile)
logger.info(f"Checking skipped tests in {revisitOutputFile} due to critical failures", also_console=True)
result = ExecutionResult(revisitOutputFile)
result.visit(ResultSkippedAfterCritical())
result.save(revisitOutputFile)

reportFile = 'report.html'
logFile = 'log.html'
logger.info(f"Generating {reportFile} and {logFile}", also_console=True)
writer = ResultWriter(result)
writer.write_results(outputdir=outputDir, report=reportFile, log=logFile)
It should display all the tests after the critical failure with a grayed-out status of "NOT_RUN".
Out of the box there is nothing you can do; robot only supports two values for the test status: pass and fail. You can mark a test as non-critical so it won't break the build, but it will still show up in the logs and reports as having been run.
The robot core team has said they will not support this feature. See issue 1732 for more information.
Even though robot doesn't support the notion of skipped tests, you have the option to write a script that scans output.xml and removes tests that you somehow marked as skipped (perhaps by adding a tag to the test). You will also have to adjust the failed-test counts in the XML. Once you've modified the output.xml file, you can use rebot to regenerate the log and report files.
If you only need the change to be made in your log/report files, you should take a look at implementing a SuiteVisitor for the --prerebotmodifier option. As stated by Bryan Oakley, this might skew your pass/fail counts if you don't keep that in mind.
Currently it doesn't seem to be possible to alter the test status before output.xml is created, but there are plans to implement that in RF 3.0, and there is a discussion about a skip status.
Another, more complex, solution would be to create your own output file by implementing a listener for the --listener option, producing an output file that fits your needs (possibly alongside the original output.xml).
There is also the possibility of setting tags during test execution, but I'm not familiar with that yet, so I can't really say much about it at the moment. That might be another way to account for those dependency failures, as there are options to ignore certain tags during log/report generation.
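For instance, a minimal sketch using the BuiltIn keyword Set Tags together with the automatic ${PREV_TEST_STATUS} variable (the tag name and the dependency condition here are assumptions):

*** Test Cases ***
Dependent Test
    # Tag this test for later filtering if the test it depends on failed.
    Run Keyword If    '${PREV_TEST_STATUS}' == 'FAIL'    Set Tags    not-executed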
I solved it this way:
Run Keyword If    ${blabla}==${True}    do-this-task    ELSE    Log To Console    ${PREV_TEST_STATUS}${yellow}| NRUN |
The test is not executed and is marked as NRUN.
Actually, you can set tags to run whatever tests you like (for sanity testing, regression testing...).
Just go to your test script configuration and set the tags.
Whenever you want to run, go to the Run tab and tick the checkbox Only run tests with these tags / Skip tests with these tags.
Then click the Start button :) Robot Framework will select and run any test that matches.
Sorry, I don't have enough reputation to post images :(
Part of my assignment is to create tests for each function. This one's kind of long, but I am confused. I put a link below this function so you can see what the rest looks like, since the full code is extremely long.
def load_profiles(profiles_file, person_to_friends, person_to_networks):
    '''(file, dict of {str: list of strs}, dict of {str: list of strs}) -> NoneType
    Update the person_to_friends and person_to_networks dictionaries to
    include the data in the open file.'''
    # update the person_to_friends dict
    update_p_to_f(profiles_file, person_to_friends)
    # update the person_to_networks dict
    update_p_to_n(profiles_file, person_to_networks)
Here's the whole code: http://shrib.com/8EF4E8Z3. I tested it through the main block and it works.
This is the text file (profiles_file) we were provided and are converting:
http://shrib.com/zI61fmNP
How do I run test cases for this through nose, and what kinds of test outcomes are there? Or am I not being specific enough?
import nose
import a3_functions

def test_load_profiles_

if __name__ == '__main__':
    nose.runmodule()
I got that far, but then I didn't know what I could test for the function.
Let's assume the code you wrote so far is in a module called "mycode".
Write a new module called testmycode (i.e. create a Python file called testmycode.py).
In there, import the module you want to test (mycode)
Write a function called testupdate().
In that function, first write a text file (with file.write) that you expect to be valid. Then let update_p_to_f process it, and verify that it did what you expect, using assert. This is a test for reading a text file.
Then you can write a second function called testupdate_write(), where you let your code write to a file, then verify that what it wrote is correct.
To run the tests, use (on the command line):
nosetests -sx testmycode.py
This will load testmycode and run every function it finds there whose name starts with test.
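A minimal sketch of what testmycode.py might look like (the file contents and the expected dictionary are placeholders, since the real profiles format is only in the linked files):

import mycode  # assumption: the module containing update_p_to_f

def testupdate():
    # Write a small profiles file whose expected result you know.
    with open('small_profiles.txt', 'w') as f:
        f.write('...')  # placeholder: a small, valid profiles file

    person_to_friends = {}
    with open('small_profiles.txt') as f:
        mycode.update_p_to_f(f, person_to_friends)

    # Placeholder: the dictionary you expect for the file written above.
    assert person_to_friends == {'...': ['...']}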
You probably want to test both that the overall output of your program is correct and that the individual parts of your program are correct.
@j13r has already covered how to test the overall correctness of your program for a full run.
You mention that you have four helper functions. You can write tests for these separately.
Testing smaller pieces of your code is helpful because you can test each piece in more numerous and more specific ways than if you only test the whole thing.
The unittest module is a framework for performing tests.
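For one of the helper functions, a bare-bones unittest sketch might look like this (the module name is taken from the question's snippet; the file and expected values are placeholders, as above):

import unittest
import a3_functions

class TestHelpers(unittest.TestCase):
    def test_update_p_to_f_small_file(self):
        person_to_friends = {}
        with open('small_profiles.txt') as f:
            a3_functions.update_p_to_f(f, person_to_friends)
        # Placeholder expectation for the small file prepared beforehand.
        self.assertEqual(person_to_friends, {'...': ['...']})

if __name__ == '__main__':
    unittest.main()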