Appium/unittest/Python - preventing the app from restarting on each test

OK, please look at my example code.
My ConnectBase class:
import unittest
from appium import webdriver  # Appium Python client

class ConnectBase(unittest.TestCase):
    def setUp(self):
        desired_caps = {}
        desired_caps['platformName'] = 'Android'
        desired_caps['deviceName'] = '4cfe4b59'
        desired_caps['platformVersion'] = '5.0'
        desired_caps['appPackage'] = 'com.xyz.bookshelf'
        desired_caps['appActivity'] = 'com.xyz.bookshelf.MainActivity'
        desired_caps['noReset'] = False
        self.driver_android = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
        self.driver_android.implicitly_wait(30)
And my main file with tests:
import unittest
from connect import ConnectBase
from main_screen import MainScreenCheck

class MainTests(ConnectBase, MainScreenCheck):
    with open('report.txt', 'w') as f:
        f.write("----------Bookshelf----------\n")

    def test_bookshelf_tutorial(self):
        self.addToFile("Test Tutorial")
        self.driver_android.orientation = 'LANDSCAPE'
        super(MainTests, self).logout_screen_check()

    def test_bookshelf_2(self):
        self.addToFile("Test 2")
        super(MainTests, self).login_screen_check()

    def test_bookshelf_3(self):
        self.addToFile("Test 3")
        super(MainTests, self).loading_screen_check()

    def test_bookshelf_4(self):
        self.addToFile("Test 4")
        super(MainTests, self).list_check()

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(MainTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
I run the script -> it connects.
It starts "test_bookshelf_tutorial()".
The test passes and I would like to continue with "test_bookshelf_2()", but the app restarts... and I have to go through the tutorial screen again...
The problem is that the application restarts for every unittest method ("def test_xyz(self)"), so I can't use the unittest feature that shows passed tests in a report, because in each test I must repeat everything I did in the tests before.
I created my own way to make a test report -> I append each test result to a txt file... but I wonder whether it is possible to turn off this app restarting and use the normal unittest reports?
Or maybe there is another good way to produce reports for automated tests?

Try to impose an order on your test cases; sometimes tests depend on each other.
Open the application in the first step and close it only in the last step:
class MainTests(ConnectBase, TestCase):
    def step1(self):
        # open the application
        pass

    def step2(self):
        # ... further steps ...
        pass

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
I suggest that you use a test framework like TestNG to define test priorities and manage the test order, but make sure that the first test is always executed so it opens the application ;)
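Beyond reordering, unittest itself offers a way to avoid the per-test restart: create the Appium session once per class with setUpClass/tearDownClass instead of setUp, so all test methods share one running app. A minimal sketch, reusing the capabilities from the question (setting noReset to True is an extra assumption here; it tells Appium to also keep app data between sessions):

import unittest
from appium import webdriver  # Appium Python client

class ConnectBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # One session for the whole class: the app is launched once,
        # not restarted before every test method.
        desired_caps = {
            'platformName': 'Android',
            'deviceName': '4cfe4b59',
            'platformVersion': '5.0',
            'appPackage': 'com.xyz.bookshelf',
            'appActivity': 'com.xyz.bookshelf.MainActivity',
            'noReset': True,  # assumption: keep app data between sessions
        }
        cls.driver_android = webdriver.Remote(
            'http://localhost:4723/wd/hub', desired_caps)
        cls.driver_android.implicitly_wait(30)

    @classmethod
    def tearDownClass(cls):
        cls.driver_android.quit()

Keep in mind that unittest runs test methods in alphabetical order of their names, so tests that depend on application state need names that sort accordingly. With the restarts gone, the normal unittest report becomes usable again; TextTestRunner accepts any writable stream, so even the txt report can come from unittest itself:

with open('report.txt', 'w') as f:
    suite = unittest.TestLoader().loadTestsFromTestCase(MainTests)
    unittest.TextTestRunner(stream=f, verbosity=2).run(suite)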


Python unittest - how to choose the URL on which the tests are executed?

I am somewhat of a beginner in Python; I am currently writing a suite of test cases with Selenium WebDriver using unittest. I have also found a lot of useful answers here, but it's time to ask my first question. I have struggled a lot with this and cannot find a proper answer, so any help is greatly appreciated:
In short, I have a suite of multiple test cases, and in each case the first step is always ".get('Some URL')". I have written these test cases for a single environment, but I would like to be able to select the URL on which all the tests will be executed. In the example below I call the "access_url" method with a specific environment, but I need to do this for all of my scenarios at once. Is it possible to do this from where I execute the .py file (e.g. "python example.py")? Or to pass it in the .run() method when I select which suite to run?
import HTMLTestRunner
from selenium import webdriver
import unittest
This is a custom class used to create the 'access_url' method
class MyClass(object):
    def __init__(self, driver):
        self.driver = driver

    def access_url(self, URL):
        if URL == 'environment 1':
            self.driver.get('https://www.google.com/')
        elif URL == 'environment 2':
            self.driver.get('https://example.com/')
In the classes I use to write test cases, the first step is always to access the URL:
class TestScenario01(unittest.TestCase):
    def setUp(self):
        [...]
    def test_01_access(self):
        MyClass(self.driver).access_url(URL='environment 2')
    def test_02(self):
        [...]
    def test_03(self):
        [...]
In order to run the tests, I place them all in a suite and call .run() on it:
tc_scenario01 = unittest.TestLoader().loadTestsFromTestCase(TestScenario01)
test_suite = unittest.TestSuite([tc_scenario01])
HTMLReporterCustom.HTMLTestRunner().run(test_suite)
Finally, in order to execute the script, I type the following line in CMD: python example_file.py
As I mentioned above, all I want is to be able to somehow pass the URL one time to all the test cases that call the "access_url()" method. Thanks!
You can maintain the environment properties in a separate config file.
config.py
DEFAULT_ENVIRONMENT = 'environment1'

URL = {
    'environment1': 'https://www.google.com/',
    'environment2': 'https://example.com/',
}
Your class:
from package import config

class MyClass(object):
    def __init__(self, driver):
        self.driver = driver

    def access_url(self):
        self.driver.get(config.URL[config.DEFAULT_ENVIRONMENT])
Then the test class stays as expected:
class TestScenario01(unittest.TestCase):
    def setUp(self):
        [...]
    def test_01_access(self):
        MyClass(self.driver).access_url()
    def test_02(self):
        [...]
    def test_03(self):
        [...]
While running the tests you can change it:
main.py
from package import config

config.DEFAULT_ENVIRONMENT = 'environment2'

tc_scenario01 = unittest.TestLoader().loadTestsFromTestCase(TestScenario01)
test_suite = unittest.TestSuite([tc_scenario01])
HTMLReporterCustom.HTMLTestRunner().run(test_suite)
You can also pass the environment name while running python main.py.
main.py
import sys
from package import config

if __name__ == '__main__':
    config.DEFAULT_ENVIRONMENT = sys.argv[1] if len(sys.argv) > 1 else 'environment1'
    tc_scenario01 = unittest.TestLoader().loadTestsFromTestCase(TestScenario01)
    test_suite = unittest.TestSuite([tc_scenario01])
    HTMLReporterCustom.HTMLTestRunner().run(test_suite)
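For example, to run the whole suite against the second environment (module names as in the snippets above):

python main.py environment2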

Create screenshot only on test failure

Currently I have a function that creates a screenshot, and I call it here:
def tearDown(self):
    self.save_screenshot()
    self.driver.quit()
There is also a folder being created which is used to store the screenshots.
I don't want this to happen when the test passes.
What do I have to add in order for this not to happen?
Thanks for all the help
If your test failed, sys.exc_info() will hold an exception, so you can use it as the pass/fail status of your test:

if sys.exc_info()[0]:
    pass  # test failed
else:
    pass  # test passed

(Note: this relies on Python 2's sys.exc_info() semantics, where the last handled exception stays visible; on Python 3 the exception state is cleared before tearDown runs, so this check will generally not fire there.)
And if you want to take a screenshot on failure:
import unittest
import sys
from selenium import webdriver

class UrlTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()

    def test_correct_url(self):
        self.driver.get('https://google.com')
        self.assertTrue('something.com' in self.driver.current_url)

    def tearDown(self):
        if sys.exc_info()[0]:
            self.driver.get_screenshot_as_file('screenshot.png')
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
Here is one way to capture a screenshot only on failure:
def setUp(self):
    # Assume test will fail
    self.test_failed = True

def tearDown(self):
    if self.test_failed:
        self.save_screenshot()

def test_something(self):
    # do some tests
    # Last line of the test:
    self.test_failed = False
The rationale behind this approach is that when the test reaches its last line, we know that it passed (i.e. all the self.assert* calls succeeded). At that point we reset the test_failed member, which was set to True in setUp. In tearDown we can now tell whether the test passed or failed and take a screenshot when appropriate. A runnable version is sketched below.
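For reference, here is a complete, runnable version of that pattern; the save_screenshot() call is replaced by a print so the sketch runs standalone:

import unittest

class FlagExample(unittest.TestCase):
    def setUp(self):
        # Assume the test will fail until its last line is reached.
        self.test_failed = True

    def tearDown(self):
        if self.test_failed:
            print('would save a screenshot here')

    def test_something(self):
        self.assertEqual(2 + 2, 4)
        # Last line of the test: only runs if every assert above passed.
        self.test_failed = False

if __name__ == '__main__':
    unittest.main()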
Alternatively, in your initialisation method set self.NoFailuresSnapped = 0, and before calling your screenshot function (or inside it) check whether the current number of failures recorded by your test environment is greater than self.NoFailuresSnapped; of course, update the counter again before returning.
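The vague part of that suggestion is how to ask the test environment for the failure status from inside tearDown. A minimal sketch of one way to do it on CPython 3.4-3.10 (this leans on the private self._outcome attribute, which is not a public API and has changed shape between versions, so treat it as an assumption):

import unittest

class ScreenshotOnFailure(unittest.TestCase):
    def tearDown(self):
        # self._outcome is a CPython implementation detail (3.4-3.10):
        # it accumulates (test, exc_info) pairs for the current test,
        # with exc_info left as None for the parts that succeeded.
        outcome = getattr(self, '_outcome', None)
        failures = [exc for _, exc in getattr(outcome, 'errors', []) if exc]
        if failures:
            self.save_screenshot()  # the question's own helper

    def save_screenshot(self):
        print('would save a screenshot here')  # placeholder

    def test_example(self):
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()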

How to benchmark unit tests in Python without adding any code

I have a Python project with a bunch of tests that have already been implemented, and I'd like to begin benchmarking them so I can compare performance of the code, servers, etc over time. Locating the files in a manner similar to Nose was no problem because I have "test" in the names of all my test files anyway. However, I'm running into some trouble in attempting to dynamically execute these tests.
As of right now, I'm able to run a script that takes a directory path as an argument and returns a list of filepaths like this:
import os

def getTestFiles(directory):
    fileList = []
    print "Searching for 'test' in " + directory
    if not os.path.isdir(os.path.dirname(directory)):
        # throw error
        raise InputError(directory, "Not a valid directory")
    else:
        for root, dirs, files in os.walk(directory):
            #print files
            for f in files:
                if "test" in f and f.endswith(".py"):
                    fileList.append(os.path.join(root, f))
    return fileList
# returns a list like this:
# [ 'C:/Users/myName/Desktop/example1_test.py',
# 'C:/Users/myName/Desktop/example2_test.py',
# 'C:/Users/myName/Desktop/folder1/example3_test.py',
# 'C:/Users/myName/Desktop/folder2/example4_test.py'... ]
The issue is that these files can have different syntax, which I'm trying to figure out how to handle. For example:
TestExampleOne:
import dummy1
import dummy2
import dummy3

class TestExampleOne(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # set up
    def test_one(self):
        # test stuff
    def test_two(self):
        # test stuff
    def test_three(self):
        # test stuff
    # etc...
TestExampleTwo:
import dummy1
import dummy2
import dummy3

def setup(self):
    try:
        # config stuff
    except Exception as e:
        logger.exception(e)

def test_one():
    # test stuff
def test_two():
    # test stuff
def test_three():
    # test stuff
# etc...
TestExampleThree:
import dummy1
import dummy2
import dummy3

def setup(self):
    try:
        # config stuff
    except Exception as e:
        logger.exception(e)

class TestExampleTwo(unittest.TestCase):
    def test_one(self):
        # test stuff
    def test_two(self):
        # test stuff
    # etc...

class TestExampleThree(unittest.TestCase):
    def test_one(self):
        # test stuff
    def test_two(self):
        # test stuff
    # etc...
# etc...
I would really like to be able to write one module that searches a directory for every file containing "test" in its name, and then executes every unit test in each file, providing execution time for each test. I think something like NodeVisitor is on the right track, but I'm not sure. Even an idea of where to start would be greatly appreciated. Thanks
Using the nose test runner would help to discover the tests and the setup/teardown functions and methods.
The nose-timer plugin would help with the benchmarking:
"A timer plugin for nosetests that answers the question: how much time does every test take?"
Demo:
imagine you have a package named test_nose with the following scripts inside:
test1.py:
import time
import unittest

class TestExampleOne(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.value = 1

    def test_one(self):
        time.sleep(1)
        self.assertEqual(1, self.value)
test2.py:
import time

value = None

def setup():
    global value
    value = 1

def test_one():
    time.sleep(2)
    assert value == 1
test3.py:
import time
import unittest

value = None

def setup():
    global value
    value = 1

class TestExampleTwo(unittest.TestCase):
    def test_one(self):
        time.sleep(3)
        self.assertEqual(1, value)

class TestExampleThree(unittest.TestCase):
    def test_one(self):
        time.sleep(4)
        self.assertEqual(1, value)
install nose test runner:
pip install nose
install nose-timer plugin:
pip install nose-timer
run the tests:
$ nosetests test_nose --with-timer
....
test_nose.test3.TestExampleThree.test_one: 4.0003s
test_nose.test3.TestExampleTwo.test_one: 3.0010s
test_nose.test2.test_one: 2.0011s
test_nose.test1.TestExampleOne.test_one: 1.0005s
----------------------------------------------------------------------
Ran 4 tests in 10.006s
OK
The output is conveniently highlighted in color; the coloring can be controlled by the --timer-ok and --timer-warning arguments.
Note that the time.sleep(n) calls were added to slow the tests down artificially so that the impact is clearly visible. Also note that the value variable is set to 1 in the "setup" functions and methods and then asserted to be 1 in the test functions and methods - this way you can see that the setup functions actually ran.
UPD (running nose with nose-timer from a script):
from pprint import pprint
import nose
from nosetimer import plugin
plugin = plugin.TimerPlugin()
plugin.enabled = True
plugin.timer_ok = 1000
plugin.timer_warning = 2000
plugin.timer_no_color = False
nose.run(plugins=[plugin])
result = plugin._timed_tests
pprint(result)
Save it into the test.py script and pass a target directory to it:
python test.py /home/example/dir/tests --with-timer
The result variable would contain:
{'test_nose.test1.TestExampleOne.test_one': 1.0009748935699463,
'test_nose.test2.test_one': 2.0003929138183594,
'test_nose.test3.TestExampleThree.test_one': 4.000233173370361,
'test_nose.test3.TestExampleTwo.test_one': 3.001115083694458}
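A side note beyond this answer: nose is no longer maintained, and pytest can produce a similar per-test timing summary without any plugin; its --durations flag reports the slowest tests (0 means all of them):

pip install pytest
pytest /home/example/dir/tests --durations=0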

How to run python unittests repeatedly from a script and collect results

I cannot figure out how to run single unit tests from within a python script and collect the results.
Scenario: I have a battery of tests that check various methods producing various statistical distributions of different objects. The tests sometimes fail, as they should given that I am basically checking for particular kinds of randomness. I would like to run the tests repeatedly from a script or even from the interpreter and collect results for further analysis.
Suppose I have a module myTest.py with:
import unittest

class myTest(unittest.TestCase):
    def setup(self):
        # ...building objects, etc....
    def testTest1(self):
        # ..........
    def testTest2(self):
        # ..........
Basically I need to:
run the setup method
run testTest1 (say), 100 times
collect the failures
return the failures
The closest I got to was (using code from a similar question):
import unittest
from unittest import TextTestRunner, TestSuite

runner = TextTestRunner(verbosity=2)
tests = ['testTest1']
suite = unittest.TestSuite(map(myTest, tests))
runner.run(suite)
But this does not work, because:
runner.run(suite) does not run the setup method
and
I cannot catch the exception it throws when testTest1 fails
You simply need to add the test that you want to run multiple times to the suite - once per repetition.
Here is a complete code example. You can also see this code running in an interactive Python console to prove that it does actually work.
import unittest
import random

class NullWriter(object):
    def write(*_, **__):
        pass
    def flush(*_, **__):
        pass

SETUP_COUNTER = 0

class MyTestCase(unittest.TestCase):
    def setUp(self):
        global SETUP_COUNTER
        SETUP_COUNTER += 1

    def test_one(self):
        self.assertTrue(random.random() > 0.3)

    def test_two(self):
        # We just want to make sure this isn't run
        self.assertTrue(False, "This should not have been run")

def suite():
    tests = []
    for _ in range(100):
        tests.append('test_one')
    return unittest.TestSuite(map(MyTestCase, tests))

results = unittest.TextTestRunner(stream=NullWriter()).run(suite())

print dir(results)
print 'setUp was run', SETUP_COUNTER, 'times'
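To actually collect the failures, note that the TestResult object returned by run() exposes them directly: results.failures is a documented list of (test, traceback string) pairs, and results.testsRun counts the runs. Continuing the snippet above:

for test, traceback_text in results.failures:
    print test.id()
    print traceback_text
print 'failed', len(results.failures), 'of', results.testsRun, 'runs'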

Trying to implement python TestSuite

I have two test cases (two different files) that I want to run together in a test suite. I can get the tests to run just by running Python "normally", but when I select to run a Python unit test it says 0 tests were run. Right now I'm just trying to get at least one test to run correctly.
import usertest
import configtest # first test
import unittest # second test
testSuite = unittest.TestSuite()
testResult = unittest.TestResult()
confTest = configtest.ConfigTestCase()
testSuite.addTest(configtest.suite())
test = testSuite.run(testResult)
print testResult.testsRun # prints 1 if run "normally"
Here's an example of my test case set up
class ConfigTestCase(unittest.TestCase):
    def setUp(self):
        ##set up code
    def runTest(self):
        #runs test

def suite():
    """
    Gather all the tests from this module in a test suite.
    """
    test_suite = unittest.TestSuite()
    test_suite.addTest(unittest.makeSuite(ConfigTestCase))
    return test_suite

if __name__ == "__main__":
    #So you can run tests from this module individually.
    unittest.main()
What do I have to do to get this work correctly?
You want to use a test suite, so you need not call unittest.main().
Use of a test suite should look like this:
#import usertest
#import configtest # first test
import unittest # second test

class ConfigTestCase(unittest.TestCase):
    def setUp(self):
        print 'stp'
        ##set up code
    def runTest(self):
        #runs test
        print 'stp'

def suite():
    """
    Gather all the tests from this module in a test suite.
    """
    test_suite = unittest.TestSuite()
    test_suite.addTest(unittest.makeSuite(ConfigTestCase))
    return test_suite

mySuit = suite()
runner = unittest.TextTestRunner()
runner.run(mySuit)
All of the code to create a loader and suite is unnecessary. You should write your tests so that they are runnable via test discovery using your favorite test runner. That just means naming your methods in a standard way, putting them in an importable place (or passing a folder containing them to the runner), and inheriting from unittest.TestCase. After you've done that, you can use python -m unittest discover at its simplest, or a nicer third-party runner, to discover and then run your tests.
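For example (the directory name and filename pattern here are illustrative, not taken from the question):

# discover and run all tests under ./tests in files matching *test*.py
python -m unittest discover -s tests -p "*test*.py" -v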
If you are trying to manually collect TestCases, this is useful: unittest.loader.findTestCases():
# Given a module, M, with tests:
mySuite = unittest.loader.findTestCases(M)
runner = unittest.TextTestRunner()
runner.run(mySuite)
I am assuming you are referring to running the Python unit test against the module that consolidates the two tests. It will work if you create a test case for that module, i.e. subclassing unittest.TestCase and having a simple test method whose name starts with 'test', e.g.:
class testall(unittest.TestCase):
    def test_all(self):
        testSuite = unittest.TestSuite()
        testResult = unittest.TestResult()
        confTest = configtest.ConfigTestCase()
        testSuite.addTest(configtest.suite())
        test = testSuite.run(testResult)
        print testResult.testsRun # prints 1 if run "normally"

if __name__ == "__main__":
    unittest.main()
