Generate pytest-html report per module in a session

I understand that the HTML report is written in the pytest_sessionfinish hookimpl (src), so if I run multiple modules in one session, it creates only one report.
However, I would like one report for each module run.
With my conftest.py containing:
@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    script_name = os.path.splitext(os.path.basename(lib_programname.get_path_executed_script()))[0]
    if not os.path.exists('reports'):
        os.makedirs('reports')
    config.option.htmlpath = 'reports/' + 'report_' + script_name + ".html"
    config.option.self_contained_html = True
If I run pytest .\test_One.py .\test_Two.py, it generates only one HTML file, report_test_One.html, containing both modules' results.
Is there something I could add to my conftest.py so that it creates one report per module run instead of one per session, i.e. report_test_One.html and report_test_Two.html?
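Since pytest-html writes its report once per session, one workaround (not from the question; the wrapper name and report layout are my own) is to drive one pytest session per module from a small wrapper script, so pytest-html emits one report each:
# run_reports.py -- a minimal sketch
import os
import sys
import pytest

def main(modules):
    os.makedirs('reports', exist_ok=True)
    for module in modules:
        name = os.path.splitext(os.path.basename(module))[0]
        # each pytest.main() call is a separate session, so pytest-html
        # writes a separate self-contained report for each module
        pytest.main([
            module,
            '--html=reports/report_%s.html' % name,
            '--self-contained-html',
        ])

if __name__ == '__main__':
    main(sys.argv[1:])
Invoked as python run_reports.py test_One.py test_Two.py, this should produce report_test_One.html and report_test_Two.html. If repeated in-process pytest.main() calls cause plugin-state issues, spawning one subprocess per module is a safer variant of the same idea.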


Start Flask app with specific settings, run queries and store responses for tests

I have implemented unit tests for my Flask app. I use pytest. My goal is to make sure that a number of pre-defined queries always return the same output (some json).
In order to do that, I use a fixture to start an app with test settings:
# Session scope, otherwise the server is reloaded every time
@pytest.fixture(scope="session")
def app():
    os.environ["FLASK_ENV"] = "development"
    os.environ["TEST_SETTINGS"] = os.path.join(
        ds.WORK_DIR, "..", "tests", "settings.py"
    )
    app = create_app()
    # http://flask.pocoo.org/docs/1.0/api/#flask.Flask.test_client
    app.testing = True
    return app
And then I run a test function, which is not really a test function, with pytest:
# Freeze time for consistent output
@pytest.mark.usefixtures("live_server")
@pytest.mark.freeze_time("2018-01-01")
class TestLiveServer:

    @pytest.mark.skip(reason="should only be used to update the test data")
    def test_export(self):
        """Not a test function.
        Used to export question outputs to JSON files to freeze the expected output.
        """
        for q in TESTED_QUESTIONS:
            r = requests.post(
                url_for("query_direct", _external=True), json={"query": q}
            )
            print(r.text)
            filename = (
                "".join(filter(lambda c: str.isalnum(c) or c == " ", q))
                .lower()
                .replace(" ", "_")
                + ".json"
            )
            export_path = os.path.join("tests", "fake_responses", filename)
            data = {"query": q, "response": r.json()}
            with open(export_path, "w") as outfile:
                json.dump(data, outfile, indent=4, sort_keys=True)
                outfile.write("\n")
To generate my frozen outputs, I uncomment the pytest marker and run this particular test. As you can see, it's not very elegant and is error-prone. Sometimes I forget to re-enable the marker, and if I then run all the tests at once, the fake outputs are regenerated before the unit tests run against them. When that happens my tests can't fail, and I don't catch the potential mistakes (which defeats the point of these tests).
Is there a way to run this particular function alone, maybe with some pytest flag or something?
This should probably be a standalone thing and not part of the pytest test suite (since it's not a test), but you can get around it by exploiting pytest's skipping facilities.
# redefine test_export to feature a conditional skip
@pytest.mark.skipif(not os.getenv('FREEZE_OUTPUTS'), reason='')
def test_export(self):
    ...
Here we skip this test, unless the FREEZE_OUTPUTS environment variable is set.
You can then run this test by calling the following from the command line (define the environment variable for this invocation only):
$ FREEZE_OUTPUTS=1 py.test <your_test_file>::TestLiveServer::test_export
This runs only that test; in all other cases it is skipped.
You can take this a step further and declare a session-level fixture with autouse=True so it is always included, then check inside the fixture whether FREEZE_OUTPUTS is defined and, if so, run the logic in question. Something like:
@pytest.fixture(scope='session', autouse=True)
def frozen_outputs():
    if os.getenv('FREEZE_OUTPUTS'):
        # logic for generating the outputs goes here
        pass
    return  # do nothing otherwise
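Another option, sketched below and not part of the original answer, is a custom command-line flag instead of an environment variable. The option name --freeze-outputs and the fixture are illustrative, but pytest_addoption, getoption, and pytest.skip are standard pytest APIs:
# conftest.py -- a sketch
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--freeze-outputs",
        action="store_true",
        default=False,
        help="regenerate the frozen JSON outputs instead of skipping test_export",
    )

@pytest.fixture
def freeze_outputs(request):
    # any test requesting this fixture is skipped unless the flag was passed
    if not request.config.getoption("--freeze-outputs"):
        pytest.skip("run with --freeze-outputs to regenerate the frozen outputs")
test_export would then request the freeze_outputs fixture and be run with pytest <your_test_file>::TestLiveServer::test_export --freeze-outputs.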

dynamically creating output folder based on timestamp

I used to use a runner.py with my pytest framework in order to work around the bug with combining markers and string params, e.g.:
-k 'foo' -m 'bar'
I also used the runner to capture the test run's starting timestamp and create an output folder output/<timestamp>/ to which I write my logs and the HTML report, save any screenshots, etc.
runner.py excerpt:
timestamp = time.strftime('%y%m%d-%H%M%S')

# the following are used by conftest.py
output_path = utils.generate_output_path(timestamp)
folder = utils.create_output_folder(output_path)

def main():
    args = sys.argv[1:]
    args.append('-v')
    args.append('--timestamp=%s' % timestamp)
    args.append('--output_target=%s' % folder)
    args.append('--html=%s/results.html' % folder)
    pytest.main(args, plugins=plgns)

if __name__ == "__main__":
    main()
I want to lose the runner.py and use straight CLI args and fixtures/hooks, without manually passing in timestamp, output_target, or the HTML report path, but I have so far been unable to find a way to change that config, for example by modifying config.args.
How can I dynamically set timestamp, output_target, and the HTML path so that pytest uses them during initialization?
Here's what I did:
First, in my pytest.ini I added a default command-line option for the HTML report, so that there is a configuration attribute I can modify:
addopts = --html=output/report.html
Then, in my conftest.py, I added this pytest_configure() hook:
def pytest_configure(config):
    """
    Set up the output folder, logfile, and html report file;
    this has to be done right after the command line options
    are parsed, because we need to rewrite the pytest-html path.

    :param config: pytest Config object
    :return: None
    """
    # set the timestamp for the start of this test run
    timestamp = time.strftime('%y%m%d-%H%M%S')

    # create the output folder for this test run
    output_path = utils.generate_output_path(timestamp)
    folder = utils.create_output_folder(output_path)

    # start logging
    filename = '%s/log.txt' % output_path
    logging.basicConfig(filename=filename,
                        level=logging.INFO,
                        format='%(asctime)s %(name)s.py::%(funcName)s() [%(levelname)s] %(message)s')

    initial_report_path = Path(config.option.htmlpath)
    report_path_parts = list(initial_report_path.parts)

    logger.info('Output folder created at "%s".' % folder)
    logger.info('Logger started and writing to "%s".' % filename)

    # insert the timestamp
    output_index = report_path_parts.index('output')
    report_path_parts.insert(output_index + 1, timestamp)

    # deal with doubled slashes
    new_report_path = Path('/'.join(report_path_parts).replace('//', '/'))

    # update the pytest-html path
    config.option.htmlpath = new_report_path
    logger.info('HTML test report will be created at "%s".' % config.option.htmlpath)
This is logged as:
2018-03-03 14:07:39,632 welkin.tests.conftest.py::pytest_configure() [INFO] HTML test report will be created at "output/180303-140739/report.html".
The html report is written to the appropriate output/<timestamp>/ folder. That's the desired result.
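The answer leans on two helpers, utils.generate_output_path() and utils.create_output_folder(), that it never shows. A hypothetical implementation consistent with the logged output (the real ones may differ):
# utils.py -- hypothetical sketch of the helpers used above
import os

def generate_output_path(timestamp):
    # put every run under output/<timestamp>/
    return os.path.join('output', timestamp)

def create_output_folder(path):
    # create the folder (and any parents) if needed; return its path
    if not os.path.exists(path):
        os.makedirs(path)
    return path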

passing a URL to test against with nose testconfig

Is there a way to configure nose-testconfig so that I can pass a URL on the command line, i.e. nosetests --tc=systest_url test_suite.py?
I need to run my Selenium tests against dev and systest environments when performing builds on TeamCity. Our team decided to use Python for UI tests, and I'm more of a Java guy, so I'm trying to figure out how the plugin works. It looks like I can store the URL in YAML and pass the file via the --tc-file option, but that doesn't seem to work.
The code I inherited looks like this:
URL = config['test' : 'https://www.google.com', ]

class BaseTestCase(unittest.TestCase, Navigation):
    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        cls.driver.implicitly_wait(5)
        cls.driver.maximize_window()
        cls.driver.get(URL)
which obviously does not work.
There is a plugin for nose, nose-testconfig. You can put your settings in a config file and pass its filename to nose's --tc-file argument.
config.ini
[myapp_servers]
main_server = 10.1.1.1
secondary_server = 10.1.1.2
In your test file you can load the config.
test_app.py
from testconfig import config

def test_foo():
    main_server = config['myapp_servers']['main_server']
Then call nose with the argument:
nosetests -s --tc-file config.ini
As described in the docs, you can use other config file types like YAML, JSON, or even Python modules.
Using the nose-testconfig --tc option you can override a configuration value. Separate the key from the value using a colon. For example,
nosetests test.py --tc=url:dev.example.com
will make the value available in config['url'].
from testconfig import config

def test_url_is_dev():
    assert 'dev' in config['url']
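Putting the two together, a hedged sketch of how the inherited BaseTestCase could read the URL from nose-testconfig (the fallback URL is illustrative, and the Navigation mixin is omitted because it isn't shown in the question):
import unittest
from selenium import webdriver
from testconfig import config

class BaseTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # `--tc=url:https://dev.example.com` on the command line fills config['url']
        url = config.get('url', 'https://www.google.com')
        cls.driver = webdriver.Firefox()
        cls.driver.implicitly_wait(5)
        cls.driver.maximize_window()
        cls.driver.get(url)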

How do I make PyCharm show code coverage of programs that use multiple processes?

Let's say I create this simple module and call it MyModule.py:
import threading
import multiprocessing
import time
def workerThreaded():
print 'thread working...'
time.sleep(2)
print 'thread complete'
def workerProcessed():
print 'process working...'
time.sleep(2)
print 'process complete'
def main():
workerThread = threading.Thread(target=workerThreaded)
workerThread.start()
workerProcess = multiprocessing.Process(target=workerProcessed)
workerProcess.start()
workerThread.join()
workerProcess.join()
if __name__ == '__main__':
main()
And then I throw this together to unit test it:
import unittest
import MyModule

class MyModuleTester(unittest.TestCase):
    def testMyModule(self):
        MyModule.main()

unittest.main()
(I know this isn't a good unit test because it doesn't actually TEST it, it just runs it, but that's not relevant to my question)
If I run this unit test in PyCharm with code coverage, then it only shows the code inside the workerThreaded() and main() functions as being covered, even though it clearly covers the workerProcessed() function as well.
How do I get PyCharm to include code that was started in a new process in its code coverage? Also, how can I get it to include the if __name__ == '__main__': block as well?
I'm running PyCharm 2.7.3, as well as Python 2.7.3.
Coverage.py can measure code run in subprocesses, details are at http://nedbatchelder.com/code/coverage/subprocess.html
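In outline, the setup those docs describe looks roughly like the following (a sketch with illustrative paths): enable parallel data files in .coveragerc, point child processes at that file via COVERAGE_PROCESS_START, start coverage in each child (e.g. from sitecustomize.py), and combine the per-process data files afterwards.
# .coveragerc in the project root:
#     [run]
#     parallel = True
#
# sitecustomize.py importable by the child interpreter:
#     import coverage
#     coverage.process_startup()

export COVERAGE_PROCESS_START=$PWD/.coveragerc
coverage run -m unittest discover   # or run your test module directly
coverage combine                    # merge the per-process data files
coverage report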
I managed to make it work with subprocesses; I'm not sure whether this will work with threads or with Python 2.
Create a .coveragerc file in your project root:
[run]
concurrency=multiprocessing
Create a sitecustomize.py file in your project root:
import atexit
from glob import glob
import os
from functools import partial
from shutil import copyfile
from tempfile import mktemp

def combine_coverage(coverage_pattern, xml_pattern, old_coverage, old_xml):
    from coverage.cmdline import main

    # Find newly created coverage files
    coverage_files = [file for file in glob(coverage_pattern) if file not in old_coverage]
    xml_files = [file for file in glob(xml_pattern) if file not in old_xml]
    if not coverage_files:
        raise Exception("No coverage files generated!")
    if not xml_files:
        raise Exception("No coverage xml file generated!")

    # Combine all coverage files
    main(["combine", *coverage_files])
    # Convert them to xml
    main(["xml"])

    # Copy the combined xml file over the PyCharm-generated one
    copyfile('coverage.xml', xml_files[0])
    os.remove('coverage.xml')

def enable_coverage():
    import coverage

    # Enable subprocess monitoring by providing the rc file and enable coverage collecting
    os.environ['COVERAGE_PROCESS_START'] = os.path.join(os.path.dirname(__file__), '.coveragerc')
    coverage.process_startup()

    # Get current coverage files so we can process only newly created ones
    temp_root = os.path.dirname(mktemp())
    coverage_pattern = '%s/pycharm-coverage*.coverage*' % temp_root
    xml_pattern = '%s/pycharm-coverage*.xml' % temp_root
    old_coverage = glob(coverage_pattern)
    old_xml = glob(xml_pattern)

    # Register an atexit handler to collect coverage files when Python is shutting down
    atexit.register(partial(combine_coverage, coverage_pattern, xml_pattern, old_coverage, old_xml))

if os.getenv('PYCHARM_RUN_COVERAGE'):
    enable_coverage()
This basically detects whether the code is running under PyCharm's coverage runner and collects the newly generated coverage files. There are multiple files: one for the main process and one for each subprocess. So we need to combine them with "coverage combine", convert the result to XML with "coverage xml", and copy the resulting file over PyCharm's generated XML file.
Note that if you kill the child process in your tests, coverage.py will not write the data file.
Nothing else is required; just hit the "Run unittests with Coverage" button in PyCharm.
That's it.

How do I run all Python unit tests in a directory?

I have a directory that contains my Python unit tests. Each unit test module is of the form test_*.py. I am attempting to make a file called all_test.py that will, you guessed it, run all files in the aforementioned test form and return the result. I have tried two methods so far; both have failed. I will show the two methods, and I hope someone out there knows how to actually do this correctly.
For my first valiant attempt, I thought "If I just import all my testing modules in the file, and then call this unittest.main() doodad, it will work, right?" Well, turns out I was wrong.
import glob
import unittest

testSuite = unittest.TestSuite()
test_file_strings = glob.glob('test_*.py')
module_strings = [str[0:len(str)-3] for str in test_file_strings]

if __name__ == "__main__":
    unittest.main()
This did not work, the result I got was:
$ python all_test.py
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
For my second try, I thought, OK, maybe I will try to do this whole testing thing in a more "manual" fashion. So I attempted that below:
import glob
import unittest

testSuite = unittest.TestSuite()
test_file_strings = glob.glob('test_*.py')
module_strings = [str[0:len(str)-3] for str in test_file_strings]
[__import__(str) for str in module_strings]
suites = [unittest.TestLoader().loadTestsFromName(str) for str in module_strings]
[testSuite.addTest(suite) for suite in suites]
print testSuite

result = unittest.TestResult()
testSuite.run(result)
print result

# Ok, at this point I have a result
# How do I display it as the normal unit test command line output?
if __name__ == "__main__":
    unittest.main()
This also did not work, but it seems so close!
$ python all_test.py
<unittest.TestSuite tests=[<unittest.TestSuite tests=[<unittest.TestSuite tests=[<test_main.TestMain testMethod=test_respondes_to_get>]>]>]>
<unittest.TestResult run=1 errors=0 failures=0>
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I seem to have a suite of some sort, and I can execute the result. I am a little concerned about the fact that it says I have only run=1, seems like that should be run=2, but it is progress. But how do I pass and display the result to main? Or how do I basically get it working so I can just run this file, and in doing so, run all the unit tests in this directory?
With Python 2.7 and higher you don't have to write new code or use third-party tools to do this; recursive test execution via the command line is built-in. Put an __init__.py in your test directory and:
python -m unittest discover <test_directory>
# or
python -m unittest discover -s <directory> -p '*_test.py'
You can read more in the Python 2.7 or Python 3.x unittest documentation.
Update for 2021:
Lots of modern Python projects use more advanced tools like pytest. For example, pull down matplotlib or scikit-learn and you will see they both use it. (A minimal invocation is sketched below.)
It is important to know about these newer tools, because when you have more than 7000 tests you need:
more advanced ways to summarize passes, skips, warnings, and errors
easy ways to see how tests failed
percent complete as the run progresses
total run time
ways to generate a test report
etc.
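For instance, pytest needs no suite code at all; from the project root (assuming the default test_*.py / *_test.py naming conventions):
pip install pytest
pytest                      # discovers and runs test files automatically
pytest -v --durations=10    # verbose output plus the 10 slowest tests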
In Python 3, if you're using unittest.TestCase:
You must have an __init__.py file (empty or otherwise) in your test directory (which must be named test/).
Your test files inside test/ must match the pattern test_*.py.
They can be inside a subdirectory under test/; those subdirs can be named anything, but they all need to contain an __init__.py file.
Then you can run all the tests with:
python -m unittest
Done! A solution in less than 100 lines. Hopefully another Python beginner saves time by finding this.
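A layout satisfying those rules might look like this (file names illustrative); running python -m unittest from the project root then picks up both test files:
project/
    test/
        __init__.py
        test_basics.py
        api/
            __init__.py
            test_endpoints.py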
You could use a test runner that would do this for you. nose is very good for example. When run, it will find tests in the current tree and run them.
Updated:
Here's some code from my pre-nose days. You probably don't want the explicit list of module names, but maybe the rest will be useful to you.
import unittest

testmodules = [
    'cogapp.test_makefiles',
    'cogapp.test_whiteutils',
    'cogapp.test_cogapp',
]

suite = unittest.TestSuite()

for t in testmodules:
    try:
        # If the module defines a suite() function, call it to get the suite.
        mod = __import__(t, globals(), locals(), ['suite'])
        suitefn = getattr(mod, 'suite')
        suite.addTest(suitefn())
    except (ImportError, AttributeError):
        # else, just load all the test cases from the module.
        suite.addTest(unittest.defaultTestLoader.loadTestsFromName(t))

unittest.TextTestRunner().run(suite)
This is now possible directly from unittest: unittest.TestLoader.discover.
import unittest

loader = unittest.TestLoader()
start_dir = 'path/to/your/test/files'
suite = loader.discover(start_dir)

runner = unittest.TextTestRunner()
runner.run(suite)
Well, by studying the code above a bit (specifically using TextTestRunner and defaultTestLoader), I was able to get pretty close. Eventually I fixed my code by just passing all the test suites to a single TestSuite constructor, rather than adding them "manually", which fixed my other problems. So here is my solution.
import glob
import unittest

test_files = glob.glob('test_*.py')
module_strings = [test_file[0:len(test_file)-3] for test_file in test_files]
suites = [unittest.defaultTestLoader.loadTestsFromName(module) for module in module_strings]
test_suite = unittest.TestSuite(suites)
test_runner = unittest.TextTestRunner().run(test_suite)
Yeah, it is probably easier to just use nose than to do this, but that is beside the point.
If you want to run all the tests from various test case classes and you're happy to specify them explicitly then you can do it like this:
from unittest import TestLoader, TextTestRunner, TestSuite
from uclid.test.test_symbols import TestSymbols
from uclid.test.test_patterns import TestPatterns

if __name__ == "__main__":
    loader = TestLoader()
    tests = [
        loader.loadTestsFromTestCase(test)
        for test in (TestSymbols, TestPatterns)
    ]
    suite = TestSuite(tests)
    runner = TextTestRunner(verbosity=2)
    runner.run(suite)
where uclid is my project and TestSymbols and TestPatterns are subclasses of TestCase.
I have used the discover method and an overloading of load_tests to achieve this result in a (minimal, I think) number of lines of code:
import unittest
from unittest import TestSuite

def load_tests(loader, tests, pattern):
    '''Discover and load all unit tests in all files named ``*_tests.py`` in ``./src/``.'''
    suite = TestSuite()
    for all_test_suite in unittest.defaultTestLoader.discover('src', pattern='*_tests.py'):
        for test_suite in all_test_suite:
            suite.addTests(test_suite)
    return suite

if __name__ == '__main__':
    unittest.main()
Execution gives something like:
Ran 27 tests in 0.187s
OK
I tried various approaches, but all seemed flawed or required me to make up some code, which is annoying. There is a convenient way under Linux, though: simply find every test matching a certain pattern and invoke each one.
find . -name 'Test*py' -exec python '{}' \;
and most importantly, it definitely works.
In case of a packaged library or application, you don't want to do it. setuptools will do it for you.
To use this command, your project’s tests must be wrapped in a unittest test suite by either a function, a TestCase class or method, or a module or package containing TestCase classes. If the named suite is a module, and the module has an additional_tests() function, it is called and the result (which must be a unittest.TestSuite) is added to the tests to be run. If the named suite is a package, any submodules and subpackages are recursively added to the overall test suite.
Just tell it where your root test package is, like:
setup(
    # ...
    test_suite='somepkg.test'
)
And run python setup.py test.
File-based discovery may be problematic in Python 3 unless you avoid relative imports in your test suite, because discover uses file imports. Even though it supports an optional top_level_dir, I had some infinite recursion errors. So a simple solution for non-packaged code is to put the following in the __init__.py of your test package (see the load_tests protocol).
import unittest

from . import foo, bar

def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromModule(foo))
    suite.addTests(loader.loadTestsFromModule(bar))
    return suite
This is an old question, but what worked for me now (in 2019) is:
python -m unittest *_test.py
All my test files are in the same folder as the source files and they end with _test.
I use PyDev/LiClipse and hadn't really figured out how to run all tests at once from the GUI. (Edit: you right-click the root test folder and choose Run as -> Python unit-test.)
This is my current workaround:
import unittest

def load_tests(loader, tests, pattern):
    return loader.discover('.')

if __name__ == '__main__':
    unittest.main()
I put this code in a module called all in my test directory. If I run this module as a unittest from LiClipse then all tests are run. If I ask to only repeat specific or failed tests then only those tests are run. It doesn't interfere with my commandline test runner either (nosetests) -- it's ignored.
You may need to change the arguments to discover based on your project setup.
Based on the answer of Stephen Cagle I added support for nested test modules.
import fnmatch
import os
import unittest

def all_test_modules(root_dir, pattern):
    test_file_names = all_files_in(root_dir, pattern)
    return [path_to_module(str) for str in test_file_names]

def all_files_in(root_dir, pattern):
    matches = []
    for root, dirnames, filenames in os.walk(root_dir):
        for filename in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(root, filename))
    return matches

def path_to_module(py_file):
    return strip_leading_dots(replace_slash_by_dot(strip_extension(py_file)))

def strip_extension(py_file):
    return py_file[0:len(py_file) - len('.py')]

def replace_slash_by_dot(str):
    return str.replace('\\', '.').replace('/', '.')

def strip_leading_dots(str):
    while str.startswith('.'):
        str = str[1:len(str)]
    return str

module_names = all_test_modules('.', '*Tests.py')
suites = [unittest.defaultTestLoader.loadTestsFromName(mname) for mname
          in module_names]

testSuite = unittest.TestSuite(suites)

runner = unittest.TextTestRunner(verbosity=1)
runner.run(testSuite)
The code searches all subdirectories of . for *Tests.py files which are then loaded. It expects each *Tests.py to contain a single class *Tests(unittest.TestCase) which is loaded in turn and executed one after another.
This works with arbitrarily deep nesting of directories/modules, but each directory in between needs to contain at least an empty __init__.py file. This allows the loader to find the nested modules by replacing slashes (or backslashes) with dots (see replace_slash_by_dot).
I just created a discover.py file in my base test directory and added import statements for everything in my subdirectories. Then discover is able to find all my tests in those directories by running it on discover.py:
python -m unittest discover ./test -p '*.py'
# /test/discover.py
import unittest

from test.package1.mod1 import XYZTest
from test.package1.package2.mod2 import ABCTest
...

if __name__ == "__main__":
    unittest.main()
I encountered the same issue.
The solution is to add an empty __init__.py to each folder and use python -m unittest discover -s.
Project Structure
tests/
    __init__.py
    domain/
        value_object/
            __init__.py
            test_name.py
        __init__.py
    presentation/
        __init__.py
        test_app.py
And running the command
python -m unittest discover -s tests/domain
To get the expected outcome
.
----------------------------------------------------------------------
Ran 1 test in 0.007s
Because test discovery is a whole topic of its own, there are dedicated frameworks for it:
nose
Py.Test
More reading here : https://wiki.python.org/moin/PythonTestingToolsTaxonomy
This Bash script will execute the Python unittest test directory from ANYWHERE in the file system, no matter what working directory you are in: its working directory will always be where that test directory is located.
ALL TESTS, independent of $PWD
The unittest module is sensitive to your current directory unless you tell it where to look (using the discover -s option).
This is useful when staying in the ./src or ./example working directory and you need a quick overall unit test:
#!/bin/bash
this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"
python -m unittest discover -s "$readlink"/test -v
SELECTED TESTS, independent of $PWD
I name this utility file: runone.py and use it like this:
runone.py <test-python-filename-minus-dot-py-fileextension>
#!/bin/bash
this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"
(cd "$dirname"/test; python -m unittest $1)
There is no need for a test/__init__.py file to burden your package with memory overhead during production.
My code is not in a package, and as mentioned elsewhere on this page, that causes issues with discovery. So I used the following solution instead. All the test results are put into a given output folder.
RunAllUT.py:
"""
The given script is executing all the Unit Test of the project stored at the
path %relativePath2Src% currently fixed coded for the given project.
Prerequired:
- Anaconda should be install
- For the current user, an enviornment called "mtToolsEnv" should exists
- xmlrunner Library should be installed
"""
import sys
import os
import xmlrunner
from Repository import repository
relativePath2Src="./../.."
pythonPath=r'"C:\Users\%USERNAME%\.conda\envs\YourConfig\python.exe"'
outputTestReportFolder=os.path.dirname(os.path.abspath(__file__))+r'\test-reports' #subfolder in current file path
class UTTesting():
"""
Class tto run all the UT of the project
"""
def __init__(self):
"""
Initiate instance
Returns
-------
None.
"""
self.projectRepository = repository()
self.UTfile = [] #List all file
def retrieveAllUT(self):
"""
Generate the list of UT file in the project
Returns
-------
None.
"""
print(os.path.realpath(relativePath2Src))
self.projectRepository.retriveAllFilePaths(relativePath2Src)
#self.projectRepository.printAllFile() #debug
for file2scan in self.projectRepository.devfile:
if file2scan.endswith("_UT.py"):
self.UTfile.append(file2scan)
print(self.projectRepository.devfilepath[file2scan]+'/'+file2scan)
def runUT(self,UTtoRun):
"""
Run a single UT
Parameters
----------
UTtoRun : String
File Name of the UT
Returns
-------
None.
"""
print(UTtoRun)
if UTtoRun in self.projectRepository.devfilepath:
UTtoRunFolderPath=os.path.realpath(os.path.join(self.projectRepository.devfilepath[UTtoRun]))
UTtoRunPath = os.path.join(UTtoRunFolderPath, UTtoRun)
print(UTtoRunPath)
#set the correct execution context & run the test
os.system(" cd " + UTtoRunFolderPath + \
" & " + pythonPath + " " + UTtoRunPath + " " + outputTestReportFolder )
def runAllUT(self):
"""
Run all the UT contained in self
The function "retrieveAllUT" sjould ahve been performed before
Returns
-------
None.
"""
for UTfile in self.UTfile:
self.runUT(UTfile)
if __name__ == "__main__":
undertest=UTTesting()
undertest.retrieveAllUT()
undertest.runAllUT()
Specific to my project, I have a class that I also use in other scripts. It might be overkill for your use case.
Repository.py
import os

class repository():
    """
    Class that describes the folders and files in a repository
    """
    def __init__(self):
        """
        Initiate instance

        Returns
        -------
        None.
        """
        self.devfile = []      # list of all files
        self.devfilepath = {}  # all file paths

    def retriveAllFilePaths(self, pathrepo):
        """
        Retrieve all files and their paths into the class

        Parameters
        ----------
        pathrepo : Path used for the parsing

        Returns
        -------
        None.
        """
        for path, subdirs, files in os.walk(pathrepo):
            for file_name in files:
                self.devfile.append(file_name)
                self.devfilepath[file_name] = path

    def printAllFile(self):
        """
        Display all files with their paths

        Returns
        -------
        None.
        """
        for file_loop in self.devfile:
            print(self.devfilepath[file_loop] + '/' + file_loop)
In your test files, you need to have a main like this:
if __name__ == "__main__":
    import xmlrunner
    import sys
    if len(sys.argv) > 1:
        outputFolder = sys.argv.pop()  # avoid conflict with unittest.main
    else:
        outputFolder = r'test-reports'
    print("Report will be created and stored here: " + outputFolder)
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output=outputFolder))
Here is my approach by creating a wrapper to run tests from the command line:
#!/usr/bin/env python3
import os, sys, unittest, argparse, inspect, logging

if __name__ == '__main__':
    # Parse arguments.
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("-?", "--help", action="help", help="show this help message and exit")
    parser.add_argument("-v", "--verbose", action="store_true", dest="verbose", help="increase output verbosity")
    parser.add_argument("-d", "--debug", action="store_true", dest="debug", help="show debug messages")
    parser.add_argument("-h", "--host", action="store", dest="host", help="Destination host")
    parser.add_argument("-b", "--browser", action="store", dest="browser", help="Browser driver.", choices=["Firefox", "Chrome", "IE", "Opera", "PhantomJS"])
    parser.add_argument("-r", "--reports-dir", action="store", dest="dir", help="Directory to save screenshots.", default="reports")
    parser.add_argument('files', nargs='*')
    args = parser.parse_args()

    # Load files from the arguments.
    for filename in args.files:
        exec(open(filename).read())

    # See: http://codereview.stackexchange.com/q/88655/15346
    def make_suite(tc_class):
        testloader = unittest.TestLoader()
        testnames = testloader.getTestCaseNames(tc_class)
        suite = unittest.TestSuite()
        for name in testnames:
            suite.addTest(tc_class(name, cargs=args))
        return suite

    # Add all tests.
    alltests = unittest.TestSuite()
    for name, obj in inspect.getmembers(sys.modules[__name__]):
        if inspect.isclass(obj) and name.startswith("FooTest"):
            alltests.addTest(make_suite(obj))

    # Set up logger.
    verbose = bool(os.environ.get('VERBOSE', args.verbose))
    debug = bool(os.environ.get('DEBUG', args.debug))
    if verbose or debug:
        logging.basicConfig(stream=sys.stdout)
        root = logging.getLogger()
        root.setLevel(logging.INFO if verbose else logging.DEBUG)
        ch = logging.StreamHandler(sys.stdout)
        ch.setLevel(logging.INFO if verbose else logging.DEBUG)
        ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s'))
        root.addHandler(ch)
    else:
        logging.basicConfig(stream=sys.stderr)

    # Run tests.
    result = unittest.TextTestRunner(verbosity=2).run(alltests)
    sys.exit(not result.wasSuccessful())
For the sake of simplicity, please excuse my non-PEP 8 coding standards.
Then you can create a BaseTest class for components common to all your tests, so each of your tests would simply look like:
from BaseTest import BaseTest

class FooTestPagesBasic(BaseTest):
    def test_foo(self):
        driver = self.driver
        driver.get(self.base_url + "/")
To run, simply specify the tests as part of the command-line arguments, e.g.:
./run_tests.py -h http://example.com/ tests/**/*.py
