Preventing unittest from calling sys.exit() - python

I'm having trouble preventing unittest from calling sys.exit(). I found the question "Unittest causing sys.exit()" while searching for an answer, and following it I modified the code to
unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(run_tests.Compare))
and put it in my main. I believe the only thing I changed is that my test is in a separate file named run_tests.py. It looks like this:
import unittest
from list_of_globals import globvar

value1 = globvar.relerror
value2 = globvar.tolerance

class Compare(unittest.TestCase):
    def __init__(self, value1, value2):
        super(Compare, self).__init__()
        self.value1 = value1
        self.value2 = value2

    def runTest(self):
        self.assertTrue(self.value1 < self.value2)
When I run my main function I receive the following error
File "./main.py", line 44, in <module> unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(run_tests.Compare))
File "/usr/lib64/python2.6/unittest.py", line 550, in loadTestsFromTestCase
return self.suiteClass(map(testCaseClass, testCaseNames))
TypeError: __init__() takes exactly 3 arguments (2 given)
I don't understand why this error occurs or how to fix it. Any help would be much appreciated. I am using Python 2.6 on Linux.

Your problem is with your unit test class.
When writing unit tests you are not meant to override __init__: unittest uses the __init__ method to tell each TestCase instance which test it should run (the loader instantiates the class once per test method, passing the method name as an argument). For test setup, such as setting variables, you should instead write a method called setUp, which takes no parameters besides self. For instance:
class Compare(unittest.TestCase):
    def setUp(self):
        self.value1 = globvar.relerror
        self.value2 = globvar.tolerance

    def runTest(self):
        self.assertTrue(self.value1 < self.value2)
The problem in the linked question is that unittest.main exits Python after all tests have been run. That was not the desired outcome for that user, who was running tests from IPython Notebook, which is essentially an enhanced interpreter, so the notebook kernel was also terminated by sys.exit. That is, exit is called outside of the tests rather than in them; I'd originally assumed a function under test was calling exit.
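For completeness, here is a minimal sketch of driving the fixed Compare class without sys.exit() ever being called; it mirrors the TextTestRunner invocation from the question (unittest.main(exit=False) is an equivalent alternative, but note the exit flag only exists on Python 2.7+):

import unittest
import run_tests  # the module containing the fixed Compare TestCase

# TextTestRunner().run() reports results but, unlike unittest.main(),
# never calls sys.exit()
suite = unittest.TestLoader().loadTestsFromTestCase(run_tests.Compare)
result = unittest.TextTestRunner().run(suite)

# inspect the outcome instead of exiting the process
print(result.wasSuccessful())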

Related

Why do I get an AttributeError when performing a unit test in Jupyter Notebook?

To solve an exercise in Jupyter Notebook, I need to perform a unit test on a function that I called city_function:
def city_function(city, country):
    output = city.title() + ', ' + country.title()
    return output
This function is stored in "city_functions.py". The code that performs the unit test is stored in "test_cities2.ipynb". And I tried the following code to do the unit test:
import unittest
from city_functions import city_function

class CityCountryTestCase(unittest.TestCase):
    # Verify that city_function works
    def test_city_country_function(self):
        output = city_function('lisbon', 'portugal')
        self.assertEqual(output, 'Lisbon, Portugal')

unittest.main()
And I got an AttributeError of the form: AttributeError: module '__main__' has no attribute ...
What can I do to solve this problem?
There is a good article that describes your problem:
The reason is that unittest.main looks at sys.argv, and the first parameter is whatever started IPython or Jupyter, hence the error about the kernel connection file not being a valid attribute. Passing an explicit list to unittest.main prevents IPython and Jupyter from looking at sys.argv. Passing exit=False prevents unittest.main from shutting down the kernel process.
Your last line should be like this:
unittest.main(argv=['first-arg-is-ignored'], exit=False)
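Putting it together, a minimal sketch of the corrected notebook cell (class and function names as in the question):

import unittest
from city_functions import city_function

class CityCountryTestCase(unittest.TestCase):
    def test_city_country_function(self):
        output = city_function('lisbon', 'portugal')
        self.assertEqual(output, 'Lisbon, Portugal')

# argv bypasses the notebook's own command-line arguments, and
# exit=False keeps unittest.main() from killing the Jupyter kernel
unittest.main(argv=['first-arg-is-ignored'], exit=False)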

Python 3.x UnitTest module disable SystemExit stack trace output

I'm learning how to write unit tests using the unittest module and have jumped into the deep end with metaprogramming (I believe it's also known as monkey patching), but I have a stack trace that is printed out during a failed assertion test.
<output cut for brevity>
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 203, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 135, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: SystemExit not raised
It seems like I should be getting this error, as the test should fail, but I would prefer the output to be a bit more presentable and exclude the whole stack trace along with the assertion error.
Here's the code that uses a context manager to check for SystemExit:
with self.assertRaises(SystemExit) as cm:
    o_hue_user.getHueLoginAuthentication()
self.assertNotEqual(cm.exception.code, 0)
The getHueLoginAuthentication method does execute exit(1) upon realizing the user name or password is incorrect, but I need to eliminate the stack trace being printed out.
BTW, I've searched this and other sites and cannot find an answer that seems to have a simple or complete solution.
Thanks!
Since I'm new to this forum, I'm not sure if this is the correct way to respond to the answer...
I can try to put in the key areas of code, but I can't reveal too much, as I'm working for a financial institution and they have strict rules about sharing internal work.
To answer your question, this is the code that executes the exit():
authentication_fail_check = bool(re.search('Invalid username or password', r.text))
if r.status_code != 200 or authentication_fail_check:
    self.o_logging_utility.logger.error("Hue Login failed...")
    exit(1)
I use PyCharm to debug. The key point of this code is to signal an unsuccessful execution so that I can stop the run when this error occurs. I don't think it's necessary to use a try block here, but I would not think it would matter. What's your professional opinion?
When the assertion condition is met, I get a pass and no stack trace. All this appears to be telling me is that the assertion was not met. My assertion tests are working. All I want to do is get rid of the stack trace and just print the last line: "AssertionError: SystemExit not raised"
How do I get rid of the stack trace and leave only that last line of the output as feedback?
Thanks!
I meant to say thank you for the warm welcome as well, Don.
BTW, I can post the test code, as it is not part of the main code base. This is my first metaprogramming (monkey patching) unit test and actually my second unit test overall. I'm still struggling with going to the trouble of building code that tells me I get a particular result that I know I will get anyway. Taking a function that returns a false boolean as an example: if I write code that executes it with parameters that I know for a fact will return false, then why build code that tells me it will return false?
I'm struggling with how to design good tests that won't tell me the blatantly obvious.
So far, all I have managed to do is use a unit test to tell me whether, when I instantiate an object and execute a function, the login was successful or not. I can change the inputs to cause it to fail, but I already know it will fail. If I understand unit tests correctly, testing whether a login is successful is more of an integration test than a unit test.
However, the problem is that this particular class gets its parameters from a configuration file and sets instance variables for the specific connection. In the test code, I have two sets of test data representing a good login and a bad login. I know unit tests are more autonomous, in that the function can be called with parameters and tested independently, but this is not how this code works. So I'm at a loss as to how to design an efficient and useful test.
This is the test code for the specific class:
import unittest
from HueUser import *

test_data = \
{
    "bad_login": {"hue_protocol": "https",
                  "hue_server": "my.server.com",
                  "hue_service_port": "1111",
                  "hue_auth_url": "/accounts/login/?next=/",
                  "hue_user": "baduser",
                  "hue_pw": "badpassword"},
    "good_login": {"hue_protocol": "https",
                   "hue_server": "my.server.com",
                   "hue_service_port": "1111",
                   "hue_auth_url": "/accounts/login/?next=/",
                   "hue_user": "mouser",
                   "hue_pw": "good password"}
}

def hue_test_template(*args):
    def foo(self):
        self.assert_hue_test(*args)
    return foo

class TestHueUserAuthentication(unittest.TestCase):
    def assert_hue_test(self, o_hue_user):
        with self.assertRaises(SystemExit) as cm:
            o_hue_user.getHueLoginAuthentication()
        self.assertNotEqual(cm.exception.code, 0)

for behaviour, test_cases in test_data.items():
    o_hue_user = HueUser()
    for name in test_cases:
        setattr(o_hue_user, name, test_cases[name])
    test_name = "test_getHueLoginAuthentication_{0}".format(behaviour)
    test_case = hue_test_template(o_hue_user)
    setattr(TestHueUserAuthentication, test_name, test_case)
Let me know how to respond to answers, or whether I should just edit my post?
Thanks!
Welcome to Stack Overflow, Robert. It would be really helpful if you included a full example so other people can help you find the problem.
With the information you've given, I would guess that getHueLoginAuthentication() isn't actually raising the error you think it is. Try using a debugger to follow what it's doing, or put a print statement in just before it calls exit().
Here's a full example that shows how assertRaises() works:
from unittest import TestCase

def foo():
    exit(1)

def bar():
    pass

class FooTest(TestCase):
    def test_foo(self):
        with self.assertRaises(SystemExit):
            foo()

    def test_bar(self):
        with self.assertRaises(SystemExit):
            bar()
Here's what happens when I run it:
$ python3.6 -m unittest scratch.py
F.
======================================================================
FAIL: test_bar (scratch.FooTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scratch.py", line 19, in test_bar
bar()
AssertionError: SystemExit not raised
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)

How to decide where Python debugger stops and which line is to be blamed?

Background:
I write Squish GUI tests in Python. I tried to make test code as Pythonic and DRY as I could and hence I moved all repeating code to separate classes / modules.
Problem definition:
A test.verify or assert statement makes the debugger stop at the very line where the statement is, which in most cases is inside a module holding the details of a single test step. That line is what Eclipse shows during a manual run and what automated tests report in Jenkins.
To actually see what failed in a test, it would be far better to stop the debugger at the invocation point of the procedure containing the asserts. Then the tester / GUI developer can spot which actions on the GUI led to the problem and what was checked.
Example:
test_abstract.py
class App():
    def open_file(self, filename):
        pass  # example

    def test_file_content(self, content):
        # squish magic to get file content from textbox etc.
        # ...
        test.verify(content in textBoxText)
test_file_opening.py
def main():
    app = App()
    app.open_file('filename.txt')
    app.test_file_content('lorem')
As the test fails on the test.verify() invocation, the debugger stops and points to the test_abstract.py file. That actually says nothing about the test steps that led to the failure.
Is there a way to tell the debugger to ignore the direct place of the test failure and show where the procedure containing the test was invoked instead? I'm looking for an elegant way that would not need too much code in the generic test file itself.
A not-ideal solution which works:
For now I'm not using test.verify inside the abstract modules and instead invoke it in the particular test case code. Generalized test functions return a tuple (test_result, test_descriptive_message_with_error) which is unpacked with *:
def test_file_content(content):
    # test code
    return (result, 'Test failed because...')
and test case code contains:
test.verify(*test_file_content('lorem'))
which works fine, but each and every test case has to contain a lot of test.verify(*..., and test developers have to remember to do it. Not to mention that it looks WET (not DRY).
Yes! If you have access to Squish 6, there is new functionality to do exactly that. The fixateResultContext() function causes all results to be rewritten such that they appear to originate at an ancestor frame. See the documentation.
If you are using Python, this can be wrapped into a handy context manager:
def resultsReportedAtCallsite(ancestorLevel=1):
    class Ctx:
        def __enter__(self):
            test.fixateResultContext(ancestorLevel + 1)

        def __exit__(self, exc_type, exc_value, traceback):
            test.restoreResultContext()
    return Ctx()

def libraryFunction():
    with resultsReportedAtCallsite():
        test.compare("Apples", "Oranges")
Any later call to libraryFunction() that fails will point at the line of code containing libraryFunction(), and not the test.compare() within.

Error when running Python parameterized test method

IDE: PyCharm Community Edition 3.1.1
Python: 2.7.6
I'm using DDT for test parameterization: http://ddt.readthedocs.org/en/latest/example.html
I want to choose and run a parameterized test method from a test class in PyCharm -> see the example:
from unittest import TestCase
from ddt import ddt, data

@ddt
class Test_parameterized(TestCase):
    def test_print_value(self):
        print 10
        self.assertIsNotNone(10)

    @data(10, 20, 30, 40)
    def test_print_value_parametrized(self, value):
        print value
        self.assertIsNotNone(value)
When I navigate to the first test method, test_print_value, in the code and hit Ctrl+Shift+F10 (or use the Run 'Unittest test_print...' option from the context menu),
the test is executed.
When I try the same with the parameterized test I get an error:
Test framework quit unexpectedly
And output contains:
/usr/bin/python2 /home/s/App/pycharm-community-3.1.1/helpers/pycharm/utrunner.py
/home/s/Documents/Py/first/fib/test_parametrized.py::Test_parameterized::test_print_value_parametrized true
Testing started at 10:35 AM ...
Traceback (most recent call last):
File "/home/s/App/pycharm-community-3.1.1/helpers/pycharm/utrunner.py", line 148, in <module>
testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
AttributeError: 'TestLoader' object has no attribute 'makeTest'
Process finished with exit code 1
However, when I run all tests in the class (by navigating to the test class name in the code and using the mentioned run option), all parameterized and non-parameterized tests are executed together without errors.
The problem is how to independently run a parameterized method from the test class - the workaround of putting one parameterized test per test class is a rather messy solution.
Actually, this is an issue in PyCharm's utrunner.py, which runs the unittests. If you are using DDT, the @ddt and @data wrappers are responsible for creating separate tests for each data entry. In the background these tests get different names, e.g.
@ddt
class MyTestClass(unittest.TestCase):
    @data(1, 2)
    def test_print(self, command):
        print command
This would create tests named:
- test_print_1_1
- test_print_2_2
When you try to run one test from the class (Right Click -> Run 'Unittest test_print'), PyCharm has a problem loading the tests test_print_1_1 and test_print_2_2, because it tries to load a test named test_print.
When you look at the code of utrunner.py:
if a[1] == "":
# test function, not method
all.addTest(testLoader.makeTest(getattr(module, a[2])))
else:
testCaseClass = getattr(module, a[1])
try:
all.addTest(testCaseClass(a[2]))
except:
# class is not a testcase inheritor
all.addTest(
testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
and debug it, you will see the issue.
OK, so my fix is to load the proper tests from the class. It is just a workaround and it is not perfect; however, as DDT adds each test case as another method on the class, it is hard to find a better way to detect the right test cases than comparing by string. So instead of:
try:
    all.addTest(testCaseClass(a[2]))
you can try to use:
try:
    all_tests = testLoader.getTestCaseNames(getattr(module, a[1]))
    for test in all_tests:
        if test.startswith(a[2]):
            if test.split(a[2])[1][1].isdigit():
                all.addTest(testLoader.loadTestsFromName(test, getattr(module, a[1])))
Checking whether a digit follows the main name is a workaround to exclude similar test cases such as:
test_print
test_print_another_case
But of course it would not exclude cases:
test_if_prints_1
test_if_prints_2
So in the worst case, if we haven't got a good naming convention, we will run similar tests; but in most cases it should just work for you.
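To make the filter concrete, here is a standalone sketch of the startswith-plus-digit check with the example names above hard-coded (matches is a hypothetical helper mirroring the utrunner.py workaround):

def matches(selected, candidate):
    # accept a candidate test name if it starts with the selected name
    # and the character right after the separator is a digit
    if not candidate.startswith(selected):
        return False
    rest = candidate.split(selected)[1]
    return len(rest) > 1 and rest[1].isdigit()

for name in ['test_print_1_1', 'test_print_2_2',
             'test_print_another_case', 'test_if_prints_1']:
    print('%s -> %s' % (name, matches('test_print', name)))
# test_print_1_1 -> True            (DDT-generated, runs)
# test_print_2_2 -> True            (DDT-generated, runs)
# test_print_another_case -> False  (similar name, excluded)
# test_if_prints_1 -> False         (different prefix, excluded)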
When I ran into this error, it was because I had implemented an __init__ function as follows:
def __init__(self):
    super(ClassInheritingTestCase, self).__init__()
When I changed it to the following, it worked properly:
def __init__(self, *args, **kwargs):
    super(ClassInheritingTestCase, self).__init__(*args, **kwargs)
The problem was caused by me not propagating the *args and **kwargs through properly.
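For instance, a minimal sketch of a TestCase with a custom __init__ that unittest can still load (the loader instantiates the class once per test, passing the test method name, so it must be forwarded; the extra attribute is a hypothetical stand-in for per-instance setup):

import unittest

class ClassInheritingTestCase(unittest.TestCase):
    def __init__(self, *args, **kwargs):
        # *args/**kwargs forward the test method name that the
        # loader passes in; dropping them breaks loading, as above
        super(ClassInheritingTestCase, self).__init__(*args, **kwargs)
        self.extra = 42  # hypothetical per-instance setup

    def test_something(self):
        self.assertEqual(self.extra, 42)

if __name__ == '__main__':
    unittest.main()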

Nose test script with command line arguments

I would like to be able to run a nose test script which accepts command line arguments. For example, something along these lines:
test.py
import nose, sys

def test():
    # do something with the command line arguments
    print sys.argv

if __name__ == '__main__':
    nose.runmodule()
However, whenever I run this with a command line argument, I get an error:
$ python test.py arg
E
======================================================================
ERROR: Failure: ImportError (No module named arg)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 368, in loadTestsFromName
module = resolve_name(addr.module)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", line 334, in resolve_name
module = __import__('.'.join(parts_copy))
ImportError: No module named arg
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Apparently, nose tries to do something with the arguments passed in sys.argv. Is there a way to make nose ignore those arguments?
Alright, I hate "why would you want to do that?" answers just as much as anyone, but I'm going to have to make one here. I hope you don't mind.
I'd argue that what you're trying to do isn't within the scope of the nose framework. Nose is intended for automated tests. If you have to pass in command-line arguments for a test to pass, then it isn't automated. Now, what you can do is something like this:
import sys

class test_something(object):
    def setUp(self):
        sys.argv[1] = 'arg'
        del sys.argv[2]  # remember that -s is in sys.argv[2], see below

    def test_method(self):
        print sys.argv
If you run that, you get this output:
[~] nosetests test_something.py -s
['/usr/local/bin/nosetests', 'arg']
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
(Remember to pass in the -s flag if you want to see what goes on stdout)
However, I'd probably still recommend against that, as it's generally a bad idea to mess with global state in automated tests if you can avoid it. What I would likely do is adapt whatever code I'm wanting to test to take an argv list. Then, you can pass in whatever you want during testing and pass in sys.argv in production.
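A minimal sketch of that refactoring (main and its argument handling are hypothetical stand-ins for whatever the code under test actually does):

import sys

def main(argv=None):
    # accept an explicit argv list; fall back to the real command
    # line only when none is given (i.e. in production)
    if argv is None:
        argv = sys.argv[1:]
    return argv[0] if argv else 'default'

def test_main_with_explicit_args():
    # the test controls the arguments; sys.argv is never touched
    assert main(['arg']) == 'arg'

def test_main_with_no_args():
    assert main([]) == 'default'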
UPDATE:
The reason why I need to do it is because I am testing multiple implementations of the same library. To test those implementations are correct I use a single nose script, that accepts as a command line argument the library that it should import for testing.
It sounds like you may want to try your hand at writing a nose plugin. It's pretty easy to do. Here are the latest docs.
You could use another means of getting stuff into your code:
import os
print os.getenv('KEY_THAT_MIGHT_EXIST', default_value)
Then just remember to set your environment before running nose.
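For instance, a sketch of a test module that picks the implementation to import from the environment (TEST_LIBRARY, the module names, and the connect attribute are all hypothetical):

import os

# set before running nose, e.g.:  TEST_LIBRARY=fast_impl nosetests test.py
LIBRARY_NAME = os.getenv('TEST_LIBRARY', 'default_impl')
lib = __import__(LIBRARY_NAME)

def test_library_has_expected_api():
    # every implementation is expected to expose the same interface
    assert hasattr(lib, 'connect')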
I think that is a perfectly acceptable scenario. I also needed to do something similar in order to run tests against different environments (dev, qa, prod, etc.), where I needed the right URLs and configurations for each one.
The solution I found was to use the nose-testconfig plugin (link here). It is not exactly passing command line arguments, but rather creating a config file with all your parameters and then passing this config file as an argument when you execute your nose tests.
The config file has the following format:
[group1]
env=qa
[urlConfig]
address=http://something
[dbConfig]
user=test
pass=test
And you can read the arguments using:
from testconfig import config
print(config['dbConfig']['user'])
For now I am using the following hack:
args = sys.argv[1:]
sys.argv = sys.argv[0:1]
which just reads the argument into a local variable, and then deletes all the additional arguments in sys.argv so that nose does not get confused by them.
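In context, a sketch of the hack applied to the test.py from the question; the stripping happens at import time, before nose.runmodule() inspects sys.argv:

import nose, sys

# stash our own arguments, then remove them from sys.argv so that
# nose does not try to interpret them as module names
args = sys.argv[1:]
sys.argv = sys.argv[0:1]

def test():
    # the saved arguments are still available here
    print args

if __name__ == '__main__':
    nose.runmodule()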
Just running nose and passing in parameters will not work, as nose will attempt to interpret the arguments as nose parameters, which produces the problems you are seeing.
I do not think nose supports parameter passing directly yet, but the nose-testconfig plug-in allows you to write tests like the one below:
from testconfig import config

def test_os_specific_code():
    os_name = config['os']['type']
    if os_name == 'nt':
        pass  # some nt specific tests
    else:
        pass  # tests for any other os
