I am new to unit testing and trying to write tests for my code. In my test file, I have defined a test class with two small tests. However, when I run the file, only the first test runs.
Furthermore, the file I am importing the functions under test from also generates some plots (in functions separate from the ones I am testing), and those plots are now being generated before the tests are run.
Is there a way to run only the tests, without executing the rest of the file I'm importing the functions from?
This is my code for the test class:
import unittest
from Expense import set_day_of_the_week
from Expense import add_two_expenses

class Heart_Rate_Model_Tests(unittest.TestCase):

    def test_assign_initial_values(self):
        day_returned = set_day_of_the_week('Monday')
        self.assertEqual(day_returned, 'Monday')

    def test_add_two_expenses(self):
        food_expense = 12
        drink_expense = 5
        expense_sum = add_two_expenses(food_expense, drink_expense)
        self.assertEqual(17, expense_sum)

if __name__ == '__main__':
    unittest.main()
The output obtained from the tests is:
Ran 1 test in 0.001s
OK
I have made changes to the first test to make it fail, and it does fail, so I'm sure it is the only test being run: changes made to it are reflected in the output.
Writing a new test file fixed the issue. I'm still not sure what caused it; I'm assuming it was a configuration error on my side. I'm leaving this here in case somebody runs into a similar issue in the future.
For everybody who has the same problem in the future:
Check your run configurations. There might be just one specific test chosen when you click on 'run'.
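As for the plots being generated before the tests run: importing a module executes all of its top-level code, so if Expense.py draws plots at module level, they will appear whenever the test file imports from it. A minimal sketch of the usual fix, with a hypothetical plotting helper, is to guard that code behind a __main__ check in Expense.py:

# Expense.py (sketch)
import matplotlib.pyplot as plt

# ... set_day_of_the_week, add_two_expenses, etc. defined here ...

def generate_plots():
    # hypothetical name for whatever code currently draws the plots
    plt.plot([1, 2, 3])
    plt.show()

if __name__ == '__main__':
    # Runs only when Expense.py is executed directly,
    # not when the test file imports from it.
    generate_plots()

With the guard in place, running the test file no longer triggers the plotting code.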
I want to write test functions for my code and decided to use pytest. I had a look into this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script written by me, so I made an example that reproduces the same problem but does not rely on my other code.
import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10

test_value(example)
When I run my script with this toy example, the print shows a function object:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run your script file with pytest file.py.
Fixtures are resolved automatically by pytest when test functions request them as parameters. In your example you call the test function directly, so example is passed in as a plain function object rather than the fixture's value.
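For reference, a minimal corrected version of the toy example (the direct call removed, everything else unchanged):

import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):
    # pytest injects the fixture's return value here
    assert example == 10

Running pytest file.py now collects test_value, resolves the example fixture, and the test passes.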
I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
1. Add a test.
2. Run the tests (one fails because I haven't written the code to make it pass).
3. Implement the behaviour.
4. Run only the test that failed last time.
5. Fix the silly error I made when implementing the code.
6. Run only the failing test, which passes this time.
7. Run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
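Equivalently, assuming test_whatever.py is importable from the current directory, the unittest module's own command-line interface accepts the same dotted path:

python3 -m unittest test_whatever.MyTest.test_something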
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored, because the default test loader only collects methods whose names start with 'test'. One thing you can do is a quick find and replace in vim: replace every "test_" with "_test_", then find the one test that failed and remove its leading underscore.
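For example, the substitution could look something like this in vim (a sketch; adjust the pattern if your method names differ):

:%s/def test_/def _test_/g

and then undo the rename on the one failing test you want to keep running.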
Just run the tests with the --last-failed option (you will need pytest, which can also run unittest-style tests).
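A sketch of that workflow, assuming the file is named test_whatever.py:

pytest test_whatever.py                # full run; the new test fails
pytest --last-failed test_whatever.py  # re-run only the failing test
pytest test_whatever.py                # full run again once it passes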
I'm new to python tests so don't hesitate to provide any obvious information.
Basically I want to do some RESTful tests using python, and found the httpretty and sure libraries which look really nice.
I have a python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")

    response = requests.get("http://localhost:8090/test.json")
    expect(response.json()).to.equal({"status": "ok"})
Which is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty
My question is; how do I simply run this test to see it either passing or failing? I tried just executing it using ./pythonFile but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
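For instance, two of the commonly used flags (a sketch):

nosetests -v myfile.py   # verbose: print each test name as it runs
nosetests -s myfile.py   # don't capture stdout, so print() output is visible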
Python includes similar functionality in the standard unittest module (the newer features are also available for Python 2 as a backport called unittest2). You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")

        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now provide output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your method? Or does the decorator mean you don't have to call it explicitly?
If I call your method, it seems to work. If I change the value on one side of the expect, it complains properly about the values not matching.
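In other words, something like this at the bottom of the script makes it runnable directly, bypassing any test framework (a sketch):

if __name__ == '__main__':
    RestTest()
    print("ok")  # reached only if the expectation did not raise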
I have a problem with PyCharm 3.0.1: I can't run basic unittests.
Here is my code:
import unittest
from MysqlServer import MysqlServer

class MysqlServerTest(unittest.TestCase):
    def setUp(self):
        self.mysqlServer = MysqlServer("ip", "username", "password", "db", port)

    def test_canConnect(self):
        self.mysqlServer.connect()
        self.fail()

if __name__ == '__main__':
    unittest.main()
Here is all the output PyCharm gives me:
Unable to attach test reporter to test framework or test framework quit unexpectedly
It also says
AttributeError: class TestLoader has no attribute '__init__'
And the event log:
2:14:28 PM Empty test suite
The problem does not occur when I run the Python file manually (with PyCharm, as a script):
Ran 1 test in 0.019s
FAILED (failures=1)
which is expected, since I make the test fail on purpose. I am a bit clueless about what is going on. Here is more information:
Setting->Python Integrated Tools->Package requirements file: <PROJECT_HOME>/src/test
Default test runner: Unittests
pyunit 1.4.1 Is installed
EDIT: The same thing happens with the basic usage example from unittest:
import unittest

class IntegerArithmeticTestCase(unittest.TestCase):
    def testAdd(self):  # test method names begin with 'test'
        self.assertEqual((1 + 2), 3)
        self.assertEqual(0 + 1, 1)

    def testMultiply(self):
        self.assertEqual((0 * 10), 0)
        self.assertEqual((5 * 8), 40)

if __name__ == '__main__':
    unittest.main()
Although this wasn't the case for the original poster, I'd like to note that another thing that will cause this is test functions whose names don't begin with the word 'test'.
class TestSet(unittest.TestCase):
    def test_will_work(self):
        pass

    def will_not_work(self):
        pass
This is probably because you did not set your testing framework correctly in the settings dialogue.
Definitely a PyCharm thing; repeating from above:
Go to Run --> Edit Configurations, select the stale instances of the test, and press the red minus button.
I had the exact same problem. It turned out that whether an individual test was recognized depended on the file name. In my case, PyCharm did not recognize test_calculate_kpi.py as a test; renamed to test_calculate_kpis.py, it was immediately recognized.
4 steps to generate html reports with PyCharm and most default test runners (py.test, nosetest, unittest etc.):
make sure you give your test methods a prefix 'test' (as stated before by others), e.g. def test_run1()
the widespread example code from the test report package's documentation is:
import unittest
import HtmlTestRunner

class TestGoodnessOfFitTests(unittest.TestCase):
    def test_run1(self):
        ...

if __name__ == '__main__':
    unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner(output='t.html'))
This code is usually located in a test class file which contains all the unittests and the "main"-catcher code. This code continuously yielded the warning Empty test suite for me. Therefore, remove all the if __name__ ... code. The file now only contains the TestGoodnessOfFitTests class. Now, additionally
create a new main.py in the same directory of the test class file and use the following code:
import unittest
import HtmlTestRunner
# import the test class from its module; this path is an assumption
# based on the sample output below
from tests.evaluation_tests import TestGoodnessOfFitTests

test_class = TestGoodnessOfFitTests()
unittest.main(module=test_class,
              testRunner=HtmlTestRunner.HTMLTestRunner(output='t.html'))
Remove your old run configurations, right-click on your main.py and press Run 'main'. Verify correct settings under Preferences -> Python Integrated Tools -> Default test runner (in my case py.test and nose worked)
Output:
Running tests...
----------------------------------------------------------------------
test_gaussian_dummy_kolmogorov_cdf_1 (tests.evaluation_tests.TestGoodnessOfFitTests) ... OK (1.033877)s
----------------------------------------------------------------------
Ran 1 test in 0:00:01
OK
Generating HTML reports...
I had the same problem; I felt the workspace was not properly refreshed. I tried File -> Synchronize (Ctrl+Alt+Y), but that wasn't the solution. I just renamed my test Python file, ran the code again, and it started working fine.
I had the same problem. The file was named test_exercise_detectors.py (note the plural "detectors") and it was in a package named test_exercise_detectors. Changing the name of the file to test_exercise_detector.py (singular "detector") fixed the problem.
Adding
if __name__ == "__main__":
    unittest.main()
fixed the issue for me.
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class that derives from unittest.TestCase, I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test's parametrize functionality, I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins; instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
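For illustration, a minimal sketch of that workaround in a parametrized test (the test and parameter names are made up):

import pytest

@pytest.mark.parametrize("value", [1, 2, 3])
def test_feature(value):
    pytest.skip("Bug 1234: This does not work")
    assert value >= 1  # never reached while the bug is open

The string passed to pytest.skip ends up in the skip message of the junitxml report, which is what jenkins displays.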
I had a similar problem, except I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test it was or which module the skipped test was in.
To solve this (OK, hack around it), I went back to using the decorator on my test and added a dummy test, so that there is one test that runs and one test that gets skipped:
@pytest.mark.skip('SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if the suite will display in jenkins if one
    test is run and one is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.