I wrote some 'unittest' code, and it's great. I can see it on the CLI when I run it manually.
Now I want to hook it up to run automatically as part of a merge-to-master hook on my repository. I have everything set up with dummy code except for the part that grabs the results of the unittest run programmatically.
When I call unittest.main() to run them all, it throws a SystemExit. I've tried catching it and rerouting the standard output, but I wasn't able to get it to work, and it also feels like I'm doing it wrong. Is there an easier way to get the results of the unittests, like in a Python list of line strings, or even a more complicated result object?
Really for my purposes, I'm only interested in 100% pass or fail, and then showing that visual in the repository on a pull request to master, with a link to the full unittest result details.
I'm also not married to 'unittest' if some other Python unit test framework can be called and pass off results easily.
You can pass exit=False to unittest.main, and capture the return value. To run it from another script, or the interactive interpreter, you can specify a target module as well.
That gives us a TestProgram instance for the run. The internal unittest machinery makes a TestResult object available in the result attribute of that object.
That object will have a TestResult.wasSuccessful method that gives the result you're looking for.
Assuming you have some file tests.py:
from unittest import main

# exit=False keeps unittest.main() from raising SystemExit when the run finishes
test = main(module='tests', exit=False)
print(test.result.wasSuccessful())  # True only if every test passed
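If you'd rather not go through unittest.main() at all, here is a minimal alternative sketch (assuming your tests live in files named test*.py under the current directory) that builds the suite with a loader and runs it, handing you the TestResult directly:

import unittest

# Discover tests (by default, files matching test*.py under the start directory).
loader = unittest.TestLoader()
suite = loader.discover('.')

# TextTestRunner.run() returns the TestResult for the whole run.
runner = unittest.TextTestRunner()
result = runner.run(suite)
print(result.wasSuccessful())  # True only if every test passed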
I want to write test functions for my code and decided to use pytest, following this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script I wrote, so I made a toy example that reproduces the same problem without relying on my other code.
import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10

test_value(example)
When I run my script with this toy example, print outputs a function object:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help me, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run the file with pytest file.py. Fixtures are resolved automatically by pytest when it collects and calls your test functions; in your example you call the code directly, so the fixture is passed around as a plain function instead of its resolved value.
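For reference, here is a minimal corrected version of the toy example (the file name file.py comes from the command above):

import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):  # pytest injects the fixture's return value here
    print(example)
    assert example == 10

Running pytest file.py collects test_value, resolves example to 10, and the test passes.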
I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
1. Add a test.
2. Run the tests (one fails because I haven't written the code to make it pass).
3. Implement the behaviour.
4. Run only the test that failed last time.
5. Fix the silly error I made when implementing the code.
6. Run only the failing test, which passes this time.
7. Run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
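If you'd rather not rely on the script's own __main__ block, the same selection works through the unittest module runner (assuming the file is named test_whatever.py and is importable from the current directory):
python3 -m unittest test_whatever.MyTest.test_something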
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored, because unittest only collects methods whose names start with test. One thing you can do is a quick find-and-replace in vim: replace every "test_" with "_test_", then find the one test that failed and remove its leading underscore.
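For example, as a vim substitute command over the whole file:
:%s/def test_/def _test_/g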
Just run the tests with the --last-failed option (you will need pytest, which can also run unittest-style tests).
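For example (file name test_whatever.py assumed):
pytest --last-failed test_whatever.py
The first run executes all the tests; later runs with --last-failed re-run only the ones that failed the previous time.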
We run unit tests in Python that have previously been hard-coded with information such as which server the tests should run on. Instead, I'd like to pass that information to the tests via command-line arguments. The problem is that with the Python unit testing framework, my custom parameters end up as a single positional parameter, which is then caught by utrunner.py, which assumes the parameter says which tests to run (test discovery).
So running from IDEA I send out this command to start up the test suite:
C:\Users\glenp\AppData\Local\Programs\Python\Python36-32\python.exe C:\Users\glenp\.IntelliJIdea2016.3\config\plugins\python\helpers\pycharm\utrunner.py C:\Root\svn\trunk\src\test\python\test.py "server=deathStar language=klingon" true
These are the parameters that get read back to me from print(sys.argv):
['C:\\Users\\glenp\\.IntelliJIdea2016.3\\config\\plugins\\python\\helpers\\pycharm\\utrunner.py', 'C:\\Root\\svn\\trunk\\src\\test\\python\\schedulePollTest.py', 'server=deathStar language=klingon', 'true']
Note that I'm not actually calling my own test; I'm calling utrunner.py with my test as one of its arguments.
I get a FileNotFound error: FileNotFoundError: [Errno 2] No such file or directory: 'server=deathStar language=klingon' which kills the test before I get to run it.
I think I need to modify either this:
if __name__ == "__main__":
    unittest.main()
or this:
class testThatWontRun(unittest.TestCase):
I COULD modify imp.py, which is throwing the error, but I happen to be on a team and modifying core Python functionality isn't going to scale well at all. (And everyone on the team will be sad)
So, is there a way to phrase my arguments in a way that utrunner.py (and imp.py) will ignore those parameters?
Yes, there is a way to get utrunner.py to ignore the parameters: put a -- in front of each parameter you want it to ignore.
So server=deathStar becomes --server=deathStar.
Thank you rubber ducky :)
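As a follow-up, if the test module itself needs those values when the file is run directly, here is a minimal sketch (the --key=value convention and the TestServer class are assumptions for illustration, not part of utrunner.py) that strips the custom arguments out before unittest parses the rest:

import sys
import unittest

CONFIG = {}  # filled from --key=value arguments at startup

class TestServer(unittest.TestCase):  # hypothetical example test
    def test_server_name(self):
        self.assertEqual("deathStar", CONFIG.get("server"))

if __name__ == "__main__":
    remaining = [sys.argv[0]]
    for arg in sys.argv[1:]:
        if arg.startswith("--") and "=" in arg:
            key, value = arg[2:].split("=", 1)
            CONFIG[key] = value  # e.g. --server=deathStar -> {"server": "deathStar"}
        else:
            remaining.append(arg)
    unittest.main(argv=remaining)  # unittest never sees the custom arguments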
I have a program that interacts with and changes block devices (/dev/sda and such) on Linux. I'm using various external commands (mostly from the fdisk and GNU fdisk packages) to control the devices. I have written a class that serves as the interface for most of the basic actions on block devices (for information like: What size is it? Where is it mounted? etc.).
Here is one such method querying the size of a partition:
import subprocess

def get_drive_size(device):
    """Returns the maximum size of the drive, in sectors.

    :device the device identifier (/dev/sda and such)"""
    # blockdev returns the number of 512B blocks in a drive
    query_proc = subprocess.Popen(["blockdev", "--getsz", device], stdout=subprocess.PIPE)
    output, error = query_proc.communicate()
    exit_code = query_proc.returncode
    if exit_code != 0:
        raise Exception("Non-zero exit code", str(error, "utf-8"))  # I have custom exceptions, this is slight pseudo-code
    return int(output)  # should always be valid
So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.
Should I try to test code such as these methods? If so, how? I could create and mount image files for each test, but that seems like a lot of overhead and is probably error-prone itself. The code expects block devices, so I cannot operate directly on image files in the file system.
I could try mocking, as some answers suggest, but this feels inadequate: if I mock the Popen object, it seems like I start to test the method's implementation rather than its output. Is this a correct assessment of proper unit-testing methodology in this case?
I am using python3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.
You should look into the mock module (in Python 3.3+ it is part of the standard library as unittest.mock).
It enables you to run tests without needing to depend on any external resources, while giving you control over how the mocks interact with your code.
I would start from the docs on Voidspace.
Here's an example:
import unittest
from unittest import mock

# from your.module import get_drive_size  # wherever the function under test lives

class GetDriveSizeTestSuite(unittest.TestCase):
    @mock.patch('path.to.your.module.subprocess.Popen')  # dotted import path of the module that calls Popen
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        # communicate() returns (stdout, stderr) as bytes; returncode is an int
        mock_popen.return_value.communicate.return_value = (b'1024', b'')
        mock_popen.return_value.returncode = 0
        self.assertEqual(1024, get_drive_size('some device'))
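One design note: the string given to mock.patch must be the dotted path to Popen as seen from the module under test (for example 'mydiskmodule.subprocess.Popen' if get_drive_size lives in mydiskmodule.py; that module name is just an assumption here), so that the mock replaces the object your code actually looks up.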
I'm new to Python testing, so don't hesitate to point out anything obvious.
Basically I want to do some RESTful tests using Python, and found the httpretty and sure libraries, which look really nice.
I have a Python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")

    response = requests.get("http://localhost:8090/test.json")

    expect(response.json()).to.equal({"status": "ok"})
This is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty
My question is: how do I simply run this test to see it either pass or fail? I tried just executing it using ./pythonFile but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
Python includes similar functionality in the standard unittest module (the newer features are also available for Python 2 as a backport called unittest2). You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")

        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now provide output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your method? Or does the decorator mean you don't have to call it explicitly?
If I call your method, it seems to work: if I change the value on one side of the expect, it complains properly about the values not matching.