Is there anything in Python similar to AssemblyInitialize in C#?

I am new to Python test frameworks. Is there any concept in Python testing similar to the AssemblyInitialize concept in C#? I need to run two methods before every test method runs. I'm aware of setup and teardown methods; I'm looking specifically for something similar to AssemblyInitialize in C#.
Thank you.
I searched the internet for anything similar to AssemblyInitialize in C#, but nothing turned up.

Because a Python module is loaded only once, you can simply do something like this, although it is an undocumented approach.
# test_xxx.py
def init_test():
    # Do whatever you want.
    pass

init_test()
A documented approach is to use a session-scoped fixture, like this.
import pytest

@pytest.fixture(scope='session', autouse=True)
def init_test():
    # Do whatever you want.
    pass
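If you put such a fixture in a conftest.py file, pytest picks it up automatically for every test under that directory, and a yield lets you run teardown code after the whole session. A minimal sketch (the setup and teardown bodies are placeholders):

# conftest.py -- discovered automatically by pytest
import pytest

@pytest.fixture(scope='session', autouse=True)
def init_test():
    # runs once, before the first test in the session
    ...
    yield
    # runs once, after the last test in the session has finished
    ...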

Related

Does Python's unittest have a global setUp for an entire TestSuite?

I'm a bit new to Python's unittest library and I'm currently looking at setting up my Flask server before running any of my integration tests. I know that the unittest.TestCase class lets you use setUp() before every test case in the class. I also know that the same class has another method called setUpClass() that runs only once for the entire class.
What I'm actually interested in is figuring out how to do something similar to setUpClass(), but for an entire unittest.TestSuite. However, I'm having no luck at it.
Sure, I could set up the server for every TestCase, but I would like to avoid doing this.
There is an answer to a separate question suggesting that by overriding unittest.TestResult's startTestRun(), you could have a setup function that covers the entire test suite. However, I've tried passing a custom TestResult object into unittest.TextTestRunner with no success.
So, how exactly can I do a set up for an entire test suite?
This is not well documented, but I recently needed to do this as well.
The docs mention that TestResult.startTestRun is "Called once before any tests are executed."
As you can see in the implementation, the method doesn't do anything.
I tried subclassing TestResult and all kinds of things, but I couldn't make any of that work, so I ended up monkey-patching the class.
In the __init__.py of my test package, I did the following:
import unittest

OLD_TEST_RUN = unittest.result.TestResult.startTestRun

def startTestRun(self):
    # whatever custom code you want to run exactly once before
    # any of your tests runs goes here:
    ...
    # just in case future versions do something in this method,
    # we'll call the existing method
    OLD_TEST_RUN(self)

unittest.result.TestResult.startTestRun = startTestRun
There is also a stopTestRun method if you need to run cleanup code after all tests have run.
Note that this does not create a separate version of TestResult; the existing one is used by the unittest module as usual. The only thing we've done is surgically graft our custom implementation of startTestRun onto it.
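Similarly, if you need the stopTestRun cleanup hook mentioned above, the same monkey-patching approach works; a minimal sketch, assuming it lives in the same __init__.py:

OLD_STOP_TEST_RUN = unittest.result.TestResult.stopTestRun

def stopTestRun(self):
    # cleanup code that should run exactly once after all tests goes here:
    ...
    # call the existing method, for the same reason as above
    OLD_STOP_TEST_RUN(self)

unittest.result.TestResult.stopTestRun = stopTestRun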

Where do you put a decorator after you've written it?

I am trying my hand at writing my first decorator. I have seen lots of video tutorials and blog posts about it, but I have not seen anything about where to put the decorator so that the function being decorated can find it. That seems pretty basic, and maybe it is 'obvious' to those who have done it before, but since that's not me, I am asking. Note I am not asking where to put @decorator; I know that goes on the line above the function being decorated. But the decorator itself has to be written and put somewhere where the @ syntax can find and apply it. All the examples I've seen have both in the same file or script, but I have never seen a decorator like that in actual practice, nor have I seen an import statement that brings one into the app. So where should it be, and how does the decorated app / Python find it?
A decorator is just a callable, usually a function. That means you can define it anywhere you would define any other function, and import it the same way.
from mydecorators import mydecorator

@mydecorator
def f():
    ...
Since decoration of a module-level class or function happens at import time, the name must be resolvable before it is used. Defining or importing the decorator after the point of use will raise a NameError, as usual.
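For example, the module imported above (mydecorators is just an illustrative name) can be an ordinary file anywhere on your import path:

# mydecorators.py -- any importable module will do
import functools

def mydecorator(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        # do something before the call
        result = func(*args, **kwargs)
        # do something after the call
        return result
    return wrapper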
As you probably know, a decorator is just a function like any other, so you place it just as you would place any other function:
In the same file
In a different file, in which case you also need to import it

Testing Python code that relies on hardware

So I have some code which uses gphoto2 to capture images. I figured the best way to test this would be to wrap the gphoto2 code in something like an if TESTING: branch that returns fake data, and otherwise runs the real gphoto2 code.
Does anyone know how I would achieve this? I've tried googling, but I've not had any luck with specifically detecting whether unit tests are being run or not.
I'd assume it would be something like if unittest:, but maybe there is a better way to do this altogether?
EDIT:
So based on the comments and answers so far, I tried out the unittest.mock package. It didn't work as I'd hoped; let me explain.
Say I have method A, which calls the capture-image method (method B) and then saves the image, among a few other things. I've managed to mock method B so that it returns either the image or None. That works fine when I call method B directly, but when I call method A, it doesn't use the mock of method B; it uses the actual method B.
How do I make method A use the mock method B?
The mock package exists for this very reason.
It's a standalone, pip-installable package for Python 2; it has been incorporated into the standard library for Python versions >= 3.3 (as unittest.mock).
Just use a mocking library from within your test code. This way you'd mask the external APIs (hardware calls in your case) and return predictable values.
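For the situation described in the edit, the key is to patch method B where method A looks it up, e.g. on the class (or module) that method A actually uses. A minimal sketch with unittest.mock, using hypothetical Camera.capture_image (method B) and Camera.take_and_save (method A) names:

from unittest import TestCase, mock

from camera_module import Camera  # hypothetical module under test

class CaptureTest(TestCase):
    def test_take_and_save_uses_fake_image(self):
        # patch method B on the class, so any call made through self inside
        # method A resolves to the mock instead of the real gphoto2 call
        with mock.patch.object(Camera, 'capture_image', return_value=b'fake image data'):
            cam = Camera()
            result = cam.take_and_save()  # method A now receives the fake data
        self.assertIsNotNone(result)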
I would recommend flexmock (https://pypi.python.org/pypi/flexmock); it's super easy.
In the beginning of your test code, you'll write something like:
flexmock(SomeObject).should_receive('some_method').and_return('some', 'values')

Unit testing an API wrapper

I'm writing a Python wrapper for an authenticated RESTful API. I'm writing up my test suite right now (also a first-time test writer here), but I have a few questions:
1.a) How can I make a call without having to hardcode credentials into the tests, since I'll be putting the code on GitHub?
1.b) I kind of know about mocking, but have no idea how to go about it. Would this allow me to avoid calling the actual service? What would be the best way to go about this?
2) What do I test for - just ensure that my methods are passing certain items in the dictionary?
3) Any best practices I should be following here?
Hey TJ, if you can show me an example of one function that you are writing (code under test, not the test code), then I can give you an example test.
Generally though:
1.a) You would mock the call to the external API; you are not trying to test whether their authentication mechanism or your internet connection is working. You are just trying to test that you are calling their API with the correct signature.
1.b) Mocking in Python is relatively straightforward. I generally use the mocking library written by Michael Foord; pip install mock will get you started. Then you can do things like:
import unittest
from mock import call, patch

from my_module import wrapper_func

class ExternalApiTest(unittest.TestCase):

    @patch('my_module.api_func')
    def test_external_api_call(self, mocked_api_func):
        response = wrapper_func('user', 'pass')

        self.assertTrue(mocked_api_func.called)
        self.assertEqual(
            mocked_api_func.call_args_list,
            [call('user', 'pass')]
        )
        self.assertEqual(mocked_api_func.return_value, response)
In this example we are replacing the api_func inside my_module with a mock object. The mock object records what has been done to it. It's important to remember where to patch: you don't patch the location you imported the object from; you patch it in the location where it will be used.
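To make that concrete, here is a sketch of what my_module might look like (api_lib is a placeholder for whatever library actually provides api_func):

# my_module.py
from api_lib import api_func  # my_module now holds its own reference to api_func

def wrapper_func(user, password):
    return api_func(user, password)

Because my_module holds its own reference, the test patches 'my_module.api_func'; patching 'api_lib.api_func' would leave that reference pointing at the real function.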
You test that your code is doing the correct thing with a given input. Testing pure functions (pure in the functional-programming sense) is pretty simple: you assert that given an input a, the function returns output b. It gets a bit trickier when your functions have lots of side effects.
If you are finding it too hard or complicated to test a certain function/method, that can mean it's a badly written piece of code. Try breaking it up into testable chunks, and rather than passing objects into functions, try to pass primitives where possible.

Python introspection: description of the parameters a function takes

Is there a tool similar to dir() for modules that will tell me what parameters a given function takes? For instance, I would like to do something like dir(os.rename) and have it tell me what parameters are documented, so that I can avoid checking the documentation online and instead use only the Python scripting interface.
I realize that you're more interested in help(thing) or thing.__doc__, but if you're trying to do programmatic introspection (instead of human-readable documentation) to find out about calling a function, then you can use the inspect module, as discussed in this question.
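For example, inspect.signature() reports the parameters programmatically (os.rename is just the function from the question; some C functions don't expose signature information and raise ValueError):

import inspect
import os

sig = inspect.signature(os.rename)
print(sig)  # e.g. (src, dst, *, src_dir_fd=None, dst_dir_fd=None)
for name, param in sig.parameters.items():
    print(name, param.kind, param.default)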
help(thing) pretty-prints all the docstrings in the module, method, or whatever else you pass it.
