How to skip a test in pytest *before* fixtures are computed

I have a fairly large test suite written with pytest that performs system tests on an application involving communication between a server side and a client side. The tests share a huge fixture that is initialized with information about the server, which is then used to create a client object and run the tests. The server side may support different feature sets, and this is reflected via attributes that may or may not be present on the server object initialized by the fixture.
Now, quite similarly to this question, I need to skip certain tests if the required attributes are not present in the server object.
The way we have been doing this so far is by adding a decorator to the tests which checks for the attributes and calls pytest.skip if they aren't there.
Example:
import functools
import pytest

def skip_if_not_feature(feature):
    def _skip_if_not_feature(func):
        @functools.wraps(func)
        def wrapper(server, *args, **kwargs):
            if not server.supports(feature):
                pytest.skip("Server does not support {}".format(feature))
            return func(server, *args, **kwargs)
        return wrapper
    return _skip_if_not_feature

@skip_if_not_feature("feature_A")
def test_feature_A(server, args...):
    ...
The problem arises when some of these tests have more fixtures, some with relatively time-consuming setup; because of how pytest works, the decorator code that skips them runs after the fixture setup, wasting precious time.
Example:
@skip_if_not_feature("sync_db")
def test_sync_db(server, really_slow_db_fixture, args...):
    ...
I'm looking to optimize the test suite to run faster by making these tests get skipped faster.
I can only think of two ways to do it:
1. Re-write the decorator not to use the server fixture, and make it run *before the fixtures*.
2. Run code which decides to skip the tests *between the initialization of the server fixture and the initialization of the rest of the fixtures*.
I'm having trouble figuring out whether the emphasized parts are possible, and how to do them if they are. I've already gone through the pytest documentation and Google / Stack Overflow results for similar questions and came up with nothing.

You can add a custom marker with the feature name to your tests, and add a skip marker in pytest_collection_modifyitems if needed.
In this case, the test is skipped without loading the fixtures first.
conftest.py

def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "feature: mark test with the needed feature"
    )

def pytest_collection_modifyitems(config, items):
    for item in items:
        feature_mark = item.get_closest_marker("feature")
        if feature_mark and feature_mark.args:
            feature = feature_mark.args[0]
            # "server" here must be something available at collection time,
            # not the server fixture (fixtures are not set up yet)
            if not server.supports(feature):
                item.add_marker("skip")

test_db.py

import pytest

@pytest.mark.feature("sync_db")
def test_sync_db(server, really_slow_db_fixture, args...):
    ...
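Note that fixtures are not available inside pytest_collection_modifyitems, so the hook needs another way to learn what the server supports. A minimal sketch of one possibility, assuming a hypothetical query_server_features() helper that can be called once without the server fixture:

import pytest

# Sketch only: query_server_features() is a hypothetical helper that returns
# the set of features the server supports, without needing the server fixture.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "feature: mark test with the needed feature"
    )
    config._supported_features = query_server_features()

def pytest_collection_modifyitems(config, items):
    for item in items:
        feature_mark = item.get_closest_marker("feature")
        if feature_mark and feature_mark.args:
            feature = feature_mark.args[0]
            if feature not in config._supported_features:
                item.add_marker(pytest.mark.skip(
                    reason="Server does not support {}".format(feature)))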

The problem with the current decorator is that, after it does its thing, the module contains test functions whose arguments include fixtures. Those are identified as fixtures by the pytest machinery and evaluated.
It all lies in your *args.
What you can do is make your decorator spit out a func(server) instead of func(server, *args, **kwargs) when it recognizes that it is going to skip this test.
This way the skipped function won't have the other fixtures in its signature, so they will not be evaluated.
As a matter of fact, you can even return a simple empty lambda: None instead of func, as it is not going to be tested anyway.
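A sketch of how that could look, assuming a hypothetical server_supports(feature) helper that can answer the question at import time (without the server fixture); if no such collection-time check exists, the decision cannot be moved ahead of fixture setup:

import functools
import pytest

def skip_if_not_feature(feature):
    def _skip_if_not_feature(func):
        if not server_supports(feature):       # hypothetical import-time check
            def skipper(server):               # only requests the cheap server fixture
                pytest.skip("Server does not support {}".format(feature))
            return skipper
        @functools.wraps(func)
        def wrapper(server, *args, **kwargs):
            return func(server, *args, **kwargs)
        return wrapper
    return _skip_if_not_feature

Because skipper only has server in its signature, the expensive fixtures are never requested for skipped tests.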

Related

QA - testing order is not right [duplicate]

How can I be sure of the order in which the unittest methods run? Are alphabetical or numeric prefixes the proper way?
class TestFoo(TestCase):
    def test_1(self):
        ...
    def test_2(self):
        ...
or
class TestFoo(TestCase):
    def test_a(self):
        ...
    def test_b(self):
        ...
You can disable it by setting sortTestMethodsUsing to None:
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
For pure unit tests, you folks are right; but for component tests and integration tests...
I do not agree that you shall assume nothing about the state.
What if you are testing the state?
For example, your test validates that a service is auto-started upon installation. If, in your setup, you start the service and then do the assertion, you are no longer testing the state; you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unit test" frameworks for component testing...so stop and ask yourself, am I doing unit testing or component testing?
There is no reason given that you can't build on what was done in a previous test or should rebuild it all from scratch for the next test. At least no reason is usually offered but instead people just confidently say "you shouldn't". That isn't helpful.
In general I am tired of reading too many answers here that say basically "you shouldn't do that" instead of giving any information on how to best do it if in the questioners judgment there is good reason to do so. If I wanted someone's opinion on whether I should do something then I would have asked for opinions on whether doing it is a good idea.
With that out of the way: if you read, say, loadTestsFromTestCase and what it calls, it ultimately scans for methods matching a name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and builds a test suite mapping it to the TestCase class. Giving it a list ordered the way you would like instead is one way to do this, as sketched below. I am not sure it is the most efficient/cleanest way, but it does work.
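A minimal sketch of that idea (the test case and method names are made up): build the suite from an explicitly ordered list of names instead of letting the loader sort them.

import unittest

# Hypothetical test case with methods we want to run in a specific order.
class MyTestCase(unittest.TestCase):
    def test_create(self): ...
    def test_update(self): ...
    def test_delete(self): ...

if __name__ == "__main__":
    ordered_names = ["test_create", "test_update", "test_delete"]
    # TestCase(name) builds a test that runs exactly that method.
    suite = unittest.TestSuite(MyTestCase(name) for name in ordered_names)
    unittest.TextTestRunner(verbosity=2).run(suite)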
If you use 'nose' and you write your test cases as functions (and not as methods of some TestCase derived class), 'nose' doesn't fiddle with the order, but uses the order of the functions as defined in the file.
In order to have the assert_* methods handy without needing to subclass TestCase I usually use the testing module from NumPy. Example:
from numpy.testing import *

def test_aaa():
    assert_equal(1, 1)

def test_zzz():
    assert_equal(1, 1)

def test_bbb():
    assert_equal(1, 1)
Running that with nosetests -vv gives:
test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and can run independently, your functions and classes are usually not independent.
They rather build on one another, from simpler/low-level functions to more complex/high-level functions. When you start optimising your low-level functions and mess up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-), it is a lot easier to diagnose the cause when the tests for simple functions come first and the tests for functions that depend on them come later.
If the tests are sorted alphabetically, the real cause usually gets drowned among one hundred failed assertions, which fail not because the function under test has a bug, but because the low-level function it relies on does.
That's why I want to have my unit tests sorted the way I specified them: not to use state that was built up in early tests in later tests, but as a very helpful tool in diagnosing problems.
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in UnitTest.
That said, one alternative is to use mock objects to mock out and patch the items that should run before that specific code under test. You can also put a dummy function in there to monkey patch your code. For more information, check out Mock, which is part of the standard library now.
More to the point, try using class methods to structure your code, and then place all the class methods in one main test method.
import unittest
import sqlite3

class MyOrderedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * from players")
        draft = [(1, u'Hakeem', u'Olajuwon'), (2, u'Sam', u'Bowie'), (3, u'Michael', u'Jordan')]
        query = cur.fetchall()
        print(query)
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print(third)
        self.assertEqual(third, "Michael Jordan")
Why do you need a specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the database setup code or create testing data ad nauseam.)
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from those tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Don't rely on the order. If they use some common state, like the filesystem or database, then you should create setUp and tearDown methods that get your environment into a testable state, and then clean up after the tests have run.
Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
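A minimal sketch of that pattern (the scratch-directory detail is just an illustration):

import shutil
import tempfile
import unittest

class FilesystemTest(unittest.TestCase):
    def setUp(self):
        # give every test a fresh scratch directory
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # clean up so the next test starts from a known state
        shutil.rmtree(self.workdir)

    def test_something(self):
        # the test only assumes what setUp established
        ...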
You could try the proboscis library. It lets you order tests as well as set up test dependencies. I use it and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B, you can set up that behaviour using the library.
Here is a simpler method that has the following advantages:
No need to create a custom TestCase class.
No need to decorate every test method.
Uses the standard unittest load_tests protocol (see the Python docs).
The idea is to go through all the test cases of the test suites given to the test loader protocol and create a new suite but with the tests ordered by their line number.
Here is the code:
import unittest

def load_ordered_tests(loader, standard_tests, pattern):
    """
    Test loader that keeps the tests in the order they were declared in the class.
    """
    ordered_cases = []
    for test_suite in standard_tests:
        ordered = []
        for test_case in test_suite:
            test_case_type = type(test_case)
            method_name = test_case._testMethodName
            testMethod = getattr(test_case, method_name)
            line = testMethod.__code__.co_firstlineno
            ordered.append((line, test_case_type, method_name))
        ordered.sort()
        for line, case_type, name in ordered:
            ordered_cases.append(case_type(name))
    return unittest.TestSuite(ordered_cases)
You can put this in a module named order_tests and then in each unittest Python file, declare the test loader like this:
from order_tests import load_ordered_tests
# This orders the tests to be run in the order they were declared.
# It uses the unittest load_tests protocol.
load_tests = load_ordered_tests
Note: the often suggested technique of setting the test sorter to None no longer works because Python now sorts the output of dir() and unittest uses dir() to find tests. So even though you have no sorting method, they still get sorted by Python itself!
From unittest — Unit testing framework
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.
from unittest import TestCase

class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
Check out this Stack Overflow question for details.
There are scenarios where the order can be important and where setUp and tearDown are too limited. There is only one setUp and one tearDown method, which is logical, but you can only put so much into them before it becomes unclear what setUp or tearDown is actually doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't login without an existing account.
More importantly the order of your tests represents some kind of user interaction. Where each test might represent a step in the whole process or flow you're testing.
Dividing your code in those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:
def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:
def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza": "pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
However, in such cases, you might want to consider structuring your tests differently so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.
def make_myobject(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj

def test_00_createEmptyObject(self):
    obj = self.make_myobject(None, None)

def test_01_createObject(self):
    obj = self.make_myobject("hello", {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_myobject("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE." If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop will write a log entry to the effect that it is sleeping 1 second. Once the "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setup()) and decrements the same variable in teardown().
The effect of this is to allow various data consumers to block the purge functionality so long as they need without fear that the purge could execute in between tests. Effectively the purge operation is pushed to the last step...or at least the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge to test data restoration.
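A minimal sketch of the basic mechanism described above (the variable name follows the answer; everything else, including running the consumers and the purge test against the same environment, is an assumption):

import os
import time
import unittest

def _get_counter():
    return int(os.environ.get("BLOCK_DATA_PURGE", "0"))

class DataConsumerTest(unittest.TestCase):
    def setUp(self):
        # signal that this test still needs the shared data set
        os.environ["BLOCK_DATA_PURGE"] = str(_get_counter() + 1)

    def tearDown(self):
        os.environ["BLOCK_DATA_PURGE"] = str(_get_counter() - 1)

    def test_reads_shared_data(self):
        ...

class DataPurgeTest(unittest.TestCase):
    def test_purge(self):
        # wait until no consumer still needs the original data
        while _get_counter() > 0:
            print("purge test sleeping 1 second")
            time.sleep(1)
        # ...now exercise the purge functionality
        ...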
See the example of WidgetTestCase on Organizing test code. It says that
Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance.
So specifying the order of test cases may be of no use if you do not access global variables.
I have implemented a plugin, nosedep, for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments, this is often a bad idea, however there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs. hours).
A minimal example is:
from nosedep import depends

def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass
To ensure that test_b is always run before test_a.
The philosophy behind unit tests is to make them independent of each other. This means that the first step of each test will always be to try to rethink how you are testing each piece to match that philosophy. This can involve changing how you approach testing and being creative by narrowing your tests to smaller scopes.
However, if you still find that you need tests in a specific order (as that is viable), you could try checking out the answer to Python unittest.TestCase execution order.
It seems they are executed in alphabetical order by test name (using the comparison function between strings).
Since tests in a module are also only executed if they begin with "test", I put in a number to order the tests:
class LoginTest(unittest.TestCase):
    def setUp(self):
        driver.get("http://localhost:2200")

    def tearDown(self):
        # self.driver.close()
        pass

    def test1_check_at_right_page(self):
        ...
        assert "Valor" in driver.page_source

    def test2_login_a_manager(self):
        ...
        submit_button.click()
        assert "Home" in driver.title

    def test3_is_manager(self):
        ...
Note that numbers in names sort as strings, not numerically - "9" > "10" is True in the Python shell, for instance. Consider using decimal strings with fixed zero padding (this avoids the aforementioned problem), such as "000", "001", ..., "010", ..., "099", "100", ..., "999".
Contrary to what was said here: tests have to run in isolation (the order must not matter for that), and yet ordering them is important, because they describe what the system does and how the developer implements it.
In other words, each test brings you information about the system and the developer's logic.
So if this information is not ordered, it can make your code difficult to understand.
To randomise the order of test methods you can monkey patch the unittest.TestLoader.sortTestMethodsUsing attribute
if __name__ == '__main__':
    import random
    unittest.TestLoader.sortTestMethodsUsing = lambda self, a, b: random.choice([1, 0, -1])
    unittest.main()
The same approach can be used to enforce whatever order you need.
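For instance, a sketch that forces one explicit order through the same hook (the test names and priorities are made up):

import unittest

# Hypothetical priorities; names not listed here sort last.
priority = {"test_create": 0, "test_update": 1, "test_delete": 2}

unittest.TestLoader.sortTestMethodsUsing = (
    lambda self, a, b: priority.get(a, len(priority)) - priority.get(b, len(priority))
)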

How to assert a method has been called from another complex method in Python?

I am adding some tests to existing, not-so-test-friendly code. As the title suggests, I need to test whether a complex method actually calls another method, e.g.:
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some codes here I am not interested in this test case
        ...
        if some_condition:
            self.verify_permission(...)
        # some other codes here I am not interested in this test case
        ...
I need to write some test cases to verify that self.verify_permission is called when the condition is met.
Do I need to mock all the way to the point where self.verify_permission is executed? Or do I need to refactor the get() function to abstract out the code and make it more test-friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some codes here I am not interested in this test case
        ...
        some_condition = True if arg % 2 == 0 else False
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other codes here I am not interested in this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code

class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()
        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()
        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. function) under test from all your other functions. That means, if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically, you 1.) always mock your own functions and 2.) usually mock external/built-in functions with side effects (like e.g. network or disk I/O). That is it.
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.

Can I run unittest / pytest with python Optimization on?

I just added a few assert statements to the constructor of a class.
This has had the immediate effect of making about 10 tests fail.
Rather than fiddle with those tests I'd just like pytest to run the application code (not the test code obviously) with Python's Optimization switched on (-O switch, which means the asserts are all ignored). But looking at the docs and searching I can't find a way to do this.
I'm slightly wondering whether this might be bad practice, as arguably the time to see whether asserts fail may be during testing.
On the other hand, another thought is that you might have certain tests (integration tests, etc.) which should be run without optimisation, so that the asserts take effect, and other tests where you are being less scrupulous about the objects you are creating, where it might be justifiable to ignore the asserts.
asserts obviously qualify as "part of testing"... I'd like to add more to some of my constructors and other methods, typically to check parameters, but without making hundreds of tests fail or having to make them much more complicated.
The best way in this case would be to move all assert statements inside your test code. Maybe even switch to https://pytest.org/ as it is already using assert for test evaluation.
I'm assuming you can't in fact do this.
Florin and chepner have both made me wonder whether and to what extent this is desirable. But one can imagine various ways of simulating something like this, for example a Verifier class:
import inspect
import pathlib

class ProjectFile():
    def __init__(self, project, file_path, project_file_dict=None):
        self.file_path = file_path
        self.project_file_dict = project_file_dict
        if __debug__:
            Verifier.check(self, inspect.stack()[0][3])  # gives name of method we're in

class Verifier():
    @staticmethod
    def check(object, method, *args, **kwargs):
        print(f'object {object} method {method}')
        if type(object) == ProjectFile:
            project_file = object
            if method == '__init__':
                # run some real-world checks, etc.:
                assert project_file.file_path.is_file()
                assert project_file.file_path.suffix.lower() == '.docx'
                assert isinstance(project_file.file_path, pathlib.Path)
                if project_file.project_file_dict != None:
                    assert isinstance(project_file.project_file_dict, dict)
Then you can patch out the Verifier.check method easily enough in the testing code:
def do_nothing(*args, **kwargs):
pass
verifier_class.Verifier.check = do_nothing
... so you don't even have to clutter your methods up with another fixture or whatever. Obviously you can do this on a module-by-module basis so, as I said, some modules might choose not to do this (integration tests, etc.)
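If the suite runs under pytest, another sketch is to disable the checks with an autouse fixture and monkeypatch, so the patch is undone automatically after each test; verifier_class is the hypothetical module name used above:

import pytest
import verifier_class  # hypothetical module containing Verifier

@pytest.fixture(autouse=True)
def disable_verifier(monkeypatch):
    # replace the check with a no-op for the duration of each test
    monkeypatch.setattr(verifier_class.Verifier, "check", lambda *args, **kwargs: None)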

How to test if a method of the main class is called from its wrapper

So I have a Python class, say:
class Nested(object):
    def method_test(self):
        # do_something
        pass
The above class is maintained by some other group, so I can't change it. Hence we have a wrapper around it, such that:
class NestedWrapper(object):
    def __init__(self):
        self.nested = Nested()

    def call_nested(self):
        self.nested.method_test()
Now, I am writing test cases to test my NestedWrapper. How can I test that, in one of my tests, the underlying Nested.method_test is being called? Is it even possible?
I am using Python's Mock for testing.
UPDATE: I guess I was implicitly implying that I want to do unit testing, not one-off testing. Since most of the responses are suggesting that I use a debugger, I just want to point out that I want this unit tested.
I think you can just mock Nested.method_test and make sure it was called...
from unittest import mock

with mock.patch.object(Nested, 'method_test') as method_test_mock:
    nw = NestedWrapper()
    nw.call_nested()
    method_test_mock.called  # Should be `True`
If using unittest, you could do something like self.assertTrue(method_test_mock.called), or you could make a more detailed assertion by calling one of the more specific assertions on Mock objects.
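For example, a small sketch of the unittest variant, using one of the more specific Mock assertions (Nested and NestedWrapper as defined in the question):

from unittest import TestCase, mock

class NestedWrapperTest(TestCase):
    def test_call_nested_calls_method_test(self):
        with mock.patch.object(Nested, "method_test") as method_test_mock:
            nw = NestedWrapper()
            nw.call_nested()
            # the wrapper should have called the underlying method exactly once
            method_test_mock.assert_called_once_with()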

Is it possible to pass parameters to a fixture?

I am not talking about "fixture parametrization" as defined by pytest; I am talking about real parameters that you pass to a function (the fixture function in this case) to make the code more modular.
To demonstrate, this is my fixture
@yield_fixture
def a_fixture(a_dependency):
    do_setup_work()
    yield
    do_teardown_work()
    a_dependency.teardown()
As you see, my fixture depends on a_dependency whose teardown() needs to be called as well. I know in the naive use-case, I could do this:
@yield_fixture
def a_dependency():
    yield
    teardown()

@yield_fixture
def a_fixture(a_dependency):
    do_setup_work()
    yield
    do_teardown_work()
However, while the a_fixture code can be put in a central place and re-used by all tests, the a_dependency code is test-specific, and each test possibly needs to create a new a_dependency object.
I want to avoid copy-pasting both the fixture and the dependency into all my tests. If this were regular Python code, I could just pass a_dependency as a function argument. How can I pass this object to my shared fixture?
It seems to me like maybe you don't want a_dependency to be a fixture, you just want it to be a regular function. Are you after something like this?
def a_dependency():
    # returns a context manager
    ...

@yield_fixture
def a_fixture():
    with a_dependency() as dependency:
        do_setup_work()
        yield
        do_teardown_work()
OK, well if a_dependency really has to be a fixture, why not the best of both worlds? Decorators are just syntactic sugar after all.
def a_dependency():
    # returns a context manager
    ...

a_dependency_fixture = yield_fixture(a_dependency)

@yield_fixture
def a_fixture():
    # here use a_dependency as a regular function
    with a_dependency() as dependency:
        do_setup_work()
        yield
        do_teardown_work()

def test_foo(a_dependency_fixture):
    # here use a_dependency as a fixture
    pass
I haven't checked that this actually works because the information in the question is too generic for me to make a working case out of. It may be easier to give a more useful answer if you can provide more specifics.
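For what it's worth, here is a sketch of the first suggestion made concrete with contextlib; do_setup_work / do_teardown_work and the dependency object are placeholders from the question, and the fixture uses a plain @pytest.fixture, which in current pytest accepts yield directly:

import contextlib
import pytest

@contextlib.contextmanager
def a_dependency():
    dependency = object()   # stand-in for the real, test-specific dependency
    try:
        yield dependency
    finally:
        pass                # dependency.teardown() would be called here

@pytest.fixture
def a_fixture():
    with a_dependency() as dependency:
        # do_setup_work() would go here
        yield
        # do_teardown_work() would go here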
