I have some common code that is used in all my pytest tests. I want to keep that code in a single file and load it at run time. Is this possible in Python?
For example:
def teardown_method(self, method):
    print "This is tear down method"

def setup_method(self, method):
    print "This is setup method"
Apart from simply using import, for the example you showed (setup/teardown) you would usually use a pytest fixture instead, and put it in a conftest.py to share it between test files.
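For example, a minimal conftest.py sketch (the fixture name is made up here); with autouse=True it wraps every test that can see this conftest, without any import in the test files:

# conftest.py -- picked up automatically by pytest, no import needed
import pytest

@pytest.fixture(autouse=True)
def setup_and_teardown():
    print("This is setup method")      # runs before each test
    yield
    print("This is tear down method")  # runs after each test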
I ended up testing an "automation" script in Python, basically just a script calling a bunch of functions.
I have 3 classes: class_A, class_B, class_C, with each class having a "run" function.
Script.py calls class_A.run(), class_B.run() and class_C.run().
My question is whether there is a way to unit test Script.py and just assert that the 3 run functions were called, without actually running (going through) their code.
I tried patching the class and I can get the correct assert, but the run functions are still running their code.
Is it possible to somehow mock class_A.run() and assert that it was called?
You could use mock.patch.
The MockClass passed into the test will replace your module.class_A, and its run method becomes a mock attribute as well:
from unittest.mock import patch

@patch('module.class_A')
def test(MockClass):
    run_script()                  # however you invoke your Script.py logic
    assert MockClass.run.called   # check that module.class_A.run() was called
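A note on the target string: patch the name where it is looked up, i.e. in the namespace your script uses. Below is a hedged sketch assuming Script.py does `from module import class_A, class_B, class_C` and exposes a main() entry point (both are assumptions, adjust to your layout); it checks all three run() calls without executing their bodies:

from unittest.mock import patch
import Script

@patch('Script.class_C')
@patch('Script.class_B')
@patch('Script.class_A')  # decorators apply bottom-up, so class_A is the first mock argument
def test_script_calls_all(mock_a, mock_b, mock_c):
    Script.main()  # assumed entry point that triggers the three run() calls
    assert mock_a.run.called
    assert mock_b.run.called
    assert mock_c.run.called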
Sorry for this question. It may be repetitive, but I did not find much on Google about it. I know fixtures have 4 scopes (function, module, class, session). I just want to know whether these scopes are only valid if I use autouse=True. For autouse=False, can I use a fixture with any scope in a test function?
For example, I have a conftest.py file with the following fixture:
@pytest.fixture(scope="module")
def test_fixture():
    # DO something
    print "Module level fixture is called"
The test.py file contains:
class ABC:
    def test(test_fixture):
        print "Doing something"

    def test_2(test_fixture):
        print "Executing second test"
Now if I change the scope of the fixture to session, it will still run and produce the same result.
Does that mean scoping is only valid if autouse=True?
First of all, I needed to rename your class to TestABC, so that pytest can find it.
Secondly, I will assume that you are using Python 2, but remember that it will no longer be supported in pytest 5.0.
And what I think is your main issue: since you are using a class to group your tests, the first argument to your methods is always the instance itself (aka self), even if you give it another name.
That's why pytest does not detect that you want the fixture passed in, and it never passes it.
So try something like this:
class TestABC:
    def test(self, test_fixture):
        print "Doing something"

    def test_2(self, test_fixture):
        print "Executing second test"
Also note that you don't need to use classes to group your tests; you could use regular functions instead. So removing the class should also solve your problem:
def test(test_fixture):
    print "Doing something"

def test_2(test_fixture):
    print "Executing second test"
Also, in your case module and session will produce a similar effect, since you only have one module (at least in your example).
But you can easily compare with the default scope: if you remove the scope argument completely, the fixture will be called once before each test that requires it.
@pytest.fixture
def test_fixture():
    # DO something
    print "Default test level fixture"
Using module will call it once for the whole module. And session would call it once for all the modules.
To answer your original question: no, scopes also apply when autouse=False.
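To make that concrete, here is a small self-contained sketch (all names are made up): without autouse, the module-scoped fixture still runs only once for the whole file, while the default function-scoped fixture runs before every test that requests it.

import pytest

calls = {"module": 0, "function": 0}

@pytest.fixture(scope="module")
def module_fixture():
    calls["module"] += 1    # executed once per module

@pytest.fixture
def function_fixture():
    calls["function"] += 1  # executed before each test that requests it

def test_one(module_fixture, function_fixture):
    pass

def test_two(module_fixture, function_fixture):
    assert calls["module"] == 1
    assert calls["function"] == 2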
I cannot find a situation where pass does not immediately follow the function definition decorated with @scenario.
# For example:
@scenario('myFileName.feature', 'my scenario title')
def my_scenario_module1():
    pass

@given('blah blah')
def some_step():
    ...  # blah blah actual code

# ...blah blah other decorators (i.e. @when, @then)
I understand that the scenario is tested in the @given, @when and @then steps. But what is the point of the pass after the @scenario?
Is there a purpose to the function decorated with @scenario other than just writing pass every time?
I was wondering about this at some stage too. Here is the answer I got from the pytest-bdd dev team:
https://github.com/pytest-dev/pytest-bdd/issues/279
pytest test discovery: test_-prefixed test functions or methods outside of a class.
scenario decorator: a function decorated with the scenario decorator behaves like a normal test function, which will be executed after all scenario steps. You can consider it a normal pytest test function, e.g. order fixtures there, call other functions and make assertions.
Some more details (the scenarios shortcut): with the manual approach you get all the power to additionally parametrize the test, give the test function a nice name, document it, etc.
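For illustration, a sketch of that manual approach (the feature file and step text are the asker's placeholders, and the tmp_path fixture is just an example): the @scenario-decorated function is an ordinary pytest test, so it can request fixtures and run extra assertions after the steps finish.

from pytest_bdd import scenario, given

@scenario('myFileName.feature', 'my scenario title')
def test_my_scenario(tmp_path):
    # runs after all Given/When/Then steps; tmp_path is a normal pytest fixture
    assert tmp_path.exists()

@given('blah blah')
def some_precondition():
    pass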
I want to set up a test suite where I read a JSON file in the setup_class method; that JSON file specifies which tests should run and which should not. With this approach I can choose which test cases to run by altering only the JSON file, without touching the test suite.
But in the setup_class method, when I try to do the following:
class TestCPU:
    testme = False

    @classmethod
    def setup_class(cls):
        cls.test_core.testme = True

    def test_core(self):
        print 'Test CPU Core'
        assert 1
Executing the command below:
nosetests -s -a testme
gives the following error:
File "/home/murtuza/HWTestCert/testCPU.py", line 7, in setup_class
cls.test_core.testme=False
AttributeError: 'instancemethod' object has no attribute 'testme'
So, is it possible to set the attributes of test methods during setup_class?
The way it is defined, testme is a member of the TestCPU class, and the <unbound method TestCPU.test_core> knows nothing about this attribute. You can inject the nose attribute by using cls.test_core.__dict__['testme'] = True. However, the attributes are checked before your setup_class method is called, so even though the attribute will be set, your test will still be skipped. But you can certainly decorate your test with attributes at import time, like this:
import unittest

class TestCPU(unittest.TestCase):
    def test_core(self):
        print 'Test CPU Core'
        assert 1

TestCPU.test_core.__dict__['testme'] = True
You may also want to try the --pdb option to nosetests; it will bring up the debugger on error so you can dive in and see what is wrong. It is definitely my second favorite thing in life.
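For example, combined with the attribute selection from the question:

nosetests --pdb -s -a testme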
I am sure there are multiple ways to achieve this, but here is one way you can do it.
Inside your test class, create a method that reads your JSON file and builds a global list of test names to skip, e.g. skiptests.
All you need to do now is use a setup decorator for every test case method in your suite. Within this decorator, check whether the current function is in skiptests. If it is, wrap it with nose's nose.tools.nottest decorator, which marks a function so it is not collected as a test; otherwise, return the function unchanged.
To put it in code:
import nose.tools
from nose.tools import with_setup

def setup_test_method1(func):
    if func.__name__ not in skiptests:
        return func
    else:
        return nose.tools.nottest(func)

@with_setup(setup_test_method1)
def test_method1():
    pass
I have not tested this code, but I think we can invoke a decorator within another decorator, in which case this could work.
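As a complement, one possible sketch of how skiptests could be built from the JSON file; the file name and the "skip" key are assumptions, not part of the original answer:

import json

# e.g. test_config.json: {"skip": ["test_method1", "test_method2"]}
with open('test_config.json') as f:
    skiptests = set(json.load(f).get('skip', []))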
I'm writing a test system using py.test and looking for a way to make the execution of particular tests depend on the results of other tests.
For example, we have a standard test class:
import pytest

class Test_Smoke:
    def test_A(self):
        pass

    def test_B(self):
        pass

    def test_C(self):
        pass
test_C() should be executed only if test_A() and test_B() passed, and skipped otherwise.
I need a way to do something like this at the test or test class level (e.g. Test_Perfo executes only if all of Test_Smoke passed), and I'm unable to find a solution using standard methods (like @pytest.mark.skipif).
Is it possible at all with pytest?
You might want to have a look at pytest-dependency. It is a plugin that allows you to skip some tests if some other test had failed.
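A minimal sketch of the idea, written with plain functions for brevity (marker usage as in the pytest-dependency docs): test_C is skipped automatically unless test_A and test_B passed.

import pytest

@pytest.mark.dependency()
def test_A():
    pass

@pytest.mark.dependency()
def test_B():
    pass

@pytest.mark.dependency(depends=["test_A", "test_B"])
def test_C():
    pass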
Consider also pytest-depends:
import os
import pytest

def test_build_exists():
    assert os.path.exists(BUILD_PATH)  # BUILD_PATH assumed to be defined elsewhere

@pytest.mark.depends(on=['test_build_exists'])
def test_build_version():
    # this will be skipped if `test_build_exists` fails
    ...