I have a Python file that reads from a configuration file and initializes certain variables, followed by a number of test cases defined by pytest markers.
I run different sets of test cases in parallel by selecting these markers, like this: pytest -m "markername" -n 3
The problem now is that I no longer have a single configuration file. There are multiple configuration files, and I need a way to specify on the command line, at execution time, which configuration file to use for the test cases.
What I tried:
I wrapped the reading of the config file into a function with a conf argument.
I added a conftest.py file and added a command-line option conf using pytest_addoption.
def pytest_addoption(parser):
    parser.addoption("--conf", action="append", default=[],
                     help="Name of the configuration file to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'conf' in metafunc.fixturenames:
        metafunc.parametrize("conf", metafunc.config.option.conf)
and then tried pytest -q --conf="configABC" -m "markername", in the hope that I could read that configuration file, initialize certain parameters, and pass them on to the test cases carrying the given marker. But nothing ever happens, and I wonder... I wonder how... I wonder why...
If I run pytest -q --conf="configABC", the config file gets read, but all the test cases run.
However, I only need to run the specific test cases that use the variables initialized from the config file given on the command line. And I want to use markers because I'm also using parametrization and running tests in parallel. How do I get the configuration file to use from the command line? Am I messing this up?
Edit 1:
# contents of testcases.py
import json
import pytest
...
...
...

def getconfig(conf):
    with open(str(conf) + '_Configuration.json', 'r') as config:
        data_obj = json.load(config)
    globals()['ID'] = data_obj['Id']
    globals()['Codes'] = data_obj['Codes']  # list [Code_1, Code_2, Code_3]
    globals()['Uname'] = data_obj['IM_User']
    globals()['Pwd'] = data_obj['IM_Password']
    # return ID, Codes, Uname, Pwd

def test_parms():
    # Returns a list of tuples [(ID, Code_1, Uname, Pwd), (ID, Code_2, Uname, Pwd), (ID, Code_3, Uname, Pwd)]
    ...
    ...
    return l

@pytest.mark.testA
@pytest.mark.parametrize("ID, Code, Uname, Pwd", test_parms())
def testA(ID, Code, Uname, Pwd):
    ...
    # do something
    ...

@pytest.mark.testB
@pytest.mark.parametrize("ID, Code, Uname, Pwd", test_parms())
def testB(ID, Code, Uname, Pwd):
    ...
    # do something else
    ...
You seem to be on the right track, but are missing some connections and details.
First, your option looks a bit strange - as far as I understand, you just need a single string instead of a list:
conftest.py
def pytest_addoption(parser):
    parser.addoption("--conf", action="store",
                     help="Name of the configuration file"
                          " to pass to test functions")
In your test code you read the config file, and based on your code it contains a JSON dictionary of parameter lists, e.g. something like:
{
    "Id": [1, 2, 3],
    "Codes": ["a", "b", "c"],
    "IM_User": ["User1", "User2", "User3"],
    "IM_Password": ["Pwd1", "Pwd2", "Pwd3"]
}
What you need for parametrization is a list of parameter tuples, and you also want to read the file only once. Here is an example implementation that reads the list on first access and caches it in a dictionary (provided your config file looks as shown above):
import json

configs = {}

def getconfig(conf):
    if conf not in configs:
        # read the configuration if not read yet
        with open(conf + '_Configuration.json') as f:
            data_obj = json.load(f)
        ids = data_obj['Id']
        codes = data_obj['Codes']
        users = data_obj['IM_User']
        passwords = data_obj['IM_Password']
        # assume that all lists have the same length
        config = list(zip(ids, codes, users, passwords))
        configs[conf] = config
    return configs[conf]
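For the sample JSON shown above, getconfig("ConfigA") would return [(1, 'a', 'User1', 'Pwd1'), (2, 'b', 'User2', 'Pwd2'), (3, 'c', 'User3', 'Pwd3')], and any further call with the same name is served from the cache.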
Now you can use these parameters to parametrize your tests:
def pytest_generate_tests(metafunc):
    conf = metafunc.config.getoption("--conf")
    # only parametrize tests with the correct parameters
    if conf and metafunc.fixturenames == ["uid", "code", "name", "pwd"]:
        metafunc.parametrize("uid, code, name, pwd", getconfig(conf))
@pytest.mark.testA
def test_a(uid, code, name, pwd):
    print(uid, code, name, pwd)

@pytest.mark.testB
def test_b(uid, code, name, pwd):
    print(uid, code, name, pwd)

def test_c():
    pass
In this example, both test_a and test_b will be parametrized, but not test_c.
If you now run the tests (with the JSON file named "ConfigA_Configuration.json"), you get something like:
$ python -m pytest -v --conf=ConfigA -m testB test_params_from_config.py
...
collected 7 items / 4 deselected / 3 selected

test_params_from_config.py::test_b[1-a-User1-Pwd1] PASSED
test_params_from_config.py::test_b[2-b-User2-Pwd2] PASSED
test_params_from_config.py::test_b[3-c-User3-Pwd3] PASSED
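Because the parametrization happens at collection time, this also combines with pytest-xdist as in your original invocation; something like pytest --conf=ConfigA -m testB -n 3 should simply distribute the parametrized test cases across the workers.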
So I'm loading test data from a different file based on the environment the tests are meant to run in:
TestData/DevTestData.py contains:
data = {"accessToken": "Random Access Token"}
Then I have this set up in the conftest.py file:
To get the CLI parameter:
def pytest_addoption(parser):
    parser.addoption('--environment', action='store')
Then, to load the data, I use LazySettings from simple-settings as a fixture:
@pytest.fixture
def testData(request):
    return LazySettings("TestData." + request.config.getoption("--environment") + "TestData")
The test class looks like this:
class Test_User_Current():
    userCurrentFacadeInstance = userCurrentGetAPI_Facade.User_Current_API_Facade()

    def test_succesfull_request(self, environmentConfigs, testData):
        self.userCurrentFacadeInstance.getRequest_user_current_API(environmentConfigs, testData).\
            validateSuccessfullStatusCode().\
            validateJsonContents()
CLI is:
py.test --environment Dev
My problem is that I have to pass testData to every test method rather than passing it to User_Current_API_Facade()'s constructor, and I can't do that for some reason: if I pass it to the constructor instead of the test method (test_succesfull_request()), it does not work.
Do you guys have any idea how to do this in a better way?
I am new to the more "advanced" usage of Python. I decided to learn and implement unit testing for all my scripts.
Here is my issue:
I have a function in an external package I have made myself, called gendiag. This package has a function "notify" that sends an email to a fixed recipient, defined in a config file, but leaves the subject and the message as parameters:
gendiag.py:
import subprocess
#...

try:
    configfile = BASE_PACKAGE_DIR.joinpath('config.yml')
    with open(configfile, "r") as f:
        config = yaml.safe_load(f)
except Exception as e:
    print("Uh Oh!")

def notify(subject, message):
    adress = config['mail']['nipt']
    command = f'echo -e "{message}" | mail -s "{subject}" "{adress}"'
    subprocess.run(command, shell=True)
In another project called watercheck, which imports gendiag, I use this function to collect some info from every directory and send it as an email:
watercheck.py:
import gendiag as gdl
#...

def scan_water_metrics(indir_list):
    for dir in indir_list:
        # Do the things; this is a dummy example to simplify
        some_info = os.path.basename(dir)
        subject = "Houlala"
        message = "I have parsed this directory, amazing.\n"
        message += some_info
        gdl.notify(subject, message)
Now in my test_watercheck.py I would like to test that this function works, using already created dummy data. But of course, I don't want to send an email to the rest of the world every time I run pytest to check that email sending works. Instead, I was thinking I would create the following mock function in conftest.py:
conftest.py:
import gendiag

@pytest.fixture
def mock_scan_water_metrics():
    mck = gendiag
    mck.notify = mock.Mock(
        return_value=subprocess.check_output(
            "echo 'Hmm interesting' | mail -s Test my_own_email@gmule.com", shell=True
        )
    )
    return mck
And then pass this mock to my test function in test_watercheck.py:
test_watercheck.py:
import gendiag
import pytest
from unittest import mock
from src import watercheck

def test_scan_water_metrics(mock_scan_water_metrics):
    indir_list = ["tests/outdir/CORRECT_WATERCHECK", "tests/outdir/BADWATER_WATERCHECK"]
    watercheck.scan_water_metrics(indir_list)
So this works in the sense that I am able to override the email sending, but I would still like to test that some_info is collected properly, and for that I need access to the subject and message passed to the mock function. And this is the very confusing part for me. I don't doubt the answer is probably out there, but my understanding of the topic is too limited to find it, or even to formulate my question properly.
I have tried to read more about the mock.Mock object to see if I could collect the parameters somewhere, and I tried the following to see if I could access them:
My attempt in conftest.py:
@pytest.fixture
@mock.patch("gendiag.notify_nipt")
def mock_scan_water_metrics(event):
    print("Event : " + str(event))
    args, kwargs = event.call_args
    print("Args : " + str(args))
    print("Kwargs : " + str(kwargs))
    mck = gendiag
    mck.notify = mock.Mock(
        return_value=subprocess.check_output(
            "echo 'Hmm interesting' | mail -s Test my_own_email@gmule.com", shell=True
        )
    )
    return mck
I was hoping that somewhere in args I would find my two parameters, but when starting pytest I get an error that the module "gendiag" does not exist, even though I had imported it everywhere just to be sure. I imagine the line causing it is the decorator @mock.patch("gendiag.notify_nipt"). I have tried @mock.patch("gdl.notify_nipt") as well, since that is how it is called in the main function, with no success.
To be honest, I am really not sure where to go from here; it's getting too complex for me for now. How can I simply access the parameters that are given to the function replaced by the mock?
Say I have this test in tests.py
def test_user(username='defaultuser'):
Case 1
I want to pass the username to test from the command line, something like
$ pytest tests.py::test_user user1 # could be --username=user1
How do I do that?
Case 2
I want to pass a list of usernames to test, like
$ pytest tests.py::test_user "user1, user2, user3"
I want to achieve something like
def tokenize_and_validate(val):
    if not val:
        return ['defaultuser']
    return val.split(',')

@pytest.mark.parametrize("username", tokenize_and_validate(external_param))
def test_user(username):
    pass
How can I do that?
Thank you
To pass a parameter from the command line, you first need to create a generator hook that gets the value from the command line; this hook runs for every test:
def pytest_generate_tests(metafunc):
    # This is called for every test. Only get/set command line arguments
    # if the argument is specified in the list of test "fixturenames".
    option_value = metafunc.config.option.name
    if 'name' in metafunc.fixturenames and option_value is not None:
        metafunc.parametrize("name", [option_value])
Then you can run it from the command line with the argument:
pytest -s tests/my_test_module.py --name abc
To mock the data, you can use fixtures or the built-in unittest mocks.
from unittest import mock

@mock.patch(func_to_mock, side_effect=func_to_replace)
def test_sth(*args):
    pass
Command line options are also available.
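Applied to the gendiag/watercheck question above, a minimal sketch (assuming the module layout from that question; the test name and assertion are made up) that suppresses the email and still lets you inspect subject and message:

from unittest import mock

import gendiag
from src import watercheck

def test_scan_water_metrics_captures_args():
    # Replace notify with a plain Mock, so no mail is ever sent.
    # Patching the attribute on the gendiag module also affects
    # watercheck's "import gendiag as gdl" alias, since both names
    # refer to the same module object.
    with mock.patch.object(gendiag, "notify") as fake_notify:
        watercheck.scan_water_metrics(["tests/outdir/CORRECT_WATERCHECK"])

    # call_args holds the (args, kwargs) of the last call
    args, kwargs = fake_notify.call_args
    subject, message = args
    assert "CORRECT_WATERCHECK" in message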
I have a test suite with a conftest.py defining some options and some fixtures to retrieve them:
def pytest_addoption(parser):
    parser.addoption("--ip", action="store")
    parser.addoption("--port", action="store")

@pytest.fixture
def ip(request):
    return request.config.getoption("ip")

@pytest.fixture
def port(request):
    return request.config.getoption("ip")
(I slipped in a copy-paste error to make a point)
My tests can very eloquently express the options they need:
def test_can_ping(ip):
    ...

def test_can_net_cat(ip, port):
    ...
But ...
I'm trying to avoid duplicating myself here: I have to specify the name of the config parameter in three places to make it work.
I could have avoided the copy-paste error if I had something that looked like this:
# does not exist:
@pytest.option_fixture
def ip(request, parser):
    return request.config.getoption(this_function_name)
or this
def pytest_addoption(parser):
    # does not exist: an as_fixture parameter
    parser.addoption("--ip", action="store", as_fixture=True)
    parser.addoption("--port", action="store", as_fixture=True)
Is there a way to tell pytest to add an option and a corresponding fixture to achieve DRY/SPOT code?
After some experimenting, I came up with something that works. It is probably not the best way to do it, but I find it quite satisfying.
All the code below has been added to the conftest.py module, except for the two tests.
First define a dictionary containing the options data:
options = {
    'port': {'action': 'store', 'help': 'TCP port', 'type': int},
    'ip': {'action': 'store', 'help': 'IP address', 'type': str},
}
We could do without help and type, but they will prove useful later.
Then you can use these options to create the pytest options:
def pytest_addoption(parser):
    for option, config in options.items():
        parser.addoption(f'--{option}', **config)
At this point, pytest --help gives this (note how the help data provides convenient documentation):
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
custom options:
  --port=PORT  TCP port
  --ip=IP      IP address
Finally, we have to define the fixtures. I did this with a make_fixture function, used in a loop when conftest.py is loaded, to dynamically create the fixtures and add them to the global scope of the module (wrapping the lambda creation in a function also avoids the classic late-binding pitfall of closures created in a loop):
def make_fixture(option, config):
    func = lambda request: request.config.getoption(option)
    func.__doc__ = config['help']
    globals()[option] = pytest.fixture()(func)

for option, config in options.items():
    make_fixture(option, config)
Again, the 'help' data is used to build a docstring for the created fixtures and document them. Thus, invoking pytest --fixtures prints this:
...
---- fixtures defined from conftest ----
ip
    IP address
port
    TCP port
Invoking pytest --port 80 --ip 127.0.0.1 with the two following very simple tests seems to validate the trick (here the type data shows its utility: it makes pytest convert the port to an int instead of a string):
def test_ip(ip):
    assert ip == '127.0.0.1'

def test_ip_port(ip, port):
    assert ip == '127.0.0.1'
    assert port == 80
(Very interesting question, I would like to see more like this one)
Instead of changing the pytest decorators, create one of your own:
parse_options = []

@addOption(parse_options)
@pytest.fixture
def ip(...): ...
A decorator doesn't have to modify the function that is passed in. So in this case, look at the function object, use f.__name__ to get the name, and add an entry for it to the parse_options list.
The next step is to modify pytest_addoption to iterate over the list and create the options. By the time that function is executed, the decorators will have done their work, as sketched below.
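A minimal sketch of that idea (addOption and parse_options come from the snippet above; the rest is an assumption, and everything has to live in conftest.py so the decorators run before pytest_addoption is called):

import pytest

parse_options = []

def addOption(registry):
    def decorator(func):
        # record the fixture name; pytest_addoption later turns
        # each entry into a matching --<name> option
        registry.append(func.__name__)
        return func
    return decorator

@pytest.fixture
@addOption(parse_options)
def ip(request):
    return request.config.getoption("ip")

def pytest_addoption(parser):
    for name in parse_options:
        parser.addoption(f"--{name}", action="store")

Note that addOption is applied innermost here, so it sees the plain function and its __name__ before pytest.fixture wraps it.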
I have been using pytest for software testing lately, but I am running into a problem when dynamically parametrizing test fixtures. When testing, I would like to be able to either:
A) Test a specific file by specifying its file name
B) Test all files in the installed root directory
Below is my current conftest.py. What I want it to do is: if you choose option A (--file_name), create a parametrized test fixture using the file name specified; if you choose option B (--all_files), provide a list of all the files as a parametrized test fixture.
import os
import pytest

def pytest_addoption(parser):
    parser.addoption("--file_name", action="store", default=[], help="Specify file-under-test")
    parser.addoption("--all_files", action="store_true", help="Option to test all files in root directory")

@pytest.fixture(scope='module')
def file_name(request):
    return request.config.getoption('--file_name')

def pytest_generate_tests(metafunc):
    if 'file_name' in metafunc.fixturenames:
        if metafunc.config.option.all_files:
            all_files = list_all_files()
        else:
            all_files = "?"
        metafunc.parametrize("file_name", all_files)

def list_all_files():
    root_directory = '/opt/'
    if os.listdir(root_directory):
        # files have a .cool extension that needs to be split out
        return [name.split(".cool")[0] for name in os.listdir(root_directory)
                if os.path.isdir(os.path.join(root_directory, name))]
    else:
        print("No .cool files found in {}".format(root_directory))
The more I fiddle with this, the more I can only get one of the options working but not the other. What do I need to do to get both options (and possibly more) to dynamically create parametrized test fixtures?
Are you looking for something like this?
def pytest_generate_tests(metafunc):
    if 'file_name' in metafunc.fixturenames:
        files = []
        if metafunc.config.option.all_files:
            files = list_all_files()
        fn = metafunc.config.option.file_name
        if fn:
            files.append(fn)
        metafunc.parametrize('file_name', files, scope='module')
No need to define the file_name fixture function.
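With this in place, both invocations should work as intended (option names taken from the question):

pytest --all_files
pytest --file_name=myfile

and since the explicitly named file is simply appended to the list, passing both options tests all files plus the named one.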