Load test data from a file in pytest based on environment - Python

So I'm loading test data from a different file based on the environment the tests are meant to run in:
TestData/DevTestData.py contains:
data = {"accessToken": "Random Access Token"}
Then I have this set up in the conftest.py file:
To get the CLI parameter:
def pytest_addoption(parser):
    parser.addoption('--environment', action='store')
Then to load the data I use LazySettings from simple-settings as a fixture:
@pytest.fixture
def testData(request):
    return LazySettings("TestData." + request.config.getoption("--environment") + "TestData")
The test class looks like this:
class Test_User_Current():
    userCurrentFacadeInstance = userCurrentGetAPI_Facade.User_Current_API_Facade()

    def test_succesfull_request(self, environmentConfigs, testData):
        self.userCurrentFacadeInstance.getRequest_user_current_API(environmentConfigs, testData).\
            validateSuccessfullStatusCode().\
            validateJsonContents()
The CLI invocation is:
py.test --environment Dev
My problem is that I have to pass testData to every test method rather than passing it to User_Current_API_Facade()'s constructor, and I can't do that for some reason; if I pass it to the constructor instead of the test method (test_succesfull_request()), it does not work.
Do you have any idea of a better way to do this?
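A minimal sketch of one way around this (an assumption, not part of the original question): class attributes like userCurrentFacadeInstance are evaluated at collection time, before any fixture runs, so the facade can instead be built inside a fixture that itself consumes testData. The constructor signature shown here is hypothetical:

import pytest
import userCurrentGetAPI_Facade  # as in the original test module

@pytest.fixture
def userCurrentFacade(environmentConfigs, testData):
    # Hypothetical: assumes the facade's constructor is changed to
    # accept the configs and test data up front.
    return userCurrentGetAPI_Facade.User_Current_API_Facade(environmentConfigs, testData)

class Test_User_Current():
    def test_succesfull_request(self, userCurrentFacade):
        userCurrentFacade.getRequest_user_current_API().\
            validateSuccessfullStatusCode().\
            validateJsonContents()

Each test then only requests the ready-made facade, and the fixture keeps the environment plumbing in one place.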

Related

Pass parameters from called function when mocking

I am new to the more "advanced" usage of Python. I decided to learn and implement unit testing for all my scripts.
Here is my issue:
I have a function from an external package I have made myself, called gendiag. This package has a function notify that sends an email to a set recipient, defined in a config file, but leaves the subject and the message as parameters:
gendiag.py :
import subprocess
import yaml
#...
try:
    configfile = BASE_PACKAGE_DIR.joinpath('config.yml')
    with open(configfile, "r") as f:
        config = yaml.safe_load(f)
except Exception as e:
    print("Uh Oh!")

def notify(subject, message):
    address = config['mail']['nipt']
    command = f'echo -e "{message}" | mail -s "{subject}" "{address}"'
    subprocess.run(command, shell=True)
In another project called watercheck, which imports gendiag, I am using this function to gather some info from every directory and send it as an email:
watercheck.py :
import os
import gendiag as gdl
#...
def scan_water_metrics(indir_list):
    for dir in indir_list:
        # Do the things; this is a dummy example to simplify
        some_info = os.path.basename(dir)
        subject = "Houlala"
        message = "I have parsed this directory, amazing.\n"
        message += some_info
        gdl.notify(subject, message)
Now in my test_watercheck.py, I would like to test that this function works with already created dummy data. But of course, I don't want to send an email to the rest of the world every time I use pytest to check that email sending works. Instead, I was thinking I would create the following mock function in conftest.py:
conftest.py :
import subprocess
from unittest import mock
import pytest
import gendiag

@pytest.fixture
def mock_scan_water_metrics():
    mck = gendiag
    mck.notify = mock.Mock(
        return_value=subprocess.check_output(
            "echo 'Hmm interesting' | mail -s Test my_own_email@gmule.com", shell=True
        )
    )
    return mck
And then pass this mock to my test function in test_watercheck.py :
test_watercheck.py :
import gendiag
import pytest
from unittest import mock
from src import watercheck
def test_scan_water_metrics(mock_scan_water_metrics):
    indir_list = ["tests/outdir/CORRECT_WATERCHECK", "tests/outdir/BADWATER_WATERCHECK"]
    watercheck.scan_water_metrics(indir_list)
So this works in the sense that I am able to override the email, but I would still like to test that some_info is collected properly, and for that I need access to the subject and message passed to the mock function. This is the very confusing part for me. I don't doubt the answer is probably out there, but my understanding of the topic is too limited to find it, or even to formulate my question properly.
I have tried to read more about the mock.Mock object to see if I could collect the parameters somewhere, and I have tried the following to see if I could access them:
My attempt in conftest.py :
@pytest.fixture
@mock.patch("gendiag.notify_nipt")
def mock_scan_water_metrics(event):
    print("Event : " + event)
    args, kwargs = event.call_args
    print("Args : " + args)
    print("Kwargs : " + kwargs)
    mck = gendiag
    mck.notify = mock.Mock(
        return_value=subprocess.check_output(
            "echo 'Hmm interesting' | mail -s Test my_own_email@gmule.com", shell=True
        )
    )
    return mck
I was hoping that somewhere in args I would find my two parameters, but when starting pytest I get an error that the module "gendiag" does not exist, even though I had imported it everywhere just to be sure. I imagine the line causing it is the decorator: @mock.patch("gendiag.notify_nipt"). I have tried @mock.patch("gdl.notify_nipt") as well, since that is how it is called in the main function, with no success.
To be honest, I am really not sure where to go from here; it's getting too complex for me for now. How can I simply access the parameters given to the mocked function?
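For reference, a minimal sketch of one way to do this (my assumption, not from the original post): replace notify with a plain Mock via pytest's built-in monkeypatch fixture, run the code under test, then read the recorded arguments back from call_args:

from unittest import mock
from src import watercheck

def test_scan_water_metrics_collects_info(monkeypatch):
    fake_notify = mock.Mock()
    # watercheck calls gdl.notify(...), so replacing the attribute on the
    # module it imported intercepts the call without sending any mail.
    monkeypatch.setattr(watercheck.gdl, "notify", fake_notify)
    watercheck.scan_water_metrics(["tests/outdir/CORRECT_WATERCHECK"])
    # call_args records the arguments of the most recent call
    subject, message = fake_notify.call_args.args
    assert subject == "Houlala"
    assert "CORRECT_WATERCHECK" in message

Note that call_args.args needs Python 3.8+; on older versions, unpack with args, kwargs = fake_notify.call_args instead.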

Run pytest markers based on command line argument

I have a python file that reads from a configuration file and initializes certain variables, followed by a number of test cases marked with pytest markers.
I run different sets of test cases in parallel by selecting these markers, like this: pytest -m "markername" -n 3
The problem now is that I don't have a single configuration file anymore. There are multiple configuration files, and I need a way to tell the tests at execution time, from the command line, which configuration file to use.
What I tried:
I wrapped the reading of the config file into a function with a conf argument.
I added a conftest.py file and added a command-line option conf using pytest_addoption:
def pytest_addoption(parser):
    parser.addoption("--conf", action="append", default=[],
                     help="Name of the configuration file to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'conf' in metafunc.fixturenames:
        metafunc.parametrize("conf", metafunc.config.option.conf)
and then tried pytest -q --conf="configABC" -m "markername", in the hope that I could read that configuration file to initialize certain parameters and pass them on to the test cases carrying the given marker. But nothing ever happens, and I wonder... I wonder how... I wonder why...
If I run pytest -q --conf="configABC", the config file gets read, but all the test cases run.
However, I only need to run the specific test cases that use variables initialized through the config file given on the command line. And I want to use markers because I'm also using parametrization and running tests in parallel. How do I get which configuration file to use from the command line? Am I messing this up?
Edit 1:
# contents of testcases.py
import json
import pytest
...

def getconfig(conf):
    config = open(str(conf) + '_Configuration.json', 'r')
    data = config.read()
    data_obj = json.loads(data)
    globals()['ID'] = data_obj['Id']
    globals()['Codes'] = data_obj['Codes']  # list [Code_1, Code_2, Code_3]
    globals()['Uname'] = data_obj['IM_User']
    globals()['Pwd'] = data_obj['IM_Password']
    # return ID, Codes, User, Pwd

def test_parms():
    # Returns a list of tuples [(ID, Code_1, Uname, Pwd), (ID, Code_2, Uname, Pwd), (ID, Code_3, Uname, Pwd)]
    ...
    return l

@pytest.mark.testA
@pytest.mark.parametrize("ID, Code, Uname, Pwd", test_parms())
def testA(ID, Code, Uname, Pwd):
    ...
    # do something
    ...

@pytest.mark.testB
@pytest.mark.parametrize("ID, Code, Uname, Pwd", test_parms())
def testB(ID, Code, Uname, Pwd):
    ...
    # do something else
    ...
You seem to be on the right track, but are missing some connections and details.
First, your option looks a bit strange - as far as I understand, you just need a string instead of a list:
conftest.py
def pytest_addoption(parser):
    parser.addoption("--conf", action="store",
                     help="Name of the configuration file"
                          " to pass to test functions")
In your test code you read the config file, and based on your code it contains a JSON dictionary of parameter lists, e.g. something like:
{
    "Id": [1, 2, 3],
    "Codes": ["a", "b", "c"],
    "IM_User": ["User1", "User2", "User3"],
    "IM_Password": ["Pwd1", "Pwd2", "Pwd3"]
}
What you need for parametrization is a list of parameter tuples, and you also want to read the list only once. Here is an example implementation that reads the list on first access and stores it in a dictionary (provided your config file looks as shown above):
import json

configs = {}

def getconfig(conf):
    if conf not in configs:
        # read the configuration if not read yet
        with open(conf + '_Configuration.json') as f:
            data_obj = json.load(f)
        ids = data_obj['Id']
        codes = data_obj['Codes']
        users = data_obj['IM_User']
        passwords = data_obj['IM_Password']
        # assume that all lists have the same length
        config = list(zip(ids, codes, users, passwords))
        configs[conf] = config
    return configs[conf]
Now you can use these parameters to parametrize your tests:
import pytest

def pytest_generate_tests(metafunc):
    conf = metafunc.config.getoption("--conf")
    # only parametrize tests with the correct parameters
    if conf and metafunc.fixturenames == ["uid", "code", "name", "pwd"]:
        metafunc.parametrize("uid, code, name, pwd", getconfig(conf))

@pytest.mark.testA
def test_a(uid, code, name, pwd):
    print(uid, code, name, pwd)

@pytest.mark.testB
def test_b(uid, code, name, pwd):
    print(uid, code, name, pwd)

def test_c():
    pass
In this example, both test_a and test_b will be parametrized, but not test_c.
If you now run the test (with the json file name "ConfigA_Configuration.json"), you get something like:
$ python -m pytest -v --conf=ConfigA -m testB test_params_from_config.py
...
collected 7 items / 4 deselected / 3 selected

test_params_from_config.py::test_b[1-a-User1-Pwd1] PASSED
test_params_from_config.py::test_b[2-b-User2-Pwd2] PASSED
test_params_from_config.py::test_b[3-c-User3-Pwd3] PASSED

All my test functions are loading a fixture that is in the conftest.py, even when they don't need it

I have 2 different test files and some fixtures in my conftest.py:
1)"Test_dummy.py" which contains this function:
def test_nothing():
return 1
2)"Test_file.py". which contains this function:
def test_run(excelvalidation_io):
dfInput, expectedOutput=excelvalidation_io
output=run(dfInput)
for key, df in expectedOutput.items():
expected=df.fillna(0)
real=output[key].fillna(0)
assert expected.equals(real)
3)"conftest.py" which contains these fixtures:
import glob
import pytest

def pytest_generate_tests(metafunc):
    inputfiles = glob.glob(DATADIR + "**_input.csv", recursive=False)
    iofiles = [(ifile, getoutput(ifile)) for ifile in inputfiles]
    metafunc.parametrize("csvio", iofiles)

@pytest.fixture
def excelvalidation_io(csvio):
    dfInput, expectedOutput = csvio
    return (dfInput, expectedOutput)

@pytest.fixture
def client():
    client = app.test_client()
    return client
When I run the tests, "Test_dummy.py" also tries to load the "excelvalidation_io" fixture, and it generates this error:
In test_nothing: function uses no argument 'csvio'
I have tried placing the fixture inside "Test_file.py" only, and the problem is solved, but I read that it's good practice to keep all the fixtures in the conftest file.
The function pytest_generate_tests is a special hook that is called for every collected test, so in this case you need to check whether the test actually uses the csvio argument (directly or through the excelvalidation_io fixture) and do nothing otherwise, as in:
def pytest_generate_tests(metafunc):
    if "excelvalidation_io" in metafunc.fixturenames:
        inputfiles = glob.glob(DATADIR + "**_input.csv", recursive=False)
        iofiles = [(ifile, getoutput(ifile)) for ifile in inputfiles]
        metafunc.parametrize("csvio", iofiles)

Passing a URL to test against with nose-testconfig

Is there a way to configure nose-testconfig so that I can pass a URL on the command line, i.e. nosetests --tc=systest_url test_suit.py?
I need to run my Selenium tests against dev and systest environments when performing builds on TeamCity. Our team decided to use Python for the UI tests, and I'm more of a Java guy, so I'm trying to figure out how the plugin works. It looks like I can store the URL in YAML and pass the file via the --tc option, but that doesn't seem to work.
The code I inherited looks like this:
URL = config['test' : 'https://www.google.com', ]

class BaseTestCase(unittest.TestCase, Navigation):
    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        cls.driver.implicitly_wait(5)
        cls.driver.maximize_window()
        cls.driver.get(URL)
which obviously does not work.
There is a plugin for nose, nose-testconfig. You can specify a config file and pass the filename to nose's --tc-file argument.
config.ini
[myapp_servers]
main_server = 10.1.1.1
secondary_server = 10.1.1.2
In your test file you can load the config.
test_app.py
from testconfig import config
def test_foo():
main_server = config['myapp_servers']['main_server']
Then call nose with the arguments:
nosetests -s --tc-file example_cfg.ini
As described in the docs, you can use other config file types like YAML, JSON, or even Python modules.
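For example, a YAML version of the config above might look like this (my assumption of the equivalent form; per the nose-testconfig docs, YAML support needs PyYAML and the format is selected with --tc-format):

# example_cfg.yaml
myapp_servers:
  main_server: 10.1.1.1
  secondary_server: 10.1.1.2

nosetests -s --tc-file example_cfg.yaml --tc-format yaml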
Using the nose-testconfig --tc option you can override a configuration value. Separate the key from the value using a colon. For example,
nosetests test.py --tc=url:dev.example.com
will make the value available in config['url'].
from testconfig import config

def test_url_is_dev():
    assert 'dev' in config['url']
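Tying this back to the inherited code, a hedged sketch (my assumption, not part of either answer): the broken URL line can be replaced by a lookup into testconfig's config dict, with the URL supplied as nosetests --tc=url:https://systest.example.com (the Navigation mixin is omitted here for brevity):

import unittest
from selenium import webdriver
from testconfig import config

class BaseTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Fall back to a default when no --tc=url:... override is given
        url = config.get('url', 'https://www.google.com')
        cls.driver = webdriver.Firefox()
        cls.driver.implicitly_wait(5)
        cls.driver.maximize_window()
        cls.driver.get(url)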

How can I use a pytest fixture as a parametrize argument?

I have a pytest test like so:
email_two = generate_email_two()

@pytest.mark.parametrize('email', ['email_one@example.com', email_two])
def test_email_thing(self, email):
    ...  # Run some test with the email parameter
Now, as part of refactoring, I have moved the line:
email_two = generate_email_two()
into its own fixture (in a conftest.py file), as it is used in various other places. However, is there some way, other than importing the fixture function directly, of referencing it in this test? I know that funcargs are normally the way of calling a fixture, but in this context I am not inside a test function when I want to call the fixture.
I ended up doing this by having a for loop in the test function, looping over each of the emails. Not the best way, in terms of test output, but it does the job.
import pytest
import fireblog.login as login

pytestmark = pytest.mark.usefixtures("test_with_one_theme")

class Test_groupfinder:
    def test_success(self, persona_test_admin_login, pyramid_config):
        emails = ['id5489746@mockmyid.com', persona_test_admin_login['email']]
        for email in emails:
            res = login.groupfinder(email)
            assert res == ['g:admin']

    def test_failure(self, pyramid_config):
        fake_email = 'some_fake_address@example.com'
        res = login.groupfinder(fake_email)
        assert res == ['g:commenter']
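A common alternative (my addition, not part of the original answer) is to parametrize with the fixture's name and resolve it at run time with request.getfixturevalue, which keeps one test result per email:

import pytest

@pytest.mark.parametrize('email', ['email_one@example.com', 'email_two'])
def test_email_thing(email, request):
    if email == 'email_two':
        # Resolve the conftest.py fixture by name at test run time
        email = request.getfixturevalue('email_two')
    ...  # Run some test with the email parameter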
