Okay so I have this fixture:
@pytest.fixture
def dummy_name():
    def func(name="Dummy Name"):
        num = 2
        while True:
            yield name
            if num > 2:
                tags = name.rsplit("_", 1)
                name = f"{tags[0]}_{num}"
            else:
                name += f"_{num}"
            num += 1
    return func
@pytest.fixture
def dummy_admin_name(dummy_name):
    return dummy_name(name="Dummy Admin")
I can use it like this:
def some_function(dummy_admin_name):
    print(next(dummy_admin_name))
    print(next(dummy_admin_name))
    print(next(dummy_admin_name))
Which returns:
Dummy Admin
Dummy Admin_2
Dummy Admin_3
But the problem is that I have to create that secondary fixture all the time for different strings, because calling the base dummy_name fixture directly doesn't work:
def some_function(dummy_name):
    print(next(dummy_name(name="Dummy Admin")))
    print(next(dummy_name(name="Dummy Admin")))
    print(next(dummy_name(name="Dummy Admin")))
Which returns:
Dummy Admin
Dummy Admin
Dummy Admin
Clearly that's because each call with the argument creates a fresh generator, so the name value resets every time.
How could I make that input persistent for future calls without a new fixture?
As you wrote, you cannot call the factory returned by the fixture multiple times, as each call re-creates the generator. However, you can just create the generator once before using it:
def test_something(dummy_name):
    dummy_admin = dummy_name(name="Dummy Admin")
    print(next(dummy_admin))
    print(next(dummy_admin))
    print(next(dummy_admin))
    dummy_user = dummy_name(name="Dummy User")
    print(next(dummy_user))
    print(next(dummy_user))
which prints:
Dummy Admin
Dummy Admin_2
Dummy Admin_3
Dummy User
Dummy User_2
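If you would rather keep calling dummy_name(name=...) inside a test and have it remember the name, one possible variation (my sketch, not part of the original fixture) is to let the factory cache one generator per name:

import pytest

@pytest.fixture
def dummy_name():
    cache = {}  # one generator per name, shared within a single test

    def _numbered(name):
        yield name
        num = 2
        while True:
            yield f"{name}_{num}"
            num += 1

    def func(name="Dummy Name"):
        # Reuse the existing generator for this name, so repeated calls
        # continue the sequence instead of restarting it.
        if name not in cache:
            cache[name] = _numbered(name)
        return cache[name]

    return func

With that variant, the example from the question prints Dummy Admin, Dummy Admin_2, Dummy Admin_3 even though dummy_name(name="Dummy Admin") is called three times.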
Here are my snippets:
module1.py
class Client:
    def __init__(self):
        self.api_client = APIClient()

    def get_resources(self):
        # this method gets some data
        # and returns a list of dictionaries
        return [{k1: v1}, {k2: v2}, ...]
module2.py
config = {}

def add_config(resource):
    # process the data passed in by resource
    config[resource[k1]] = data

def instantiate_config():
    for item in Client().get_resources():
        add_config(item)
So I want to test this instantiate_config with pytest. Here is my attempt:
@patch('module1.Client.get_resources')
def test_instantiate_config(self, client_mock):
    dummy_data = {some_dummy_data}
    # it is a copy of the list returned from Client().get_resources()
    client_mock.get_resources.returned_values = dummy_data
    instantiate_config()
    assert 'key1' in config  # config is the same config from module2
But this gives an empty config dict. I don't know if it is possible to mock Client().get_resources() so that it returns a fixed value, and to have that value passed automatically as the argument to add_config. If it is not, what is the best way to test the instantiate_config function? Not sure if this is clear or not, because it is a bit of a long story.
Your mock already represents the method get_resources, and you have a typo in "returned_values". Change it to: client_mock.return_value = dummy_data
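For illustration, a minimal sketch of the corrected test based on the snippets above (the patch target and the dummy keys are assumptions, since the real module layout and data were elided):

from unittest.mock import patch

import module2
from module2 import instantiate_config

@patch('module1.Client.get_resources')
def test_instantiate_config(client_mock):
    # client_mock stands in for the get_resources method itself, so its
    # return_value is what Client().get_resources() returns inside
    # instantiate_config().
    # Note: Client() is still constructed for real here, so its __init__
    # must work in the test environment.
    client_mock.return_value = [{'key1': 'data1'}, {'key2': 'data2'}]
    instantiate_config()
    assert 'key1' in module2.config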
I created a class to make my life easier while doing some integration tests involving workers and their contracts. The code looks like this:
class ContractID(str):
    contract_counter = 0
    contract_list = list()

    def __new__(cls):
        cls.contract_counter += 1
        new_entry = super().__new__(cls, f'Some_internal_name-{cls.contract_counter:010d}')
        cls.contract_list.append(new_entry)
        return new_entry

    @classmethod
    def get_contract_no(cls, worker_number):
        return cls.contract_list[worker_number - 1]  # -1 so WORKER1 has contract #1 and not #0 etc.
When I'm unit-testing the class, I'm using the following code:
import pytest

from test_helpers import ContractID

@pytest.fixture
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    return test_string_1, test_string_2, test_string_3

def test_contract_id(get_contract_numbers):
    assert get_contract_numbers[0] == 'Some_internal_name-0000000001'
    assert get_contract_numbers[1] == 'Some_internal_name-0000000002'
    assert get_contract_numbers[2] == 'Some_internal_name-0000000003'

def test_contract_id_get_contract_no(get_contract_numbers):
    assert ContractID.get_contract_no(1) == 'Some_internal_name-0000000001'
    assert ContractID.get_contract_no(2) == 'Some_internal_name-0000000002'
    assert ContractID.get_contract_no(3) == 'Some_internal_name-0000000003'
    with pytest.raises(IndexError) as py_e:
        ContractID.get_contract_no(4)
    assert py_e.type == IndexError
However, when I try to run these tests, the second one (test_contract_id_get_contract_no) fails because it does not raise the error, since there are more than three values by then. Furthermore, when I run all my tests in my test/ folder, even the first test (test_contract_id) fails, which is probably because I use the class in other tests that run before this one.
After reading this book, my understanding of fixtures was that they provide objects as if they had never been created before, which is obviously not the case here. Is there a way to tell the tests to use the class as if it hadn't been used anywhere else before?
If I understand correctly, you want to run the fixture as setup code, so that your class has exactly 3 instances. If the fixture is function-scoped (the default), it is indeed run before each test, and each run creates 3 new instances of your class. If you want to reset your class after a test, you have to do that yourself - there is no way pytest can guess what you want to do here.
So, a working solution would be something like this:
@pytest.fixture(autouse=True)
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    yield
    ContractID.contract_counter = 0
    ContractID.contract_list.clear()

def test_contract_id():
    ...
Note that I did not yield the test strings, as you don't need them in the shown tests - if you need them, you can yield them, of course. I also added autouse=True, which makes sense if you need this for all tests, so you don't have to reference the fixture in each test.
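If you do need the strings in a test, a variant of the same fixture (a sketch) can yield them and still reset the class afterwards:

@pytest.fixture(autouse=True)
def get_contract_numbers():
    contract_ids = (ContractID(), ContractID(), ContractID())
    yield contract_ids  # tests that request the fixture receive the tuple
    # teardown: reset the class state after each test
    ContractID.contract_counter = 0
    ContractID.contract_list.clear()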
Another possibility would be to use a session-scoped fixture. In this case the setup would be done only once. If that is what you need, you can use this instead:
@pytest.fixture(autouse=True, scope="session")
def get_contract_numbers():
    test_string_1 = ContractID()
    test_string_2 = ContractID()
    test_string_3 = ContractID()
    yield
Given a class with methods that take only self as input:
class ABC():
    def __init__(self, input_dict):
        self.variable_0 = input_dict['variable_0']
        self.variable_1 = input_dict['variable_1']
        self.variable_2 = input_dict['variable_2']
        self.variable_3 = input_dict['variable_3']

    def some_operation_0(self):
        return self.variable_0 + self.variable_1

    def some_operation_1(self):
        return self.variable_2 + self.variable_3
First question: Is this very bad practice? Should I just refactor some_operation_0(self) to explicitly take the necessary inputs, some_operation_0(self, variable_0, variable_1)? If so, the testing is very straightforward.
Second question: What is the correct way to setup my unit test on the method some_operation_0(self)?
Should I set up a fixture in which I initialize input_dict, and then instantiate the class with a mock object?
@pytest.fixture
def generator_inputs():
    f = open('inputs.txt', 'r')
    input_dict = eval(f.read())
    f.close()
    mock_obj = ABC(input_dict)

def test_some_operation_0():
    assert mock_obj.some_operation_0() == some_value
(I am new to both python and general unit testing...)
Those methods do take an argument: self. There is no need to mock anything. Instead, you can simply create an instance, and verify that the methods return the expected value when invoked.
For your example:
def test_abc():
    a = ABC({'variable_0': 0, 'variable_1': 1, 'variable_2': 2, 'variable_3': 3})
    assert a.some_operation_0() == 1
    assert a.some_operation_1() == 5
If constructing an instance is very difficult, you might want to change your code so that the class can be instantiated from standard in-memory data structures (e.g. a dictionary). In that case, you could create a separate function that reads/parses data from a file and uses the "data-structure-based" __init__ method, e.g. make_abc() or a class method.
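As an illustration, such a constructor could look roughly like this (a sketch: the classmethod name from_file and the JSON file format are my assumptions, not something from the question):

import json

class ABC():
    def __init__(self, input_dict):
        self.variable_0 = input_dict['variable_0']
        self.variable_1 = input_dict['variable_1']
        self.variable_2 = input_dict['variable_2']
        self.variable_3 = input_dict['variable_3']

    @classmethod
    def from_file(cls, path):
        # File reading/parsing lives here; tests can bypass it and
        # construct the instance from a plain dict instead.
        with open(path) as f:
            return cls(json.load(f))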
If this approach does not generalize to your real problem, you could imagine providing programmatic access to the key names or other metadata that ABC recognizes or cares about. Then, you could programmatically construct a "defaulted" instance, e.g. an instance where every value in the input dict is a default-constructed value (such as 0 for int):
class ABC():
    PROPERTY_NAMES = ['variable_0', 'variable_1', 'variable_2', 'variable_3']

    def __init__(self, input_dict):
        # implementation omitted for brevity
        pass

    def some_operation_0(self):
        return self.variable_0 + self.variable_1

    def some_operation_1(self):
        return self.variable_2 + self.variable_3

def test_abc():
    a = ABC({name: 0 for name in ABC.PROPERTY_NAMES})
    assert a.some_operation_0() == 0
    assert a.some_operation_1() == 0
I'm creating a class using tkinter that lets you input multiple products' information, and I've got everything else down except for changing the entry fields to set values for the other products.
I'm putting the product changeover process into a function called saveVars, which saves the entered information to the specific product variable, then clears the entry fields and switches saveVars to save to the next product variable.
i = 1

def saveVars(i):
    if i == 1:
        product1.productName = self.prodName.get()
        product1.productID = self.prodID.get()
        product1.productSize = self.prodSize.get()
        product1.productPrice = self.prodPrice.get()
        product1.productQuant = self.quantity.get()
    elif i == 2:
        product2.productName = self.prodName.get()
        product2.productID = self.prodID.get()
        product2.productSize = self.prodSize.get()
        product2.productPrice = self.prodPrice.get()
        product2.productQuant = self.quantity.get()
    elif i == 3:
        product3.productName = self.prodName.get()
        product3.productID = self.prodID.get()
        product3.productSize = self.prodSize.get()
        product3.productPrice = self.prodPrice.get()
        product3.productQuant = self.quantity.get()
    newProduct()
    i += 1
    return i
I'm expecting it to switch the variable the entries are saved to to the next product by incrementing i: the function returns the new i, which should then make the next call save the entries to the next product variable. But it keeps telling me that I'm 'missing 1 required positional argument: 'i''.
You are trying to call the function without a parameter, because the command attribute does not take parameters, just the name of a function.
To pass parameters you could use partial from the functools package.
Import statement:
from functools import partial
Your call to the function would then look like:
addItemButton = Button(window, text="Add to Cart", fg='black', bg='yellow', width=10, height=2,
                       command=partial(saveVars, i))
addItemButton.place(x=800, y=375)
You can set the initial value of i in your class using a global variable instead of declaring it outside the class. You can save it anywhere you wish and pass it when calling the function.
It also seems like you are calling saveVars without the parameter in some cases. To prevent this you can set a default value for i, e.g.
def saveVars(i=0):
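Going back to the partial approach above, here is a minimal self-contained sketch of wiring a parameter into a Button command (the widget layout and the print body are placeholders, not the original code):

import tkinter as tk
from functools import partial

def saveVars(i):
    print(f"saving product {i}")

window = tk.Tk()
i = 1
# partial binds the value of i at the time the button is created.
addItemButton = tk.Button(window, text="Add to Cart",
                          command=partial(saveVars, i))
addItemButton.pack()
window.mainloop()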
I'm trying to refactor a very repetitive section of code.
I have a class that has two instance variables that get updated:
class Alerter(object):
    'Sends email regarding information about unmapped positions and trades'

    def __init__(self, job):
        self.job = job
        self.unmappedPositions = None
        self.unmappedTrades = None
After my code goes through some methods, it creates a table and updates self.unmappedPositions or self.unmappedTrades:
def load_positions(self, filename):
    unmapped_positions_table = etl.fromcsv(filename)
    if 'positions' in filename:
        return self.add_to_unmapped_positions(unmapped_positions_table)
    else:
        return self.add_to_unmapped_trades(unmapped_positions_table)
So I have two functions that essentially do the same thing:
def add_to_unmapped_trades(self, table):
    if self.unmappedTrades:
        Logger.info("Adding to unmapped")
        self.unmappedTrades = self.unmappedTrades.cat(table).cache()
    else:
        Logger.info("Making new unmapped")
        self.unmappedTrades = table
    Logger.info("Data added to unmapped")
    return self.unmappedTrades
And:
def add_to_unmapped_positions(self, table):
    if self.unmappedPositions:
        Logger.info("Adding to unmapped")
        self.unmappedPositions = self.unmappedPositions.cat(table).cache()
    else:
        Logger.info("Making new unmapped")
        self.unmappedPositions = table
    Logger.info("Data added to unmapped")
    return self.unmappedPositions
I tried making it one method that takes a third argument and then figures out what to update, the third argument being the initialized variable, either self.unmappedPositions or self.unmappedTrades. However, that doesn't seem to work. Any other suggestions?
It looks like you've had the key insight that you can write this function independent of any particular storage:
def add_to_unmapped(unmapped, table):
    if unmapped:
        Logger.info("Adding to unmapped")
        unmapped = unmapped.cat(table).cache()
    else:
        Logger.info("Making new unmapped")
        unmapped = table
    Logger.info("Data added to unmapped")
    return unmapped
This is actually good practice on its own. For instance, you can write unit tests for it, and since you have two tables (as you do), you only have to write the implementation once.
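Here is roughly what such a test could look like (a sketch; FakeTable and the module name alerter are invented stand-ins for whatever table type provides cat()/cache() and wherever the helper actually lives):

from alerter import add_to_unmapped  # placeholder module name

class FakeTable:
    # Minimal stand-in mimicking the cat()/cache() interface used above.
    def __init__(self, rows):
        self.rows = rows

    def cat(self, other):
        return FakeTable(self.rows + other.rows)

    def cache(self):
        return self

def test_add_to_unmapped():
    first = FakeTable([1, 2])
    second = FakeTable([3])
    # With no existing table, the new table is returned unchanged.
    assert add_to_unmapped(None, first) is first
    # With an existing table, the two are concatenated.
    assert add_to_unmapped(first, second).rows == [1, 2, 3]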
If you consider what, abstractly, your two add_to_unmapped_* functions do, they:
1. compute the new table;
2. save the new table in the object; and
3. return the new table.
We've now separated out step 1, and you can refactor the wrappers:
class Alerter:
    def add_to_unmapped_trades(self, table):
        self.unmappedTrades = add_to_unmapped(self.unmappedTrades, table)
        return self.unmappedTrades
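The positions wrapper can presumably follow the same pattern inside the same class:

    def add_to_unmapped_positions(self, table):
        self.unmappedPositions = add_to_unmapped(self.unmappedPositions, table)
        return self.unmappedPositions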