Assert String Upper is Called - python

How do I write a test to validate whether a string manipulation was called or not? In this specific situation I'm trying to test that upper was called at least once. Since this is a Python built-in method it's a little different, and I can't wrap my head around it.
# my function that returns an uppercase string
def my_upper(str_to_upper: str) -> str:
    return str(str_to_upper).upper()

# my test that should determine that .upper() was called
def test_my_upper():
    # I assume I need some kind of mock here?
    my_upper('a')
    assert upper.call_count == 1
Update: I need to know if the core implementation of a very large product has changed. If I implement string manipulation and another dev comes in and changes how it works, I want tests to flag it immediately so I can verify whether the implementation they added works.
Another update: here's what I've tried. It's complaining it can't find library 'str'.
from mock import patch

@patch("str.upper")
def test_my_upper(mock_upper):
    my_upper('a')
    assert mock_upper.call_count == 1

Answer for the main question: you can use pytest-mock's spy.
Reference: https://pypi.org/project/pytest-mock/
def test_spy_method(mocker):
    spy = mocker.spy(str, 'upper')
    my_upper('a')
    assert spy.call_count == 1  # more advanced assertions exist, e.g. spy.assert_called_once_with()
Answer for the update: it depends entirely on the tests you have in place. If you have plenty of tests covering each of the behaviors, a change MIGHT be caught; but if the change keeps the test results the same, the tests will simply continue to pass. Nitpick: tests aren't for this kind of concern, but for checking the correct behavior of a piece of software. Thus, if the code changes and the regression tests continue to pass within your coverage threshold, that shouldn't be a problem.
Answer for the other update: indeed, str isn't an imported object, at least not in the way you're used to. From what I understand of the question, you want to know about calls to a given method of str; this use case fits spying perfectly. Another point: you don't need to create a wrapper around a method just to get something testable; the code should "live" apart from the tests.
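For illustration, here is a minimal sketch of spying on a module-level function with pytest-mock; my_module is a hypothetical module containing the my_upper from the question:

import my_module  # hypothetical module containing my_upper

def test_my_upper_is_called(mocker):
    spy = mocker.spy(my_module, 'my_upper')
    assert my_module.my_upper('a') == 'A'
    spy.assert_called_once_with('a')  # the spy records the call...
    assert spy.spy_return == 'A'      # ...while the real return value still flows through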

Related

Setting a variable to a parameter value inline when calling a function

In other languages, like Java, you can do something like this:
String path;
if (exists(path = "/some/path"))
    my_path = path;
the point being that path is being set as part of specifying a parameter to a method call. I know that this doesn't work in Python. It is something that I've always wished Python had.
Is there any way to accomplish this in Python? What I mean here by "accomplish" is to be able to write both the call to exists and the assignment to path, as a single statement with no prior supporting code being necessary.
I'll be OK with it if a way of doing this requires the use of an additional call to a function or method, including anything I might write myself. I spent a little time trying to come up with such a module, but failed to come up with anything that was less ugly than just doing the assignment before calling the function.
UPDATE: @BrokenBenchmark's answer is perfect if one can assume Python 3.8 or better. Unfortunately, I can't yet do that, so I'm still searching for a solution to this problem that will work with Python 3.7 and earlier.
Yes, you can use the walrus operator if you're using Python 3.8 or above:
import os

if os.path.isdir((path := "/some/path")):
    my_path = path
I've come up with something that has some issues, but does technically get me where I was looking to be. Maybe someone else will have ideas for improving this to make it fully cool. Here's what I have:
# In a utility module somewhere
def v(varname, arg=None):
    if arg is not None:
        if not hasattr(v, 'vals'):
            v.vals = {}
        v.vals[varname] = arg
    return v.vals[varname]

# At point of use
if os.path.exists(v('path1', os.path.expanduser('~/.harmony/mnt/fetch_devqa'))):
    fetch_devqa_path = v('path1')
As you can see, this fits my requirement of no extra lines of code. The "variable" involved, path1 in this example, is stored on the function that implements all of this, on a per-variable-name basis.
One can question whether this is concise and readable enough to be worth the bother. For me, the jury is still out. If not for the need to call the v() function a second time, I think I'd be good with it structurally.
The only functional problem I see with this is that it isn't thread-safe. Two copies of the code could run concurrently and run into a race condition between the two calls to v(). The same problem is greatly magnified if one fails to choose unique variable names every time this is used. That's probably the deal killer here.
Can anyone see how to use this to get to a similar solution without the drawbacks?
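One possible mitigation, sketched here under the assumption that per-thread state is acceptable: keep the values in threading.local() so two threads cannot race between the two v() calls. Note this does not help with reused variable names within a single thread, so the uniqueness caveat still applies.

import threading

_store = threading.local()  # each thread sees its own vals dict

def v(varname, arg=None):
    if not hasattr(_store, 'vals'):
        _store.vals = {}
    if arg is not None:
        _store.vals[varname] = arg
    return _store.vals[varname]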

How to properly unit test code inside of a Python function

How can I unit test variables/values that are inside a function?
This is a rather basic question about unit testing (I am using pytest): I want to make sure all my code behaves as expected.
There must be a proper way to do that, but I haven't found it. So far, I try to split up my code into as many functions as possible to get as many return values as possible that I can test. But I am not able to test inside those functions.
Here I can only test if the return value interp_function is working as expected, but in no way to test the rest of the code.
def multidim_interp_function(grid=None, max_error_in_mm=1,
                             max_error_in_deg=1):
    def interp_function(array_of_pts):
        fct_list = create_interp(grid)
        max_error = [max_error_in_mm] * 3
        if len(grid) == 6:
            max_error.extend([max_error_in_deg] * 3)
        return np.column_stack([error * fct_list[i](array_of_pts)
                                for i, error in enumerate(max_error)])
    return interp_function
You don't need to test the implementation of your function. If you wrote foo = 'bar' inside your function, then you don't need to test whether foo correctly has been assigned the value 'bar'; you can just expect that to work. With unit tests you want to be one step more abstract. You want to check whether your function multidim_interp_function returns the correct results given some known input. Treat the function like you'd treat other functions in Python: you wouldn't write a unit test to figure out how max() works internally, instead you'd write a test asserting that max(3, 4) returns the value 4.
Not only is it impractical to test the "internals" of a function; those internals may also change. If you're writing your unit tests and you figure out you have some bug in your code, then you're going to change your code. Or you may later come back and refactor the internals to make them more efficient, to deduplicate some code in your module, or whatever. You wouldn't want to rewrite your unit tests each time, so you shouldn't write them to be too specific. Your unit tests should test the public interface of the function (arguments and return values), so you can be assured that this interface doesn't change (i.e. the function continues to behave the same) even if you move some code around. If, through your unit test or otherwise, you figure out that there's some misbehavior inside the function, then you can step in with a debugger and confirm each statement one by one to find where the issue is.
To get into the right mindset, try Test-Driven Development, in which you write your tests first, essentially deciding how the function should behave, and then implement the function internals, all the while being able to test that your implementation conforms to the expectation.
Basically, you want your function to behave consistently, so its results should be predictable. You don't want to test the internal state of your function, but rather its output.
import pytest

def my_calculus(divide, multiply):
    def inner(value):
        return value * multiply / divide
    return inner

calculus = my_calculus(2, 4)

@pytest.mark.parametrize("function,value,expected", [
    (calculus, 2, 4),
    (calculus, 5, 10),
])
def test_my_calculus(function, value, expected):
    assert function(value) == expected

Does it make sense to write unit tests that are just testing if functions are called?

I have the following code:
def task_completed(task):
    _update_status(task, TaskStatus.COMPLETED)
    successful_task(task)

def task_pending(task):
    _update_status(task, TaskStatus.PENDING)
    successful_task(task)

def task_canceled(task):
    _update_status(task, TaskStatus.CANCELED)
    process_task(task)

def successful_task(task):
    process_task(task)
    send_notification(task)

def process_task(task):
    assign_user(task)
    notify_user(task)
    cleanup(task)

def _update_status(task, status):
    task.status = status
    task.save(update_fields=['status'])
I have written the following tests:
def test_task_completed(mocker, task):
    mock_successful_task = mocker.patch('services.successful_task')
    task_completed(task)
    assert task.status == TaskStatus.COMPLETED
    mock_successful_task.assert_called_once_with(task)

def test_task_pending(mocker, task):
    mock_successful_task = mocker.patch('services.successful_task')
    task_pending(task)
    assert task.status == TaskStatus.PENDING
    mock_successful_task.assert_called_once_with(task)

def test_task_canceled(mocker, task):
    mock_process_task = mocker.patch('services.process_task')
    task_canceled(task)
    assert task.status == TaskStatus.CANCELED
    mock_process_task.assert_called_once_with(task)

def test_successful_task(mocker, task):
    mock_process_task = mocker.patch('services.process_task')
    mock_send_notification = mocker.patch('notifications.send_notification')
    successful_task(task)
    mock_process_task.assert_called_once_with(task)
    mock_send_notification.assert_called_once_with(task)

def test_process_task(mocker, task):
    mock_assign_user = mocker.patch('users.assign_user')
    mock_notify_user = mocker.patch('notifications.notify_user')
    mock_cleanup = mocker.patch('utils.cleanup')
    process_task(task)
    mock_assign_user.assert_called_once_with(task)
    mock_notify_user.assert_called_once_with(task)
    mock_cleanup.assert_called_once_with(task)
As you can see some tests like test_successful_task and test_process_task are just testing if specific functions are called.
But does it make sense to write tests for this, or am I misunderstanding something and my unit tests are just bad? I don't know how else to test these functions.
In my experience, tests like these are very brittle because they depend on implementation details. A unit test should only be concerned with the results of the method being tested. Ideally, this means asserting against the return value. If there are side effects, you can assert those instead. But then you should probably look at those side effects and find a different solution that doesn't require them.
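For instance, a minimal sketch that asserts the question's own observable side effect rather than the call graph (the task fixture is assumed, as in the question's tests):

def test_task_completed_sets_status(task):
    # assert the visible outcome (the status change), not which helpers ran
    task_completed(task)
    assert task.status == TaskStatus.COMPLETED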
I would say no, they're not useful.
Unit tests are supposed to test functionality: here's some input, I call this, here's my result, does it match what I expect? Things need to be clear and verifiable.
When you have a test which verifies a method has been called, what do you really have?
Pure uncertainty. OK, a thing has been called, but how is that useful? You are not verifying a result; the method you're calling can do a million things and you have no idea what it does.
Code calling a method is an implementation detail and your unit tests are not supposed to have that kind of knowledge.
Why do we write unit tests?
- to check functionality
- to help refactoring
If you need to change your tests every time your code changes then you haven't really accomplished one of the main reasons for unit testing.
If your code changes and that method is not called anymore, then what?
You now have to go and change the test, but change it to what? If your next move is to remove the test, then why did you have it in the first place?
What if someone else has to deal with this issue six months down the road? There is no documentation to check to see why there is a test verifying that a method has been called.
Bottom line, a test like this has zero value and all it does is introduce uncertainty.
White-box tests can be useful to detect a regression or to assert that a specific action is taken.
For example, you can verify that you don't interact with your DB in a particular case, or that you've correctly called the notification service.
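A minimal sketch of such a check, reusing the question's functions (the 'notifications.send_notification' patch target comes from the question's own tests):

def test_task_completed_calls_notification_service(mocker, task):
    mock_send = mocker.patch('notifications.send_notification')
    task_completed(task)
    mock_send.assert_called_once_with(task)  # the notification service was hit exactly once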
However, the drawback is that you're likely to change the test whenever you change the code, because the test is tightly tied to the implementation.
This can be painful when you are refactoring, because you also need to refactor the test. You could forget an assertion or a step and create a false-positive test that hides a regression.
I would use this kind of test only if it makes sense and you need to assert what's going on in detail.
Search the web for "TDD: London vs Detroit"; you'll find interesting material on this trade-off.
This is not exactly the purpose of unit tests, though it does have uses. Unit tests aim to improve the quality of the code by testing functionality and results; it would be more beneficial to write unit tests that test the functionality of each method called.
With that being said, if you have one function that calls 4 other functions and you want to check if they actually get executed in your main block of code, then this makes sense. But you should definitely be writing unit tests for your submethods as well.
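For instance, a minimal sketch of a direct test for the question's _update_status helper (the import paths are assumptions):

def test_update_status(mocker):
    task = mocker.Mock()
    _update_status(task, TaskStatus.COMPLETED)
    assert task.status == TaskStatus.COMPLETED
    task.save.assert_called_once_with(update_fields=['status'])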
Yes, it makes sense. However, I would look at unittest.mock.Mock.assert_called_with
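For reference, a tiny self-contained example of how that assertion behaves:

from unittest.mock import Mock

notify = Mock()
notify('task-1', status='done')
notify.assert_called_with('task-1', status='done')  # passes silently
# notify.assert_called_with('task-2')               # would raise AssertionError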
In my experience, yes.
When you design a test, you know that you have to deal with four elements:
- pre-conditions (context)
- inputs
- outputs
- post-conditions (side effects)
We all agree that it would be easier to test and code if the behavior of the function under test depended only on inputs and outputs, but in some cases this cannot happen, especially when your logic deals with I/O and/or its goal is to issue a mutation of the application state. This means that your test has to be aware of post-conditions. But what can assure us that a post-condition is met?
Take this method:
public class UserService
{
    public void addUser(User toAdd)
}
This method adds a user to the database; put more elegantly, it adds the User to a collection, which is abstracted behind the repository semantics. So the side effect of the method is that userRepository.save(User user) is called. You can mock that method and expect that it has been called once with the given argument, or make the test fail.
Obviously this can be achieved only if the method has been mocked, so the test is not influenced by the behavior of a unit that is not under test.
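A minimal Python sketch of the same idea, assuming a service that takes the repository as an injected dependency:

from unittest.mock import Mock

class UserService:
    def __init__(self, user_repository):
        self._repo = user_repository

    def add_user(self, user):
        # the side effect under test: persistence is delegated to the repository
        self._repo.save(user)

def test_add_user_saves_to_repository():
    repo = Mock()
    UserService(repo).add_user('alice')
    repo.save.assert_called_once_with('alice')  # post-condition is met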
I recognize that the drawback is that it makes the test brittle, but at the same time:
- in TDD, it makes tests fail when they don't call the mocked function, so the test states "hey, addUser relies upon UserRepository.save()!", if you're into this style
- the test will break if the dependent function's interface changes, but we don't want to change interfaces often, right?
- before adding dependencies to your method you will think twice; this is a hint to write cleaner code

Python Method Signature for Different Runtime Execution Data

Could someone tell me whether this idea is feasible in Python?
I want to have a method where the datatypes in the signature are not fixed.
For example:
Foo(data1, data2) <-- Method Definition in Code
Foo(2,3) <---- Example of what would be executed in runtime
Foo(s,t) <---- Example of what would be executed in runtime
I know the code would work if I changed Foo(s,t) to Foo("s","t"), but I am trying to make the code smart enough to recognize the command without the quotes...
singledispatch might be an answer: it transforms a function into a generic function that can have different behaviors depending on the type of its first argument.
You can see a concrete example at the link above. You'll need to do extra work if you want generic dispatch on more than one argument.
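A small self-contained sketch of singledispatch (the foo function and its overloads are illustrative, not from the question):

from functools import singledispatch

@singledispatch
def foo(data1, data2):
    # fallback for unregistered types
    raise NotImplementedError(f"unsupported type: {type(data1).__name__}")

@foo.register
def _(data1: int, data2):
    # chosen when the first argument is an int
    return data1 + data2

@foo.register
def _(data1: str, data2):
    # chosen when the first argument is a str
    return f"{data1},{data2}"

print(foo(2, 3))      # 5
print(foo("s", "t"))  # s,t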

simplest way of parameterizing tests in python?

I have a library with a bunch of different objects that have similar expected behavior, so I want to run similar, but not necessarily identical, tests on them.
To be specific lets say I have some sorting functions, and a test for checking if a sorting function actually sorts. Some sorting functions are intended to work on some inputs, but not others.
I'd like to write something close to the code below. However, nose won't do a good job of telling me exactly where the tests failed and with what inputs. If check_sort fails for sort2 on case2, I won't be able to see that.
def check_sort(sortalg, vals):
    assert sortalg(vals) == sort(vals)

def test_sorts():
    case1 = [1, 3, 2]
    case2 = [2, 31, 1]
    check_sort(sort1, case1)
    for c in [case1, case2]:
        check_sort(sort2, c)
I would like to be able to simply add a decorator to check_sort to tell nose that it's a nested test. Something like
@nested_test
def check_sort(sortalg, vals):
    assert sortalg(vals) == sort(vals)
The idea being that when it gets called, it registers itself with nose and will report its inputs if it fails.
It looks like pytest provides pytest.mark.parametrize, but that seems rather awkward to me: I have to put all my arguments above the function in one spot, so I can't call it repeatedly in my tests. I also don't think it supports nesting more than one level.
Nose also provides test generators, which seem closer, but still not as clear as I would hope.
Using the provided generative-test feature is likely the intended way to do this with nose and py.test.
That said, you can dynamically add functions (tests) to a class after it has been created. That is the technique used by the Lib/test/test_decimal.py code in the standard library.
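A minimal sketch of that dynamic technique; sorted() stands in for the question's hypothetical sort1/sort2:

import unittest

class TestSorts(unittest.TestCase):
    pass

cases = {'case1': [1, 3, 2], 'case2': [2, 31, 1]}

def make_test(sortalg, vals):
    def test(self):
        self.assertEqual(sortalg(vals), sorted(vals))
    return test

# one named test per case, so a failure reports exactly which input broke
for name, vals in cases.items():
    setattr(TestSorts, f'test_sort_{name}', make_test(sorted, vals))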
You can use nose-ittr; it's a nose extension for supporting parametrized testing.
example:
@ittr(case1=[1, 3, 2], case2=[2, 31, 1])
def test_sorts(self):
    check_sort(sort1, self.case1)
    check_sort(sort2, self.case2)
