AssertIsNotNone Odd Behavior - python

In Django I am writing a test that checks whether or not a value is null.
This seems like a pretty standard procedure, but for some reason when I pass a value to the method
assert self.assertIsNotNone(foo)
even when the value is most certainly not None, the assertion still fails.
Another odd thing is that the following if block passes even though it has the same intended behavior as the assertIsNotNone function.
foo = Load.objects.all().first()
if foo is not None:
    print("Passed")
For context, the foo variable in the first example is an instance of a Django model. And I repeat, the object is most definitely not None.
Does anybody have any idea what would be causing something like this?
And if my understanding of the function is incorrect please let me know.
Full code:
foo = Load.objects.all().first()
if foo is not None:  # passes
    print("Passed")
assert self.assertIsNotNone(foo)  # fails
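A minimal sketch of the behavior described above, assuming the standard unittest API: the assert* methods return None when they pass, so wrapping one in a bare assert always fails.

import unittest

class Demo(unittest.TestCase):
    def test_foo(self):
        foo = object()                      # clearly not None
        result = self.assertIsNotNone(foo)  # the assertion itself passes...
        print(result)                       # ...but it returns None
        # assert result                     # would raise AssertionError,
        #                                   # because `assert None` is falsy
        self.assertIsNotNone(foo)           # use the assertion on its own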

Related

Testing for None on a non Optional input parameter

Let's say I have a python module with the following function:
def is_plontaria(plon: str) -> bool:
    if plon is None:
        raise RuntimeError("None found")
    return plon.find("plontaria") != -1
For that function, I have the unit test that follows:
def test_is_plontaria_null(self):
    with self.assertRaises(RuntimeError) as cmgr:
        is_plontaria(None)
    self.assertEqual(str(cmgr.exception), "None found")
Given the type hints in the function, the input parameter should always be a defined string. But type hints are... hints. Nothing prevents the user from passing whatever it wants, and None in particular is a quite common option when previous operations fail to return the expected results and those results are not checked.
So I decided to test for None in the unit tests and to check the input is not None in the function.
The issue is: the type checker (pylance) warns me that I should not use None in that call:
Argument of type "None" cannot be assigned to parameter "plon" of type "str" in function "is_plontaria"
Type "None" cannot be assigned to type "str"
Well, I already know that, and that is the purpose of that test.
Which is the best way to get rid of that error? Telling pylance to ignore this kind of error in every test/file? Or assuming that the argument passed will be always of the proper type and remove that test and the None check in the function?
This is a good question. I think that silencing that type error in your test is not the right way to go.
Don't patronize the user
While I would not go so far as to say that this is universally the right way to do it, in this case I would definitely recommend getting rid of your None check from is_plontaria.
Think about what you accomplish with this check. Say a user calls is_plontaria(None) even though you annotated it with str. Without the check he causes an AttributeError: 'NoneType' object has no attribute 'find' with a traceback to the line return plon.find("plontaria") != -1. The user thinks to himself "oops, that function expects a str". With your check he causes a RuntimeError ideally telling him that plon is supposed to be a str.
What purpose did the check serve? I would argue none whatsoever. Either way, an error is raised because your function was misused.
What if the user passes a float accidentally? Or a bool? Or literally anything other than a str? Do you want to hold the user's hand for every parameter of every function you write?
And I don't buy the "None is a special case"-argument. Sure, it is a common type to be "lying around" in code, but that is still on the user, as you pointed out yourself.
If you are using properly type annotated code (as you should) and the user is too, such a situation should never happen. Say the user has another function foo that he wants to use like this:
def foo() -> str | None:
    ...

s = foo()
b = is_plontaria(s)
That last line should cause any static type checker worth its salt to raise an error, saying that is_plontaria only accepts str, but a union of str and None was provided. Even most IDEs mark that line as problematic.
The user should see that before he even runs his code. Then he is forced to rethink and either change foo or introduce his own type check before calling your function:
s = foo()
if isinstance(s, str):
    b = is_plontaria(s)
else:
    # do something else
    ...
Qualifier
To be fair, there are situations where error messages are very obscure and don't properly tell the caller what went wrong. In those cases it may be useful to introduce your own. But aside from those, I would always argue in the spirit of Python that the user should be considered mature enough to do his own homework. And if he doesn't, that is on him, not you. (So long as you did your homework.)
There may be other situations, where raising your own type-errors makes sense, but I would consider those to be the exception.
If you must, use Mock
As a little bonus, in case you absolutely do want to keep that check in place and need to cover that if-branch in your test, you can simply pass a Mock as an argument, provided your if-statement is adjusted to check for anything other than str:
from unittest import TestCase
from unittest.mock import Mock


def is_plontaria(plon: str) -> bool:
    if not isinstance(plon, str):
        raise RuntimeError("None found")
    return plon.find("plontaria") != -1


class Test(TestCase):
    def test_is_plontaria(self) -> None:
        not_a_string = Mock()
        with self.assertRaises(RuntimeError):
            is_plontaria(not_a_string)

    ...
Most type checkers consider Mock to be a special case and don't complain about its type, assuming you are running tests. mypy for example is perfectly happy with such code.
This comes in handy in other situations as well. For example, when the function being tested expects an instance of some custom class of yours as its argument. You obviously want to isolate the function from that class, so you can just pass a mock to it that way. The type checker won't mind.
Hope this helps.
You can disable type checking on a specific line with a comment.
def test_is_plontaria_null(self):
    with self.assertRaises(RuntimeError) as cmgr:
        is_plontaria(None)  # type: ignore
    self.assertEqual(str(cmgr.exception), "None found")

best practice for multiple return type from function

I am just starting with Python and have one question: is it a good idea to design a function that returns values of more than one type? I read some information on the site and totally understand that it is better to raise an exception when an error is encountered or a precondition is unsatisfied. But what if there is no error, just multiple possible return types? It is a dummy function, but with the multi_value function I do not need to write something like multi_value()[0] when I need to access the value returned from the function.
Refer: https://docs.quantifiedcode.com/python-anti-patterns/maintainability/returning_more_than_one_variable_type_from_function_call.html
from typing import Union

def multi_value(para: Union[list, int]):
    return para[0] if len(para) == 1 else para

def fun(para: Union[list, int]):
    return para

print(type(multi_value([1, 2, 3])))  # --> <class 'list'>
print(type(multi_value(['1'])))      # --> <class 'str'>
print(type(multi_value([1])))        # --> <class 'int'>
The idea of not having different return types is to make your functions easier to use. If the caller has to check the return type, he ends up with unnecessarily complicated, hard-to-read, error-prone code. Even worse: the caller might not do the check at all and then gets caught by surprise. You show a nice example: returning a list or a scalar value. If you do as shown, the caller has to write something like
res = multi_value(x)
try:
    for i in res:
        do_something_with_res(i)
except TypeError:
    do_something_with_res(res)
Given that your function really does not throw anything, this would all collapse to
for i in multi_value(x):
    do_something_with_res(i)
if you returned single (or no) values as lists, too. The advantage should be obvious. You may think you are doing the caller a favor - but that is just not true.
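For illustration, a minimal sketch (assuming the caller only ever wants to iterate) of the variant that always returns a list, so no caller needs a type check:

from typing import Union

def multi_value(para: Union[list, int]) -> list:
    # Always hand back a list, even for a single scalar value,
    # so every caller can simply iterate over the result.
    if isinstance(para, list):
        return para
    return [para]

for i in multi_value([1, 2, 3]):
    do_something_with_res(i)
for i in multi_value(5):
    do_something_with_res(i)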
A remark on the linked article: I think it gives a sub-optimal example on the matter. The example is more about returning error codes vs. raising exceptions, which is a little different.

Python Mocking assert_called not working

I am able to successfully mock a function, and I am sure that the original is not called. I added a huge print statement to the original function: when I mock it, this print is not called; when I remove the mock, it is.
However, my assert_called is failing saying it was never called. Has anyone ever experienced something like this?
class FooTestCase(unittest.TestCase):
    @mock.patch('MyObj.helper_function')
    def test_simple(self, mock_hf):
        my_obj = MyObj()
        # internally, this class imports HelperModule
        # and the method calls helper_function
        my_obj.do_something()
        mock_hf.helper_function.assert_called()
        return
My error response
AssertionError: Expected 'helper_function' to have been called.
Update
I just added the following lines right before the assertion
print mock_cw.method_calls
print mock_cw.mock_calls
method_calls is an empty list, while mock_calls is a list with 1 item which is
[call(arg1_expected_for_helper_fn, arg2_expected_for_helper_fn)]
Yet the assert still fails
Usually an error like this is a result of not patching the correct location. Try to patch the object itself with this:
@patch.object(MyObj, "helper_function")
def test_simple(self, mock_hf):
    ...
Since MyObj is (assumed to be) imported at the top of the test file, this patches the method on that object directly.
The issue is that I was checking to see if mock_hf.helper_function was called, but mock_hf is already mapped to the helper_function. I was more or less checking that helper_function.helper_function was called rather than just helper_function.
The assert line needed to be
mock_hf.assert_called()
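For completeness, a minimal sketch of the corrected test. The import paths here (my_package, helper_module) are assumptions; the patch target has to name the module where MyObj actually looks helper_function up:

import unittest
from unittest import mock

from my_package import MyObj  # hypothetical import; adjust to your layout

class FooTestCase(unittest.TestCase):
    # Patch where the name is looked up at call time, not where it is defined.
    @mock.patch('my_package.helper_module.helper_function')
    def test_simple(self, mock_hf):
        my_obj = MyObj()
        my_obj.do_something()
        # mock_hf already *is* the patched helper_function, so assert on it directly.
        mock_hf.assert_called()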
I see the original poster has done this, but for anyone else stumbling on this as I did...
Don't forget you need to wrap your expected calls in a call object e.g.
mock_logger.assert_has_calls([call(expected_log_message_1), call(expected_log_message_2)])
If you don't do that, it will complain that the expected call did not happen and you will spend ages comparing the output to try and work out why (as I did!).
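In other words, something like this (mock_logger and the messages are placeholders from the example above):

from unittest.mock import call

mock_logger.assert_has_calls([
    call(expected_log_message_1),
    call(expected_log_message_2),
])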

Comparison of 2 code chunks - what's the difference?

So, I was working on some code trying to resolve a bug. This was the original chunk of code:
passrate = 90
for child in sorted_children:
    if child.passrate >= passrate:
        return child
return None
This code was buggy and this is its fix:
passrate = 90
for child in sorted_children:
    if child.passrate() >= passrate:
        return child
return None
The only difference is the added parentheses. So, child is a class instance and passrate() is its method, which lazy-loads and returns its __passrate value. If it's not calculated yet, it calculates it before returning it.
When I used the debugger to see what was causing the problem, I saw that sometimes when passrate() was executing it was like code execution somehow ended up in a completely wrong instance of child's class.
I know that without the parentheses a pointer to the function is returned, but since it's used inside a comparison, I thought the function would be executed immediately afterwards, so the final result should be the same for both chunks of code. And sometimes it indeed was. But sometimes it wasn't for some reason, always at the same iterated child in every execution of the code.
If someone could explain what could have caused the problem, I'd appreciate it very much.
EDIT:
Thanks everyone for helping. The old code was clearly wrong. I have no idea how it worked at all in the past.
I think, as per Python's rules, if it's a method then it should be called with parentheses. If it's a property then you can access it without parentheses, as below:
class Hello(object):
    @property
    def hi(self):
        print "hello"

    def hifunc(self):
        print "Hi function"

h = Hello()
print h.hi
print h.hifunc
print h.hifunc()
Output:
hello
None
<bound method Hello.hifunc of <__main__.Hello object at 0x0000000002B99358>>
Hi function
None
None is printed because my example functions return nothing. In your case, when you call with parentheses, the function's return value is used in the comparison; without them, the bound method object itself is compared.
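A minimal sketch of that difference (Python 3 shown; the Child class below is a stand-in for the one in the question):

class Child:
    def passrate(self):
        # stand-in for the lazy-loading method from the question
        return 95

c = Child()
print(c.passrate())   # 95 -> an int that can be compared with 90
print(c.passrate)     # <bound method Child.passrate of ...> -> no call made

# c.passrate >= 90
# In Python 3 this line raises TypeError ('>=' not supported between 'method'
# and 'int'). CPython 2 instead fell back to an arbitrary, implementation-defined
# ordering for mixed types, so the comparison silently produced a meaningless
# but repeatable result - which is why the old code only "worked" sometimes.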

Check if there's something "waiting for" the return value of a function

I'm wondering if anyone can think up a way to check if a function needs to return a meaningful value in Python. That is, to check whether the return value will be used for anything. I'm guessing the answer is no, and it is better to restructure my program flow. The function in question pulls its return values from a network socket. If the return value is not going to get used, I don't want to waste the resources fetching the result.
I already tried using tracebacks to discover the calling line, but that didn't work. Here's an example of what I had in mind:
>>> def func():
...     print should_return()
...
>>> func()
False
>>> ret = func()
True
The function "knows" that its return value is being assigned.
Here is my current workaround:
>>> def func(**kwargs):
...     should_return = kwargs.pop('_wait', False)
...     print should_return
...
>>> func()
False
>>> ret = func(_wait=True)
True
The very second line of the body of import this says it all: "explicit is better than implicit". In this case, if you provide an optional argument, the code will be more obvious (and thus easier to understand), simpler, faster and safer. Keep it as a separate argument with a name like wait.
While with difficulty you could implement it magically, it would be nasty code, prone to breaking in new versions of Python and not obvious. Avoid that route; there lieth the path unto madness.
All functions return a value when they complete.
If you're asking whether they will return at all, then you are actually asking about the Halting Problem.
One approach might be to return an object with a __del__ method that relies on the garbage collector removing the unused value some time in the future.
Note that it won't happen immediately; it might not even happen at all :)
You might consider returning a future, or 'promise'. That is, return another function that, when executed, performs the necessary work to actually determine the result. I seem to be thinking that you want lazy evaluation, which is "evaluate only what you need" (more or less), rather than your question, which confusingly asks: "Evaluate only if it returns a value, which might be needed".
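A minimal sketch of that idea; fetch_from_socket here is a hypothetical stand-in for the expensive network read:

def func():
    def result():
        # The expensive read happens only if the caller actually wants the value.
        return fetch_from_socket()  # hypothetical helper, not from the question
    return result

promise = func()     # nothing has been fetched yet
value = promise()    # the fetch happens here, and only if this line runs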
I have some code that kind of works using the inspect module, but it might be prone to breaking, as others mention.
inspect.stack()[1].frame.f_code.co_names[-1]
will hold the function name when the caller didn't assign the return value to anything; when the caller assigns it to a variable named XXX, it will hold XXX instead. The code compares this against the function name to decide whether the caller assigned the return value to any variable.
import inspect

def func():
    tmp = inspect.stack()[1].frame.f_code.co_names[-1]
    should_return = tmp != 'func'
    print("execute")  # execute something
    # if should_return is False, end here without fetching the result
    if should_return:
        print("fetching result, user assign to {}".format(tmp))
        # fetch the result and return it here
>>> func()
execute
>>>
>>> xxx=func()
execute
fetching result, user assign to xxx
>>>
All functions in Python always return. If you don't explicitly return, functions return None.
===========
def func():
    while True:
        pass
This function does not return.
There is no way of determining whether an arbitrary function will return. If you could, you would have solved the halting problem.
