Python Method Signature for Different Runtime Execution Data

Could someone tell me whether this idea is feasible in Python?
I want to have a method whose signature does not fix the data types of its parameters.
For example:
Foo(data1, data2) <-- Method Definition in Code
Foo(2,3) <---- Example of what would be executed in runtime
Foo(s,t) <---- Example of what would be executed in runtime
I know the code would work if I changed Foo(s,t) to Foo("s","t"). But I am trying to make the code smart enough to recognize the command without the quotes ...

singledispatch might be an answer: it transforms a function into a generic function, which can have different behaviors depending upon the type of its first argument.
You can find a concrete example in the functools documentation. Note that you have to do extra work if you want generic dispatch on more than one argument.
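A minimal sketch of the singledispatch idea, using a lowercase foo stand-in for the Foo from the question (the registered behaviors are assumptions for illustration):

```python
from functools import singledispatch

@singledispatch
def foo(data1, data2):
    # fallback for types without a registered implementation
    return f"default: {data1!r}, {data2!r}"

@foo.register
def _(data1: int, data2):
    # integers: add them
    return data1 + data2

@foo.register
def _(data1: str, data2):
    # strings: join them
    return f"{data1}-{data2}"

print(foo(2, 3))      # 5
print(foo("s", "t"))  # s-t
```

Note that dispatch happens only on the type of the first argument; for dispatch on several arguments you would need manual isinstance checks or a third-party multiple-dispatch library.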

Related

Assert String Upper is Called

How do I write a test to validate whether a string manipulation was called or not? In this specific situation I'm trying to test that upper was called at least once. Since this is a Python built-in method it's a little different, and I can't wrap my head around it.
# my function that returns an uppercase string example
def my_upper(str_to_upper: str) -> str:
    return str(str_to_upper).upper()

# my test that should determine that .upper() was called
def test_my_upper():
    # i assume I need some kind of mock here?
    my_upper('a')
    assert upper.call_count == 1
Update: I need to know if the core implementation of a very large product has changed. If I implement string manipulation and another dev comes in and changes how it works I want tests to immediately let me know so I can verify the implementation they added works or not.
Another update: here's what I've tried. It's complaining it can't find library 'str'.
from mock import patch

@patch("str.upper")
def test_my_upper(mock_upper):
    my_upper('a')
    assert mock_upper.call_count == 1
Answer for the main question: you can use pytest spy
Reference: https://pypi.org/project/pytest-mock/
def test_spy_method(mocker):
    spy = mocker.spy(str, 'upper')
    my_upper('a')
    assert spy.call_count == 1  # richer assertions such as spy.assert_called_once_with('a') also work
Answer for the update: it depends entirely on the implemented tests. If you have plenty of tests covering each of its behaviors, you MIGHT catch a change; but if the change keeps the test results the same, the tests will continue to pass. Nitpick: tests aren't really for this kind of concern, but for checking the correct behavior of a piece of software. Thus, if the code changes and the regression tests continue to pass within the coverage threshold, that shouldn't be a problem.
Answer for the other update: indeed, str isn't an imported object, at least not in the way you're used to. From what I understood of the question, you want to know about calls to a given method of str, and this use case fits spy perfectly. Another point: you don't need to create a wrapper around a method just to get something testable; the code should live apart from the tests.
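Since CPython does not allow patching attributes of built-in types (which is why patch("str.upper") complains), one hedged alternative sketch is to make the string operation injectable; the upper= parameter below is an assumption added for testability, not part of the original code:

```python
from unittest import mock

def my_upper(str_to_upper, upper=str.upper):
    # the real call goes through the injectable 'upper' parameter
    return upper(str(str_to_upper))

def test_my_upper_calls_upper():
    spy = mock.Mock(wraps=str.upper)  # delegates to the real str.upper
    assert my_upper('a', upper=spy) == 'A'
    spy.assert_called_once_with('a')

test_my_upper_calls_upper()
```

Production callers use the default and never see the extra parameter; only the test passes a spy in.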

How to get a GObject.Callback from PyGObject?

I'm trying to add callback functions to a Gtk.Builder using Gtk.Builder.add_callback_symbol. I tried to pass a Python function to it, but that does not work. The documentation says I need to pass a GObject.Callback instead, so I tried to create one by calling GObject.Callback(myfunc), but got a NotImplementedError. The C documentation on GCallback says I need to use something called G_CALLBACK to typecast. But there does not seem to be any reference to this in PyGObject, and I'm lost at that point.
I would like to say beforehand, that I know callback can be also added by using 'Gtk.Builder.connect_signals', but that's not the question here.
The GObject.Callback function is just there for documentation purposes at the moment. You can simply pass a Python function matching the signature of the callback type, in this case a function which takes no arguments and returns no value.

How to ensure there are no run time errors in python

I am quite new to Python programming and working on a quite large Python module using IntelliJ. Suppose I have a static method that is called from multiple places, and I change its signature to accept a different number of arguments. I might fix the actual method calls in some places but miss changing the calls in others. In Java I would get a compile-time error, but since Python is interpreted I will only find out at runtime (probably in production). Until now I have been using the command 'python -m compileall', but I was wondering whether, as in Java, there is any way to get such errors flagged in IntelliJ.
Unit tests and static code analysis tools such as pylint will help. pylint is able to detect incorrect number of arguments being passed to functions.
If you're using Python 3, function annotations might be useful, and mypy can type check annotated function calls (apparently, I've not used it).
In general the strategy for changing existing function signatures without breaking dependent code is to use keyword arguments. For example, if you wanted to add a new argument to a function, add it as a keyword argument:
# def f(a):
#     """Original function"""
#     print(a)

def f(a, b=None):
    """New and improved function"""
    print(a)
    if b is not None:
        print(b)
Now calls with and without the new argument will work:
>>> f('blah')
blah
>>> f('blah', 'cough')
blah
cough
Of course this will not always work, e.g. if argument(s) are removed, or if the semantics of the function are changed in a way that breaks existing code.
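A related safeguard, sketched here with hypothetical names, is to make new parameters keyword-only with a bare *; any call site that passes them positionally then fails loudly instead of silently binding a value to the wrong parameter:

```python
def resize(image, *, width=None, height=None):
    # width and height must be passed by keyword
    return (image, width, height)

print(resize("img.png", width=100))  # ('img.png', 100, None)

try:
    resize("img.png", 100)  # positional use of a keyword-only argument
except TypeError as exc:
    print("caught:", exc)
```

This turns a class of silent argument-shifting bugs into immediate TypeErrors, which pylint and mypy can also flag statically.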

Hashing a Python function

def funcA(x):
return x
Is funcA.__code__.__hash__() a suitable way to check whether funcA has changed?
I know that funcA.__hash__() won't work, as it is just id(funcA) / 16. I checked, and this isn't true for __code__.__hash__(). I also tested the behaviour in an IPython terminal and it seemed to hold. But is this guaranteed to work?
Why
I would like to have a way of comparing an old version of function to a new version of the same function.
I'm trying to create a decorator for disk-based/long-term caching. Thus I need a way to identify if a function has changed. I also need to look at the call graph to check that none of the called functions have changed but that is not part of this question.
Requirements:
Needs to be stable over multiple calls and machines. [1] says that in Python 3.3 hash() is randomized on each start of a new instance, although it also says that "HASH RANDOMIZATION IS DISABLED BY DEFAULT". Ideally, I'd like a function that is stable even with randomization enabled.
Ideally, it would yield the same hash for def funcA(): pass and def funcB(): pass, i.e. when only the name of the function changes. Probably not necessary.
I only care about Python 3.
One alternative would be to hash the text inside the file that contains the given function.
Yes, it seems that func_a.__code__.__hash__() is specific to the functionality of the code. I could not find where __code__.__hash__() is implemented or defined.
A better way would be to use func_a.__code__.co_code.__hash__(), because co_code holds the bytecode as a bytes string. Note that in this case the function name is not part of the hash, and two functions with the same code but the names func_a and func_b will have the same hash.
hash(func_a.__code__.co_code)
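A sketch of this idea using hashlib, so the result is stable across interpreter restarts and machines (unlike built-in hash(), which is randomized for bytes and str). Caveat: co_code ignores co_consts, default values, and closures, so two functions that differ only in a literal constant can collide.

```python
import hashlib

def code_fingerprint(func):
    """Hex digest of a function's raw bytecode (name-independent)."""
    return hashlib.sha256(func.__code__.co_code).hexdigest()

def func_a(x):
    return x

def func_b(x):
    return x

def func_c(x):
    return x + 1

assert code_fingerprint(func_a) == code_fingerprint(func_b)  # name ignored
assert code_fingerprint(func_a) != code_fingerprint(func_c)  # body differs
```

For a disk-based cache key you would likely want to fold in co_consts and co_names as well, to avoid the constant-only collisions noted above.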

Functions with dependencies passed as parameters

I'm working on a project where I'm batch generating XML files which can import to the IDE of an industrial touchscreen.
Each XML file represents a screen, and most screens require the same functions and the process for dealing with them is the same, with the exception of the fact that each screen type has a unique configuration function.
I'm using a ScreenType class to hold attributes specific to a screen type, so I decided to write a unique configuration function for each type and pass it as a parameter to the __init__() of this class. This way, when I pass my ScreenType around as needed, its configuration function stays bundled with it and can be used whenever needed.
But I'm not sure what will happen if my configuration function itself has a dependency. For example:
def configure_inputdiag(a, b, c):
    numerical_formatting = get_numerics(a)
    # ...
    return configured_object
Then, when it comes time to create an instance of a ScreenType
myscreentype = ScreenType(foo, man, shoe, configure_inputdiag)
get_numerics is a module scoped function, but myscreentype could (and does) get passed within other modules.
Does this create a problem with dependencies? I'd try to test it myself, but it seems like I don't have a fundamental understanding behind what's going on when I pass a function as a parameter. I don't want to draw incorrect conclusions about what's happening.
What I've tried: Googling, Search SO, and I didn't find anything specifically for Python.
Thanks in advance.
There's no problem.
The function configure_inputdiag will always refer to get_numerics in the context where it was defined. So, even if you call configure_inputdiag from some other module which knows nothing about get_numerics, it will work fine.
Passing a function as a parameter produces a reference to that function. Through that reference, you can call the function as if you had called it by name, without actually knowing the name (or the module from which it came). The reference is valid for the lifetime of the program, and will always refer to the same function. If you store the function reference, it basically becomes a different name for the same function.
What you are trying to do works in a very natural form in Python.
In the example above, you don't need to have the get_numerics function imported in the namespace (module) where configure_inputdiag lives; you just pass it as a normal parameter (say, call it function), like in this example:
Module A:
def get_numerics(parm):
    ...

input_diag = module_B.configure_inputdiag(get_numerics, a)
Module B:
def configure_inputdiag(function, parm):
    result = function(parm)
Oh, I see your doubt was the other way around. Anyway, there is no problem: in Python, functions are first-class objects, just like ints and strings, and they can be passed around as parameters to functions in other modules as you wish. I think the example above clarifies that.
get_numerics is resolved in the scope of the function body, so it does not also need to be in the scope of the caller.
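A runnable single-file sketch of the two-module setup above (the module boundary is collapsed here for brevity): the passed-in function keeps resolving its helpers from its own defining scope, so the receiver never needs to know about them.

```python
def get_numerics(value):
    # "module A": helper the callback depends on
    return [ch for ch in str(value) if ch.isdigit()]

def configure_inputdiag(function, value):
    # "module B": knows nothing about get_numerics,
    # only the reference it was handed
    return function(value)

print(configure_inputdiag(get_numerics, "a1b2c3"))  # ['1', '2', '3']
```

The same holds when configure_inputdiag itself calls module-level helpers: globals are looked up in the module where the function was defined, not where it is called.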
