What's the point of def some_method(param: int) syntax?

Specifically the ":int" part...
I assumed it somehow checked the type of the parameter at the time the function is called and perhaps raised an exception in the case of a violation. But the following run without problems:
def some_method(param: str):
    print("blah")

some_method(1)

def some_method(param: int):
    print("blah")

some_method("asdfaslkj")
In both cases "blah" is printed - no exception raised.
I'm not sure what the name of the feature is so I wasn't sure what to google.
EDIT: OK, so it's http://www.python.org/dev/peps/pep-3107/. I can see how it'd be useful in frameworks that utilize metadata. It's not what I assumed it was. Thanks for the responses!
FOLLOW-UP QUESTION - Any thoughts on whether it's a good idea or bad idea to define my functions as def some_method(param:int) if I really only can handle int inputs - even if, as pep 3107 explains, it's just metadata - no enforcement as I originally assumed? At least the consumers of the methods will see clearly what I intended. It's an alternative to documentation. Think this is good/bad/waste of time? Granted, good parameter naming (unlike my contrived example) usually makes it clear what types are meant to be passed in.

It's not used for anything much - it's just there for experimentation (you can read the annotations from within Python if you want, for example). They are called "function annotations" and are described in PEP 3107.
I wrote a library that builds on them to do things like type checking (and more - for example, you can map more easily from JSON to Python objects) called pytyp (more info), but it's not very popular... (I should also add that the type-checking part of pytyp is not at all efficient - it can be useful for tracking down a bug, but you wouldn't want to use it across an entire program.)
[Update: I would not recommend using function annotations in general (i.e. with no particular use in mind, just as docs), because (1) they might eventually get used in a way that you didn't expect, and (2) the exact type of things is often not that important in Python. More precisely, it's not always clear how best to specify the type of something in a useful way - objects can be quite complex, and often only "parts" are used by any one function, with multiple classes implementing those parts in different ways. This is a consequence of duck typing - see the "more info" link for related discussion on how Python's abstract base classes could be used to tackle this.]

Function annotations are what you make of them.
They can be used for documentation:
def kinetic_energy(mass: 'in kilograms', velocity: 'in meters per second'):
    ...
They can be used for pre-condition checking:
def validate(func, locals):
    for var, test in func.__annotations__.items():
        value = locals[var]
        msg = 'Var: {0}\tValue: {1}\tTest: {2.__name__}'.format(var, value, test)
        assert test(value), msg

def is_int(x):
    return isinstance(x, int)

def between(lo, hi):
    def _between(x):
        return lo <= x <= hi
    return _between

def f(x: between(3, 10), y: is_int):
    validate(f, locals())
    print(x, y)
>>> f(5, 31.1)
Traceback (most recent call last):
...
AssertionError: Var: y Value: 31.1 Test: is_int
Also see http://www.python.org/dev/peps/pep-0362/ for a way to implement type checking.
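For instance, here is a minimal sketch of that idea using inspect.signature (the machinery PEP 362 standardised) to bind call arguments to their annotations; the typecheck decorator name and the error message are my own illustration, not from the PEP:

import inspect

def typecheck(func):
    sig = inspect.signature(func)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            # Only enforce annotations that are actual classes.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('{} must be {}, got {!r}'.format(name, expected.__name__, value))
        return func(*args, **kwargs)

    return wrapper

@typecheck
def mul(a: int, b: int):
    return a * b

mul(2, 3)    # returns 6
mul(2, 'x')  # raises TypeError: b must be int, got 'x'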

Not experienced in python, but I assume the point is to annotate/declare the parameter type that the method expects. Whether or not the expected type is rigidly enforced at runtime is beside the point.
For instance, consider:
intToHexString(param:int)
Although the language may technically allow you to call intToHexString("Hello"), it's not semantically meaningful to do so. Having the :int as part of the method declaration helps to reinforce that.
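For example, a hypothetical sketch of such a function (only the signature is from the answer; the body is mine):

def intToHexString(param: int) -> str:
    return hex(param)

intToHexString(255)      # '0xff'
intToHexString("Hello")  # the annotation doesn't stop this; hex() raises TypeError instead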

It's basically just used for documentation. When someone examines the method signature, they'll see that param is labelled as an int, which tells them the author of the method expected them to pass an int.
Because Python programmers use duck typing, this doesn't mean you have to pass an int, but it tells you the code is expecting something "int-like". So you'll probably have to pass something basically "numeric" in nature, that supports arithmetic operations. Depending on the method it may have to be usable as an index, or it may not.
However, because it's syntax and not just a comment, the annotation is visible to any code that wants to introspect it. This opens up the possibility of writing a typecheck decorator that can enforce strict type checking on arbitrary functions; this allows you to put the type-checking logic in one place, and have each method declare which parameters it wants strictly type checked (by attaching a type annotation) with a minimum of syntax, in a way that is visible to client programmers who are browsing method definitions to find out the interface.
Or you could do other things with those annotations. No standardized meaning has yet been developed. Maybe if someone comes up with a killer feature that uses them and has huge adoption, then it'll one day become part of the Python language, but I suspect the flexibility of using them however you want will be too useful to ever do that.
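As a quick illustration of the introspection this makes possible (a sketch; the function is made up):

def greet(name: str, times: int) -> None:
    print(name * times)

print(greet.__annotations__)
# {'name': <class 'str'>, 'times': <class 'int'>, 'return': <class 'NoneType'>}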

You might also use the "-> ReturnType" notation to indicate what type the function returns.
def mul(a: int, b: int) -> None:
    print(a * b)

Typing for repl/infinite-loop type

Python's new type hinting feature allows us to type hint that a function returns None...
def some_func() -> None:
    pass
... or to leave the return type unspecified, which the PEP dictates should cause static analysers to assume that any return type is possible:
Any function without annotations should be treated as having the most general type possible
However, how should I type hint that a function will never return? For instance, what is the correct way to type hint the return value of these two functions?
def loop_forever():
    while True:
        print('This function never returns because it loops forever')

def always_explode():
    raise Exception('This function never returns because it always raises')
Neither specifying -> None nor leaving the return type unspecified seems correct in these cases.
Even though the “PEP 484 — Type Hints” standard is mentioned both in the question and in the answers, nobody quotes its section The NoReturn type, which covers your question.
Quote:
The typing module provides a special type NoReturn to annotate functions that never return normally. For example, a function that unconditionally raises an exception:
from typing import NoReturn

def stop() -> NoReturn:
    raise RuntimeError('no way')
The section also provides examples of wrong usage. Although it doesn't cover functions with an endless loop, in type theory both cases equally satisfy the "never returns" meaning expressed by that special type.
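So the same annotation fits the looping case from the question; a minimal sketch:

from typing import NoReturn

def loop_forever() -> NoReturn:
    while True:
        print('This function never returns because it loops forever')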
In July 2016, there was no answer to this question yet (now there is NoReturn; see the new accepted answer). These were some of the reasons:
When a function doesn't return, there is no return value (not even None) that a type could be assigned to. So you are not actually trying to annotate a type; you are trying to annotate the absence of a type.
The type hinting PEP has only just been adopted in the standard, as of Python version 3.5. In addition, the PEP only advises on what type annotations should look like, while being intentionally vague on how to use them. So there is no standard telling us how to do anything in particular, beyond the examples.
The PEP has a section Acceptable type hints stating the following:
Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).
Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement, specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)
So it tries to discourage you from doing overly creative things, like throwing an exception inside a return type hint in order to signal that a function never returns.
Regarding exceptions, the PEP states the following:
No syntax for listing explicitly raised exceptions is proposed. Currently the only known use case for this feature is documentational, in which case the recommendation is to put this information in a docstring.
There is a recommendation on type comments, in which you have more freedom, but even that section doesn't discuss how to document the absence of a type.
There is one thing you could try in a slightly different situation: when you want to hint that a parameter, or a return value of some "normal" function, should be a callable that never returns. The syntax is Callable[[ArgTypes...], ReturnType], so you could just omit the return type, as in Callable[[ArgTypes...]]. However, this doesn't conform to the recommended syntax, so strictly speaking it isn't an acceptable type hint. Type checkers will likely choke on it.
Conclusion: you are ahead of your time. This may be disappointing, but there is an advantage for you, too: you can still influence how non-returning functions should be annotated. Maybe this will be an excuse for you to get involved in the standardisation process. :-)
I have two suggestions.
Allow omitting the return type in a Callable hint and allow the type of anything to be forward hinted. This would result in the following syntax:
always_explode: Callable[[]]

def always_explode():
    raise Exception('This function never returns because it always raises')
Introduce a bottom type like in Haskell:
def always_explode() -> ⊥:
    raise Exception('This function never returns because it always raises')
These two suggestions could be combined.
From Python 3.11, the new bottom type typing.Never should be used to type functions that don't return:

from typing import Never

def always_explode() -> Never:
    raise
This replaces typing.NoReturn
... to make the intended meaning more explicit.
I'm guessing at some point they'll deprecate NoReturn in that context, since both are valid in 3.11.
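A quick sketch showing the two spellings side by side (both accepted in 3.11; the function bodies reuse the PEP 484 example from above):

from typing import Never, NoReturn

def stop_new() -> Never:
    raise RuntimeError('no way')

def stop_old() -> NoReturn:
    raise RuntimeError('no way')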

Is it possible to reference function parameters in Python's function annotation?

I'd like to be able to say
def f(param) -> type(param): return param
but I get NameError: name 'param' is not defined. The key thing here is that the return type is a function of a function parameter. I have glanced through https://www.python.org/dev/peps/pep-3107/, but I don't see any precise description of what comprises a valid annotation expression.
I would accept an answer which explains why exactly is this not possible at the moment, i.e., does it not fit into current annotation paradigm or is there a technical problem with this?
There are a few issues with the type(param) approach.
First off, as Oleh mentioned in his answer, all annotations must be valid at the time of the function's definition. In an example like yours, you could potentially have problems due to variable shadowing.
param = 10

def f(param) -> type(param):
    return param

f('a')
Since the global variable param is of type int, the function's annotation is essentially read as f(param: Any) -> int. So when you pass in the argument param with the value 'a', f will return a str, which makes the call inconsistent with the annotation. Admittedly this example is contrived, but from a language-design standpoint it is something to be careful about.
Instead, as jonrsharpe mentioned, often the best way to reference the generic types of parameters is with type variables.
This can be done using the typing.TypeVar class.
from typing import TypeVar

T = TypeVar('T')

def f(param: T) -> T:
    return param
This means that static checkers won't need to actually access the type of param, just check at check-time that there is a way to consider both param and the return value of the same type. I say "consider the same type" because you will sometimes only assert that they both implement the same abstract base class/interface, like numbers.Real.
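For instance, a sketch of constraining a type variable to such an interface rather than a concrete type (the clamp function is made up for illustration):

from numbers import Real
from typing import TypeVar

R = TypeVar('R', bound=Real)

def clamp(x: R, lo: R, hi: R) -> R:
    # Works for int, float, Fraction, ... anything implementing Real.
    return max(lo, min(x, hi))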
Type variables can then be used in generic types:
from typing import List, TypeVar

T = TypeVar('T')

def total(items: List[T]) -> List[T]:
    return [f(item) for item in items]
Using type variables and generics can be better because it adds information and allows a little more flexibility (as explained in the example with numbers.Real). For instance, the ability to use List[T] is really important: type(param) would only return list, not a parameterised List[T]. So using type(param) would actually lose information, not add it.
Therefore, it is a better idea to stick to using type variables and generic types instead.
TL;DR:
Due to variable shadowing, type(param) could lead to inconsistent annotations.
Since when thinking about the types in your system you are sometimes thinking in terms of interfaces (abstract base classes in Python) instead of concrete types, it can be better to rely on ABCs and type variables.
Using type(param) could lose information that would be provided by generics.
Let's take a glance at PEP-484 - Type Hints # Acceptable type hints.
Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).
Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement, specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)
I'd say that your approach is quite interesting and may be useful for static analysis. But if we accept PEPs as a source of an explanation for the current annotation paradigm, the highlighted text explains why the return type can't be computed dynamically at the time the function is called.

When to type-check a function's arguments?

I'm asking about situations where if a wrong type of argument is passed to the function, it could:
Blow up the whole thing.
Return unexpected results
Return nothing
For instance, the function below expects the argument name to be a string. It would throw an exception for any other type that doesn't have a startswith method.
def fruits(name):
    if name.startswith('O'):
        print('Is it Orange?')
There are other cases where a function could halt or cause damage to the system if execution proceeds without type-checking. Whenever there are a lot of functions or functions with a lot of arguments, type checking is tedious and makes the code unreadable. So, is there a standard for doing this? As to 'how to type check' - there are plenty of examples here on stackexchange, but I couldn't find any about where it would be appropriate to do so.
Another example would be:
def fruits(names):
    with open('important_file.txt', 'r+') as fil:
        for name in names:
            if name in fil:
                pass  # Edit the file
Here, if names is a string, each character in it will influence the editing of the file. If it is any other iterable, each element provided by it will influence the editing. Both of these could produce different results.
So, when should we type-check an argument and should we not?
The answer off the top of my head would be: it depends where the input comes from.
If the functions are class methods that get invoked internally, or things like that, you can assume the inputs are valid, because you wrote them!
For example
def add(x, y):
    return x + y

def multiply(a, b):
    product = 0
    for i in range(a):
        product = add(product, b)
    return product
In my add function, I could check that there is a + operator for the parameters x and y. But since I wrote the multiply function, and that is the only function that uses add, it is safe to assume the inputs will be int because that's how I wrote it. Now that argument stands on shaky ground for large code bases where you (hopefully) have shared code, so you can't be sure people don't misuse your functions. But that's why you comment them well to describe the correct use of said function.
If it has to read from a file, get user input, etc, then you may want to do some validation first.
I almost never do type checking in Python. In accordance with Pythonic philosophy I assume that me and other programmers are adult people capable of reading the code (or at least the documentation) and using it properly. I assume that we test our code before we let it destroy something important. After all in most cases if you do something wrong, you'll just see an error and Python's error messages are quite informative most of the time.
The only occasion when I sometimes check types is when I want my function to behave differently depending on the argument's type. But although I sometimes feel compelled to do this, I don't consider it a good practice.
Most often it happens when my function iterates over a list of strings and I fear (or want) that I could get a single string passed into it by accident - this won't throw an error at once because, unfortunately, a string is an iterable too.
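A minimal sketch of that check (the function and its body are made up for illustration):

def process_names(names):
    # A bare string is itself an iterable of one-character strings,
    # so reject it explicitly before iterating.
    if isinstance(names, str):
        raise TypeError('expected an iterable of strings, got a single string')
    for name in names:
        print(name.upper())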

How to know function return type and argument types?

While I am aware of the duck-typing concept of Python, I sometimes struggle with the type of arguments of functions, or the type of the return value of the function.
Now, if I wrote the function myself, I DO know the types. But what if somebody else wants to use and call my functions - how are they expected to know the types?
I usually put type information in the function's docstring (like: "...the id argument should be an integer..." and "... the function will return a (string, [integer]) tuple.")
But is looking up the information in the docstring (and putting it there, as a coder) really the way it is supposed to be done?
Edit: While the majority of answers seem to direct towards "yes, document!" I feel this is not always very easy for 'complex' types. For example: how to describe concisely in a docstring that a function returns a list of tuples, with each tuple of the form (node_id, node_name, uptime_minutes) and that the elements are respectively a string, string and integer?
The docstring PEP documentation doesn't give any guidelines on that.
I guess the counterargument will be that in that case classes should be used, but I find python very flexible because it allows passing around these things using lists and tuples, i.e. without classes.
Well things have changed a little bit since 2011! Now there's type hints in Python 3.5 which you can use to annotate arguments and return the type of your function. For example this:
def greeting(name):
    return 'Hello, {}'.format(name)
can now be written as this:
def greeting(name: str) -> str:
    return 'Hello, {}'.format(name)
Now that the types are visible, there is a form of optional static type checking that will help you and your type checker investigate your code.
For more explanation I suggest taking a look at the blog post on type hints on the PyCharm blog.
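For instance, a sketch of the kind of mistake such a checker (mypy, for example) would flag, even though nothing is enforced at runtime:

def greeting(name: str) -> str:
    return 'Hello, {}'.format(name)

greeting(42)  # runs fine at runtime, but a static checker reports an incompatible argument type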
This is how dynamic languages work. It is not always a good thing though, especially if the documentation is poor - anyone tried to use a poorly documented python framework? Sometimes you have to revert to reading the source.
Here are some strategies to avoid problems with duck typing:
create a language for your problem domain
this will help you to name stuff properly
use types to represent concepts in your domain language
name function parameters using the domain language vocabulary
Also, one of the most important points:
keep data as local as possible!
There should only be a few well-defined and documented types being passed around. Anything else should be obvious by looking at the code: Don't have weird parameter types coming from far away that you can't figure out by looking in the vicinity of the code...
Related, (and also related to docstrings), there is a technique in python called doctests. Use that to document how your methods are expected to be used - and have nice unit test coverage at the same time!
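A tiny sketch of a doctest (the function is hypothetical), runnable with python -m doctest yourmodule.py:

def double(n):
    """Return n multiplied by two.

    >>> double(2)
    4
    >>> double('ab')
    'abab'
    """
    return n * 2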
Actually there is no need, as Python is a dynamic language, BUT if you want to specify a return type then do this:
def foo(a) -> int:  # after the arrow, write the return type
    return 1 + a
But it won't really help you much. It doesn't raise exceptions the way a statically-typed language like Java, C or C++ does. Even if you returned a string, it won't raise any exceptions.
And then for the argument type, do this:
def foo(a: int) -> int:
    return a + 1
After the colon (:) you can specify the argument type.
This won't help either; to prove it, here is an example:
def printer(a: int) -> int:
    print(a)

printer("hello")
The function above actually just returns None, because we didn't return anything - we told it we would return an int, but as I said, nothing is enforced. Maybe it could help in IDEs (not all, but a few, like PyCharm - not in VS Code).
I attended a Coursera course that had a lesson in which we were taught about the design recipe.
I found the docstring format below pretty useful.
def area(base, height):
    '''(number, number) -> number          # Type contract

    Return the area of a triangle with
    dimensions base and height.            # Description

    >>> area(10, 5)                        # Examples
    25.0
    >>> area(2.5, 3)
    3.75
    '''
    return (base * height) / 2
I think if docstrings are written in this way, it might help developers a lot.
Link to video [Do watch the video] : https://www.youtube.com/watch?v=QAPg6Vb_LgI
Yes, you should use docstrings to make your classes and functions more friendly to other programmers:
More: http://www.python.org/dev/peps/pep-0257/#what-is-a-docstring
Some editors allow you to see docstrings while typing, so it really makes work easier.
Yes it is.
In Python a function doesn't always have to return a variable of the same type (although your code will be more readable if your functions do always return the same type). That means that you can't specify a single return type for the function.
In the same way, the parameters don't always have to be of the same type either.
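That said, a sketch of how today's type hints can still express a function like that, using a Union (this example is mine, not the answer's):

from typing import Union

def parse_number(text: str) -> Union[int, float]:
    # Returns an int for '42' but a float for '4.2'.
    return float(text) if '.' in text else int(text)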
For example: how to describe concisely in a docstring that a function returns a list of tuples, with each tuple of the form (node_id, node_name, uptime_minutes) and that the elements are respectively a string, string and integer?
Um... There is no "concise" description of this. It's complex. You've designed it to be complex. And it requires complex documentation in the docstring.
Sorry, but complexity is -- well -- complex.
Answering my own question >10 years later, there are now 2 things I use to manage this:
type hints (as already mentioned in other answers)
dataclasses, when parameter or return type hints become unwieldy/hard to read
As an example of the latter, say I have a function
def do_something(param: int) -> list[tuple[list, int | None]]:
    ...
    return result
I would now rewrite it using a dataclass, e.g. along the lines of:
from dataclasses import dataclass

@dataclass
class Stat:
    entries: list
    value: int | None = None

def do_something(param: int) -> list[Stat]:
    ...
    return result
Yes, since it's a dynamically typed language ;)
Read this for reference: PEP 257
Docstrings (and documentation in general). Python 3 introduces (optional) function annotations, as described in PEP 3107 (but don't leave out docstrings)

Checking arguments in numerical Python code

I find myself writing the same argument checking code all the time for number-crunching:
def myfun(a, b):
    if a < 0:
        raise ValueError('a cannot be < 0 (was a=%s)' % a)
    # more if.. raise exception stuff here ...
    return a + b
Is there a better way? I was told not to use 'assert' for these things (though I don't see the problem, apart from not knowing the value of the variable that caused the error).
edit: To clarify, the arguments are usually just numbers and the error checking conditions can be complex, non-trivial and will not necessarily lead to an exception later, but simply to a wrong result. (unstable algorithms, meaningless solutions etc)
assert gets optimized away if you run with python -O (modest optimizations, but sometimes nice to have). One preferable alternative if you have patterns that often repeat may be to use decorators -- great way to factor out repetition. E.g., say you have a zillion functions that must be called with arguments by-position (not by-keyword) and must have their first arguments positive; then...:
def firstargpos(f):
    def wrapper(first, *args):
        if first < 0:
            raise ValueError(whateveryouwish)
        return f(first, *args)
    return wrapper
then you say something like:
@firstargpos
def myfun(a, b):
    ...
and the checks are performed in the decorators (or rather the wrapper closure it returns) once and for all. So, the only tricky part is figuring out exactly what checks your functions need and how best to call the decorator(s) to express those (hard to say, without seeing the set of functions you're defining and the set of checks each needs!-). Remember, DRY ("Don't Repeat Yourself") is close to the top spot among guiding principles in software development, and Python has reasonable support to allow you to implement DRY and avoid boilerplatey, repetitious code!-)
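Building on that, a sketch of a parameterised variant, so each function can declare its own check (the names check_first and predicate are illustrative, not standard):

def check_first(predicate, message):
    def decorator(f):
        def wrapper(first, *args):
            if not predicate(first):
                raise ValueError(message % (first,))
            return f(first, *args)
        return wrapper
    return decorator

@check_first(lambda x: x >= 0, 'a cannot be < 0 (was a=%s)')
def myfun(a, b):
    return a + b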
You don't want to use assert because your code can be run (and is by default on some systems) in such a way that assert lines are not checked and do not raise errors (-O command line flag).
If you're using a lot of variables that are all supposed to have those same properties, why not subclass whatever type you're using and add that check to the class itself? Then when you use your new class, you know you never have an invalid value, and don't have to go checking for it all over the place.
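A minimal sketch of that subclassing idea, assuming the invariant is "non-negative" (the class name is made up):

class NonNegative(float):
    def __new__(cls, value):
        # Validate once, at construction time.
        if value < 0:
            raise ValueError('value cannot be < 0 (was %s)' % value)
        return super().__new__(cls, value)

a = NonNegative(3.5)   # fine
b = NonNegative(-1.0)  # raises ValueError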
I'm not sure if this will answer your question, but it strikes me that checking a lot of arguments at the start of a function isn't very pythonic.
What I mean by this is that it is the assumption of most pythonistas that we are all consenting adults, and we trust each other not to do something stupid. Here's how I'd write your example:
def myfun(a, b):
    '''a cannot be < 0'''
    return a + b
This has three distinct advantages. First off, it's concise; there's really no extra code doing anything unrelated to what you're actually trying to get done. Second, it puts the information exactly where it belongs, in help(myfun), where Pythonistas are expected to look for usage notes. Finally, is a negative value for a really an error? Although you might think so, unless something definitely will break when a is negative (here it probably won't), then maybe letting it slip through and cause an error further up the call stack is wiser. After all, if a + b is in error, it raises an exception which gets passed up the call stack, and behavior is still pretty much the same.

Categories

Resources