Consider having a function that returns a complex value:
def my_fn():
    return (create_this(), create_that(), someotherstuff)
Assuming Pylance knows what create_this() returns, as well as what the other values are, it will infer that my_fn returns Tuple[Type1, Type2, Type3].
Now let's say you have a function that expects to receive an argument that contains whatever this function returned, but you want to still get type hints. You can do this:
def process_fn_value(data: Tuple[Type1, Type2, Type3]):
    ...
But that's rather verbose, isn't it? It would be better to just write:
def process_fn_value(data: ReturnOf[my_fn]):
    ...
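(For comparison, a common workaround that needs no new typing machinery, my suggestion rather than part of the question, is to name the tuple type once and reuse it in both signatures, so the return type and the parameter type can never drift apart. The concrete types below are hypothetical stand-ins for Type1/Type2/Type3.)

from typing import Tuple

# Hypothetical stand-ins for Type1/Type2/Type3 from the question.
MyFnResult = Tuple[int, str, float]

def my_fn() -> MyFnResult:
    return (1, "x", 2.0)

def process_fn_value(data: MyFnResult) -> None:
    ...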
I have tried the following, hoping to extract the type from a function by making a generic type and then calling type() on it. But it doesn't even properly decode the type of the generic value:
T = TypeVar('T')

def RetVal(cb: Callable[[Any], T]):
    return type(cb())

def test_fn():
    return "test"

def test_consumer(arg: RetVal[test_fn]):
    return arg
Another thing I tried, mostly after looking at how Generic[T] is implemented:
class ReturnValue(Type[T], _root=True):
    def __new__(func, cb: Callable[[], Generic[T]]) -> T:
        return type(cb())

def test_fn():
    return [1, 2, 3]

def test_consumer(arg: ReturnValue[test_fn]):
    return arg

testtype = ReturnValue(test_fn)
None of these work.
Is there any such type hint in Python?
Note: If you think that this is a problem I shouldn't be facing if I wrote the code in such and such way, maybe you're right. But please consider sometimes one cannot change EVERYTHING and yet might be able to create at least partial improvement in the codebase.
I have such a function for currying. The problem is that I don't know how to make this function return a decorated function with the correct types. Help, I have not found a solution anywhere.
import functools
import typing as ty
from typing import TypeVar, Callable, Any, Optional

F = TypeVar("F", bound=Callable[..., Any])

def curry(func: F, max_argc: Optional[int] = None):
    if max_argc is None:
        max_argc = func.__code__.co_argcount

    @functools.wraps(func)
    def wrapped(*args):
        argc = len(args)
        if argc < max_argc:
            return curry(functools.partial(func, *args), max_argc - argc)
        else:
            return func(*args)

    return ty.cast(F, wrapped)
@curry
def foo(x: int, y: int) -> int:
    return x + y

foo("df")(5)  # mypy error: Too few arguments for "foo"
              # mypy error: "int" not callable
              # mypy error: Argument 1 to "foo" has incompatible type "str"; expected "int"  # True
How can I fix the first two mypy errors?
Interestingly enough, I tried the exact same thing: writing a decorator which would return a curried version of any function, including generic higher-order ones.
I tried to build a curry that would allow any partitioning of the input arguments.
Denial
However, AFAIK it is not possible due to some constraints of the Python type system.
I'm struggling to find a generic, typesafe approach right now. I mean, Haskell has it natively, C++ adopted it, and TypeScript manages to do it as well, so why shouldn't Python be able to?
I reached out to mypy alternatives such as pyright, which has AWESOME maintainers, BUT is still bound by the PEPs, which state that some things are just not possible.
Anger
When submitting an issue with pyright for the last missing piece in my curry-chain, I boiled the issue down to the following (as can be seen in this issue: https://github.com/microsoft/pyright/issues/1720):
from typing import TypeVar
from typing_extensions import Protocol

R = TypeVar("R", covariant=True)
S = TypeVar("S", contravariant=True)

class ArityOne(Protocol[S, R]):
    def __call__(self, __s: S) -> R:
        ...

def id_f(x: ArityOne[S, R]) -> ArityOne[S, R]:
    return x

X = TypeVar("X")

def identity(x: X) -> X:
    return x

i: int = id_f(identity)(4)  # Does not type check, expected type `X`, got type `Literal[4]`
Mind the gap, this is a minimal reproducible example of the missing link.
What I initially attempted to do was the following (skipping the actual curry implementation, which, in comparison, resembles a walk in the park):
Write a curry decorator (without types)
Define Unary, Binary and Ternary (etc.) Protocols, which are the more modern version of the function type Callable. Conveniently, Protocols can specify @overload variants for their __call__ methods, which brings me to the next point.
Define CurriedBinary, CurriedTernary (etc.) using Protocols with @overload-decorated __call__ methods.
Define @overload variants of the curry function, e.g. Binary -> CurriedBinary, Ternary -> CurriedTernary
With this, everything was in place, and it works well for fixed-type functions, i.e. int -> int or str -> int -> bool.
I don't have my attempted implementation on hand right now, though.
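As a hedged sketch of that idea, a fixed-type version might look like the following; the names CurriedBinary and curry_fixed, and the int/str/bool signature, are illustrative choices of mine, not the original implementation:

from typing import Callable, Protocol, overload

class CurriedBinary(Protocol):
    # The two legal call shapes of a curried (int, str) -> bool function.
    @overload
    def __call__(self, a: int) -> Callable[[str], bool]: ...
    @overload
    def __call__(self, a: int, b: str) -> bool: ...
    def __call__(self, a, b=None):  # implementation stub behind the overloads
        raise Exception()

def curry_fixed(f: Callable[[int, str], bool]) -> CurriedBinary:
    raise Exception()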
However, when currying generic functions such as map or filter, it fails to match the curried, generic version of the function against the actual types.
Bargaining
This happens due to how scoping of type variables works; the GitHub issue linked above explains it in greater detail.
Essentially what happens is that the type variable of the generic function-to-be-curried cannot be influenced by the actual type of the data-to-be-passed partially, because there is a class Protocol in between, which defines its own scope of type variables.
Trying to wrap or reorganize stuff did not yield fruitful results.
Depression
I used Protocols there to be able to represent the type of a curried function, which is not possible with e.g. Callable, and although pyright displays the type of an overloaded function as Overload[contract1, contract2, ...], there is no such symbol, only @overload.
So either way there's something that prevents you from expressing the type you want.
It is currently not possible to represent a fully generic, typesafe curry function due to limitations of the Python type system.
Acceptance
However, it is possible to compromise on certain features, like generics, or arbitrary partitioning of input arguments.
The following implementation of curry works in pyright 1.1.128.
from typing import TypeVar, Callable, List, Optional, Union

R = TypeVar("R", covariant=True)
S = TypeVar("S", contravariant=True)
T = TypeVar("T", contravariant=True)

def curry(f: Callable[[T, S], R]) -> Callable[[T], Callable[[S], R]]:
    raise Exception()

X = TypeVar("X")
Y = TypeVar("Y")

def function(x: X, y: X) -> X:
    raise Exception()

def to_optional(x: X) -> Optional[X]:
    raise Exception()

def map(f: Callable[[X], Y], xs: List[X]) -> List[Y]:
    raise Exception()

i: int = curry(function)(4)(5)
s: List[Optional[Union[str, int]]] = curry(map)(to_optional)(["dennis", 4])
First things first, I wouldn't make it a decorator; I'd wrap it as curry(foo). I find it confusing to look at an API where the decorated function's signature is different from its initial definition.
On the subject of types, I would be very impressed if the general case is possible with Python type hints. I'm not sure how I'd even do it in Scala. You can cover a limited number of cases, using @overload; for functions of two parameters:
T1 = TypeVar("T1")
T2 = TypeVar("T2")
U = TypeVar("U")
#overload
def curry(
func: Callable[[T1, T2], U],
max_argc: Optional[int]
) -> Callable[[T1], Callable[[T2], U]]:
...
adding versions for one, three, four parameters, etc. Functions with lots of parameters are code smells anyway, with the exception of varargs, which I'm not sure even make sense to curry.
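For completeness, here is a hedged sketch of how those @overload stubs would sit in front of one real, deliberately untyped implementation (the stubs carry the types); it handles only the two-parameter case for brevity:

def curry(func, max_argc=None):
    # Real implementation behind the @overload declarations above.
    def take_first(t1):
        def take_second(t2):
            return func(t1, t2)
        return take_second
    return take_first

add = curry(lambda x, y: x + y)
print(add(1)(2))  # 3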
Perhaps as a remnant of my days with a strongly-typed language (Java), I often find myself writing functions and then forcing type checks. For example:
def orSearch(d, query):
    assert type(d) == dict
    assert type(query) == list
Should I keep doing this? What are the advantages of doing or not doing this?
Stop doing that.
The point of using a "dynamic" language (that is, strongly typed as to values*, untyped as to variables, and late bound) is that your functions can be properly polymorphic: they will cope with any object which supports the interface your function relies on ("duck typing").
Python defines a number of common protocols (e.g. iterable) which different types of object may implement without being related to each other. Protocols are not per se a language feature (unlike a Java interface).
The practical upshot of this is that, in general, as long as you understand the types in your language and you comment appropriately (including with docstrings, so other people also understand the types in your programme), you can write less code, because you don't have to code around your type system. You won't end up writing the same code for different types, just with different type declarations (even if the classes are in disjoint hierarchies), and you won't have to figure out which casts are safe and which are not, if you want to try to write just the one piece of code.
There are other languages that theoretically offer the same thing: type inferred languages. The most popular are C++ (using templates) and Haskell. In theory (and probably in practice), you can end up writing even less code, because types are resolved statically, so you won't have to write exception handlers to deal with being passed the wrong type. I find that they still require you to programme to the type system, rather than to the actual types in your programme (their type systems are theorem provers, and to be tractable, they don't analyse your whole programme). If that sounds great to you, consider using one of those languages instead of python (or ruby, smalltalk, or any variant of lisp).
Instead of type testing, in python (or any similar dynamic language) you'll want to use exceptions to catch when an object does not support a particular method. In that case, either let it go up the stack, or catch it, and raise your exception about an improper type. This type of "better to ask forgiveness than permission" coding is idiomatic python, and greatly contributes to simpler code.
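As a hedged illustration of that style (the function and its arguments here are hypothetical):

def or_search(d, query):
    # Rely on the interfaces, not the concrete types.
    try:
        getter = d.get          # mapping interface
    except AttributeError:
        raise TypeError("d must support the mapping interface")
    results = []
    for term in query:          # any iterable of keys will do
        value = getter(term)
        if value is not None:
            results.append(value)
    return results

print(or_search({"a": 1, "b": 2}, ["a", "c"]))  # [1]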
* In practice. Class changes are possible in Python and Smalltalk, but rare. It's also not the same as casting in a low level language.
Update: You can use mypy to statically check your Python outside of production. Annotating your code lets those who want static checking verify consistency, while those who don't can simply ignore the annotations.
In most cases it would interfere with duck typing and with inheritance.
Inheritance: You certainly intended to write something with the effect of
assert isinstance(d, dict)
to make sure that your code also works correctly with subclasses of dict. This is similar to the usage in Java, I think. But Python has something that Java has not, namely
Duck typing: most built-in functions do not require that an object belong to a specific class, only that it have certain member functions that behave in the right way. The for loop, for example, only requires that the loop variable be an iterable, which means that it has the member functions __iter__() and __next__(), and that they behave correctly.
Therefore, if you do not want to close the door to the full power of Python, do not check for specific types in your production code. (It might be useful for debugging, nevertheless.)
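For instance, a loop written against the iterable protocol works with unrelated types for free:

def total_length(items):
    # Works for any iterable of sized things: list, tuple, generator, ...
    return sum(len(item) for item in items)

print(total_length(["ab", "cde"]))        # 5, from a list
print(total_length(("ab", "cde")))        # 5, from a tuple
print(total_length(x * 2 for x in "ab"))  # 4, from a generator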
If you insist on adding type checking to your code, you may want to look into annotations and how they might simplify what you have to write. One of the questions on StackOverflow introduced a small, obfuscated type-checker taking advantage of annotations. Here is an example based on your question:
>>> def statictypes(a):
def b(a, b, c):
if b in a and not isinstance(c, a[b]): raise TypeError('{} should be {}, not {}'.format(b, a[b], type(c)))
return c
return __import__('functools').wraps(a)(lambda *c: b(a.__annotations__, 'return', a(*[b(a.__annotations__, *d) for d in zip(a.__code__.co_varnames, c)])))
>>> #statictypes
def orSearch(d: dict, query: dict) -> type(None):
pass
>>> orSearch({}, {})
>>> orSearch([], {})
Traceback (most recent call last):
File "<pyshell#162>", line 1, in <module>
orSearch([], {})
File "<pyshell#155>", line 5, in <lambda>
return __import__('functools').wraps(a)(lambda *c: b(a.__annotations__, 'return', a(*[b(a.__annotations__, *d) for d in zip(a.__code__.co_varnames, c)])))
File "<pyshell#155>", line 5, in <listcomp>
return __import__('functools').wraps(a)(lambda *c: b(a.__annotations__, 'return', a(*[b(a.__annotations__, *d) for d in zip(a.__code__.co_varnames, c)])))
File "<pyshell#155>", line 3, in b
if b in a and not isinstance(c, a[b]): raise TypeError('{} should be {}, not {}'.format(b, a[b], type(c)))
TypeError: d should be <class 'dict'>, not <class 'list'>
>>> orSearch({}, [])
Traceback (most recent call last):
File "<pyshell#163>", line 1, in <module>
orSearch({}, [])
File "<pyshell#155>", line 5, in <lambda>
return __import__('functools').wraps(a)(lambda *c: b(a.__annotations__, 'return', a(*[b(a.__annotations__, *d) for d in zip(a.__code__.co_varnames, c)])))
File "<pyshell#155>", line 5, in <listcomp>
return __import__('functools').wraps(a)(lambda *c: b(a.__annotations__, 'return', a(*[b(a.__annotations__, *d) for d in zip(a.__code__.co_varnames, c)])))
File "<pyshell#155>", line 3, in b
if b in a and not isinstance(c, a[b]): raise TypeError('{} should be {}, not {}'.format(b, a[b], type(c)))
TypeError: query should be <class 'dict'>, not <class 'list'>
>>>
You might look at the type-checker and wonder, "What on earth is that doing?" I decided to find out for myself and turned it into readable code. The second draft eliminated the b function (you could call it verify). The third and final draft made a few improvements and is shown below for your use:
import functools

def statictypes(func):
    template = '{} should be {}, not {}'

    @functools.wraps(func)
    def wrapper(*args):
        for name, arg in zip(func.__code__.co_varnames, args):
            klass = func.__annotations__.get(name, object)
            if not isinstance(arg, klass):
                raise TypeError(template.format(name, klass, type(arg)))
        result = func(*args)
        klass = func.__annotations__.get('return', object)
        if not isinstance(result, klass):
            raise TypeError(template.format('return', klass, type(result)))
        return result

    return wrapper
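Usage mirrors the obfuscated original above:

@statictypes
def orSearch(d: dict, query: list) -> type(None):
    pass

orSearch({}, [])   # passes silently
orSearch([], [])   # TypeError: d should be <class 'dict'>, not <class 'list'>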
Edit:
It has been over four years since this answer was written, and a lot has changed in Python since that time. As a result of those changes and personal growth in the language, it seems beneficial to revisit the type-checking code and rewrite it to take advantage of new features and improved coding technique. Therefore, the following revision is provided that makes a few marginal improvements to the statictypes (now renamed static_types) function decorator.
#! /usr/bin/env python3
import functools
import inspect

def static_types(wrapped):
    def replace(obj, old, new):
        return new if obj is old else obj
    signature = inspect.signature(wrapped)
    parameter_values = signature.parameters.values()
    parameter_names = tuple(parameter.name for parameter in parameter_values)
    parameter_types = tuple(
        replace(parameter.annotation, parameter.empty, object)
        for parameter in parameter_values
    )
    return_type = replace(signature.return_annotation, signature.empty, object)

    @functools.wraps(wrapped)
    def wrapper(*arguments):
        for argument, parameter_type, parameter_name in zip(
                arguments, parameter_types, parameter_names):
            if not isinstance(argument, parameter_type):
                raise TypeError(f'{parameter_name} should be of type '
                                f'{parameter_type.__name__}, not '
                                f'{type(argument).__name__}')
        result = wrapped(*arguments)
        if not isinstance(result, return_type):
            raise TypeError(f'return should be of type '
                            f'{return_type.__name__}, not '
                            f'{type(result).__name__}')
        return result

    return wrapper
This is a non-idiomatic way of doing things. Typically in Python you would use try/except tests.
def orSearch(d, query):
    try:
        d.get("key")            # relies on the mapping interface
    except AttributeError:      # a non-mapping has no .get
        print("oops")
    try:
        foo = query[:2]         # relies on slicing
    except TypeError:           # a non-sequence cannot be sliced
        print("durn")
Personally, I have an aversion to asserts; it seems the programmer could see trouble coming but couldn't be bothered to think about how to handle it. The other problem is that your example will assert if either parameter is a class derived from the one you are expecting, even though such classes should work! In your example above I would go for something like:
def orSearch(d, query):
    """ Description of what your function does INCLUDING parameter types and descriptions """
    result = None
    if not isinstance(d, dict) or not isinstance(query, list):
        print "An Error Message"
        return result
    ...
Note that type only matches if the type is exactly as expected, whereas isinstance works for derived classes as well, e.g.:
>>> class dd(dict):
... def __init__(self):
... pass
...
>>> d1 = dict()
>>> d2 = dd()
>>> type(d1)
<type 'dict'>
>>> type(d2)
<class '__main__.dd'>
>>> type (d1) == dict
True
>>> type (d2) == dict
False
>>> isinstance(d1, dict)
True
>>> isinstance(d2, dict)
True
>>>
You could consider throwing a custom exception rather than using an assert. You could even generalise further by checking that the parameters have the methods that you need.
BTW, it may be finicky of me, but I always try to avoid assert in C/C++, on the grounds that if it stays in the code, then someone in a few years' time will make a change that ought to be caught by it, not test it well enough in debug mode for the assert to fire (or even not test it at all), and then compile in release mode, which removes all asserts, i.e. all the error checking that was done that way, and now we have unreliable code and a major headache finding the problems.
I agree with Steve's approach when you need to do type checking. I don't often find the need to do type checking in Python, but there is at least one situation where I do: when an unchecked type could produce an incorrect answer that causes an error later in computation. These kinds of errors can be difficult to track down, and I've experienced them a number of times in Python. Like you, I learned Java first and didn't have to deal with them often.
Let's say you had a simple function that expects an array and returns the first element.
def func(arr): return arr[0]
if you call it with an array, you will get the first element of the array.
>>> func([1,2,3])
1
You will also get a response if you call it with a string or an object of any class that implements the __getitem__ magic method.
>>> func("123")
'1'
This would give you a response, but in this case it's of the wrong type. This can happen with objects that have the same method signature. You may not discover the error until much later in computation. If you do experience this in your own code, it usually means that there was an error in prior computation, but having the check there would catch it earlier. However, if you're writing a python package for others, it's probably something you should consider regardless.
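When that risk is real, a narrow check at the boundary catches the error where it happens rather than much later (restricting the argument to list here is an illustrative choice, not a general recommendation):

def func(arr):
    if not isinstance(arr, list):
        raise TypeError("expected list, got %s" % type(arr).__name__)
    return arr[0]

print(func([1, 2, 3]))  # 1
func("123")             # TypeError instead of silently returning '1'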
You should not incur a large performance penalty for the check, but it will make your code more difficult to read, which is a big thing in the Python world.
Two things.
First, if you're willing to spend ~$200, you can get a pretty good python IDE. I use PyCharm and have been really impressed. (It's by the same people who make ReSharper for C#.) It will analyze your code as you write it, and look for places where variables are of the wrong type (among a pile of other things).
Second:
Before I used PyCharm, I ran into the same problem, namely that I'd forget the specific signatures of functions I wrote. I may have found this somewhere, or maybe I wrote it (I can't remember now), but anyway it's a decorator that you can use around your function definitions to do the type checking for you.
Call it like this:
@require_type('paramA', str)
@require_type('paramB', list)
@require_type('paramC', collections.Counter)
def my_func(paramA, paramB, paramC):
    paramB.append(paramC[paramA].most_common())
    return paramB
Anyway, here's the code of the decorator.
def require_type(my_arg, *valid_types):
    '''
    A simple decorator that performs type checking.

    @param my_arg: string indicating argument name
    @param valid_types: *list of valid types
    '''
    def make_wrapper(func):
        if hasattr(func, 'wrapped_args'):
            wrapped = getattr(func, 'wrapped_args')
        else:
            body = func.func_code
            wrapped = list(body.co_varnames[:body.co_argcount])
        try:
            idx = wrapped.index(my_arg)
        except ValueError:
            raise NameError(my_arg)

        def wrapper(*args, **kwargs):
            def fail(arg):
                all_types = ', '.join(str(typ) for typ in valid_types)
                raise TypeError('%r was type %s, expected to be in following list: %s'
                                % (my_arg, type(arg), all_types))

            if len(args) > idx:
                arg = args[idx]
                if not isinstance(arg, valid_types):
                    fail(arg)
            else:
                if my_arg in kwargs:
                    arg = kwargs[my_arg]
                    if not isinstance(arg, valid_types):
                        fail(arg)
            return func(*args, **kwargs)

        wrapper.wrapped_args = wrapped
        return wrapper
    return make_wrapper
I know it's not Pythonic to write functions that care about the type of the arguments, but there are cases when it's simply impossible to ignore types because they are handled differently.
Having a bunch of isinstance checks in your function is just ugly; is there any function decorator available that enables function overloads? Something like this:
@overload(str)
def func(val):
    print('This is a string')

@overload(int)
def func(val):
    print('This is an int')
Update:
Here's some comments I left on David Zaslavsky's answer:
With a few modification[s], this will suit my purposes pretty well. One other limitation I noticed in your implementation, since you use func.__name__ as the dictionary key, you are prone to name collisions between modules, which is not always desirable. [cont'd]
[cont.] For example, if I have one module that overloads func, and another completely unrelated module that also overloads func, these overloads will collide because the function dispatch dict is global. That dict should be made local to the module, somehow. And not only that, it should also support some kind of 'inheritance'. [cont'd]
[cont.] By 'inheritance' I mean this: say I have a module first with some overloads. Then two more modules that are unrelated but each import first; both of these modules add new overloads to the already existing ones that they just imported. These two modules should be able to use the overloads in first, but the new ones that they just added should not collide with each other between modules. (This is actually pretty hard to do right, now that I think about it.)
Some of these problems could possibly be solved by changing the decorator syntax a little bit:
first.py

@overload(str, str)
def concatenate(a, b):
    return a + b

@concatenate.overload(int, int)
def concatenate(a, b):
    return str(a) + str(b)

second.py

from first import concatenate

@concatenate.overload(float, str)
def concatenate(a, b):
    return str(a) + b
Since Python 3.4 the functools module supports a @singledispatch decorator (the annotation-based register shown below requires Python 3.7+). It works like this:
from functools import singledispatch

@singledispatch
def func(val):
    raise NotImplementedError

@func.register
def _(val: str):
    print('This is a string')

@func.register
def _(val: int):
    print('This is an int')
Usage:

func("test")  # -> "This is a string"
func(1)       # -> "This is an int"
func(None)    # -> raises NotImplementedError
Quick answer: there is an overload package on PyPI which implements this more robustly than what I describe below, although using a slightly different syntax. It's declared to work only with Python 3 but it looks like only slight modifications (if any, I haven't tried) would be needed to make it work with Python 2.
Long answer: In languages where you can overload functions, the name of a function is (either literally or effectively) augmented by information about its type signature, both when the function is defined and when it is called. When a compiler or interpreter looks up the function definition, then, it uses both the declared name and the types of the parameters to resolve which function to access. So the logical way to implement overloading in Python is to implement a wrapper that uses both the declared name and the parameter types to resolve the function.
Here's a simple implementation:
from collections import defaultdict

def determine_types(args, kwargs):
    return tuple([type(a) for a in args]), \
           tuple([(k, type(v)) for k, v in kwargs.iteritems()])

function_table = defaultdict(dict)

def overload(arg_types=(), kwarg_types=()):
    def wrap(func):
        named_func = function_table[func.__name__]
        named_func[arg_types, kwarg_types] = func
        def call_function_by_signature(*args, **kwargs):
            return named_func[determine_types(args, kwargs)](*args, **kwargs)
        return call_function_by_signature
    return wrap
overload should be called with two optional arguments, a tuple representing the types of all positional arguments and a tuple of tuples representing the name-type mappings of all keyword arguments. Here's a usage example:
>>> #overload((str, int))
... def f(a, b):
... return a * b
>>> #overload((int, int))
... def f(a, b):
... return a + b
>>> print f('a', 2)
aa
>>> print f(4, 2)
6
>>> #overload((str,), (('foo', int), ('bar', float)))
... def g(a, foo, bar):
... return foo*a + str(bar)
>>> #overload((str,), (('foo', float), ('bar', float)))
... def g(a, foo, bar):
... return a + str(foo*bar)
>>> print g('a', foo=7, bar=4.4)
aaaaaaa4.4
>>> print g('b', foo=7., bar=4.4)
b30.8
Shortcomings of this include:
It doesn't actually check that the function the decorator is applied to is even compatible with the arguments given to the decorator. You could write

@overload((str, int))
def h():
    return 0

and you'd get an error when the function was called.
It doesn't gracefully handle the case where no overloaded version exists corresponding to the types of the arguments passed (it would help to raise a more descriptive error)
It distinguishes between named and positional arguments, so something like
g('a', 7, bar=4.4)
doesn't work.
There are a lot of nested parentheses involved in using this, as in the definitions for g.
As mentioned in the comments, this doesn't deal with functions having the same name in different modules.
All of these could be remedied with enough fiddling, I think. In particular, the issue of name collisions is easily resolved by storing the dispatch table as an attribute of the function returned from the decorator. But as I said, this is just a simple example to demonstrate the basics of how to do it.
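As a hedged sketch of that last fix, the dispatch table can live on the returned function itself, which also enables the first.py/second.py syntax proposed in the comments above (names here are illustrative):

def overload(*arg_types):
    def wrap(func):
        def dispatcher(*args):
            key = tuple(type(a) for a in args)
            return dispatcher.table[key](*args)

        # The table lives on the dispatcher, not in a module-global dict.
        dispatcher.table = {arg_types: func}

        def register(*more_types):
            def add(f):
                dispatcher.table[more_types] = f
                return dispatcher  # rebinding the name keeps the same dispatcher
            return add

        dispatcher.overload = register
        return dispatcher
    return wrap

@overload(str, str)
def concatenate(a, b):
    return a + b

@concatenate.overload(int, int)
def concatenate(a, b):
    return str(a) + str(b)

print(concatenate("x", "y"))  # xy
print(concatenate(1, 2))      # 12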
This doesn't directly answer your question, but if you really want to have something that behaves like an overloaded function for different types and (quite rightly) don't want to use isinstance then I'd suggest something like:
def func(int_val=None, str_val=None):
    if sum(x is not None for x in (int_val, str_val)) != 1:
        raise TypeError("exactly one of int_val and str_val should be passed in")
    if int_val is not None:
        print('This is an int')
    if str_val is not None:
        print('This is a string')
In use the intent is obvious, and it doesn't even require the different options to have different types:
func(int_val=3)
func(str_val="squirrel")
Yes, there is an overload decorator in the typing library that can be used to help make complex type hints easier.
from collections.abc import Sequence
from typing import overload

@overload
def double(input_: int) -> int:
    ...

@overload
def double(input_: Sequence[int]) -> list[int]:
    ...

def double(input_: int | Sequence[int]) -> int | list[int]:
    if isinstance(input_, Sequence):
        return [i * 2 for i in input_]
    return input_ * 2
See the typing module's documentation on @overload for more details.
Just noticed it is an 11-year-old question; sorry to bring it up again, it was by mistake.
I have a Python function that takes a numeric argument that must be an integer in order for it behave correctly. What is the preferred way of verifying this in Python?
My first reaction is to do something like this:
def isInteger(n):
    return int(n) == n
But I can't help thinking that this is 1) expensive 2) ugly and 3) subject to the tender mercies of machine epsilon.
Does Python provide any native means of type checking variables? Or is this considered to be a violation of the language's dynamically typed design?
EDIT: since a number of people have asked - the application in question works with IPv4 prefixes, sourcing data from flat text files. If any input is parsed into a float, that record should be viewed as malformed and ignored.
isinstance(n, int)
If you need to know whether it's definitely an actual int and not a subclass of int (generally you shouldn't need to do this):
type(n) is int
This:

    return int(n) == n

isn't such a good idea, as cross-type comparisons can be true, notably int(3.0) == 3.0.
Yeah, as Evan said, don't type check. Just try to use the value:
def myintfunction(value):
    """ Please pass an integer """
    return 2 + value
That doesn't have a typecheck. It is much better! Let's see what happens when I try it:
>>> myintfunction(5)
7
That works, because it is an integer. Hmm. Let's try some text.
>>> myintfunction('text')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in myintfunction
TypeError: unsupported operand type(s) for +: 'int' and 'str'
It shows an error, TypeError, which is what it should do anyway. If the caller wants to catch that, it is possible.
What would you do if you did a typecheck? Show an error, right? So you don't have to typecheck, because the error already shows up automatically.
Plus since you didn't typecheck, you have your function working with other types:
Floats:
>>> print myintfunction(2.2)
4.2
Complex numbers:
>>> print myintfunction(5j)
(2+5j)
Decimals:
>>> import decimal
>>> myintfunction(decimal.Decimal('15'))
Decimal("17")
Even completely arbitrary objects that can add numbers!
>>> class MyAdderClass(object):
... def __radd__(self, value):
... print 'got some value: ', value
... return 25
...
>>> m = MyAdderClass()
>>> print myintfunction(m)
got some value: 2
25
So you clearly get nothing by typechecking. And lose a lot.
UPDATE:
Since you've edited the question, it is now clear that your application calls some upstream routine that makes sense only with ints.
That being the case, I still think you should pass the parameter as received to the upstream function. The upstream function will deal with it correctly e.g. raising an error if it needs to. I highly doubt that your function that deals with IPs will behave strangely if you pass it a float. If you can give us the name of the library we can check that for you.
But... if the upstream function will behave incorrectly and kill some kids if you pass it a float (I still highly doubt it), then just call int() on it:
def myintfunction(value):
    """ Please pass an integer """
    return upstreamfunction(int(value))
You're still not typechecking, so you get most benefits of not typechecking.
If even after all that, you really want to type check, despite it reducing your application's readability and performance for absolutely no benefit, use an assert to do it.
assert isinstance(...)
assert type() is xxxx
That way we can turn off asserts and remove this <sarcasm>feature</sarcasm> from the program by calling it as
python -OO program.py
Python now supports gradual typing via the typing module and mypy. The typing module is part of the stdlib as of Python 3.5 and can be downloaded from PyPI if you need backports for Python 2 or previous versions of Python 3. You can install mypy by running pip install mypy from the command line.
In short, if you want to verify that some function takes an int and a float and returns a string, you would annotate your function like so:
def foo(param1: int, param2: float) -> str:
    return "testing {0} {1}".format(param1, param2)
If your file was named test.py, you could then typecheck once you've installed mypy by running mypy test.py from the command line.
If you're using an older version of Python without support for function annotations, you can use type comments to accomplish the same effect:
def foo(param1, param2):
    # type: (int, float) -> str
    return "testing {0} {1}".format(param1, param2)
You use the same command mypy test.py for Python 3 files, and mypy --py2 test.py for Python 2 files.
The type annotations are ignored entirely by the Python interpreter at runtime, so they impose minimal to no overhead -- the usual workflow is to work on your code and run mypy periodically to catch mistakes and errors. Some IDEs, such as PyCharm, will understand type hints and can alert you to problems and type mismatches in your code while you're directly editing.
If, for some reason, you need the types to be checked at runtime (perhaps you need to validate a lot of input?), you should follow the advice listed in the other answers -- e.g. use isinstance, issubclass, and the like. There are also some libraries such as enforce that attempt to perform typechecking (respecting your type annotations) at runtime, though I'm uncertain how production-ready they are as of time of writing.
For more information and details, see the mypy website, the mypy FAQ, and PEP 484.
if type(n) is int
This checks if n is a Python int, and only an int. It won't accept subclasses of int.
Type-checking, however, does not fit the "Python way". You'd better use n as an int, and if it throws an exception, catch it and act upon it.
Don't type check. The whole point of duck typing is that you shouldn't have to. For instance, what if someone did something like this:
class MyInt(int):
    # ... extra stuff ...
    pass
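The exact-type check then rejects the subclass even though it behaves like an int everywhere:

class MyInt(int):
    pass

n = MyInt(5)
print(isinstance(n, int))  # True: the subclass is accepted
print(type(n) is int)      # False: the exact-type check rejects it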
Programming in Python and performing typechecking as you might in other languages does seem like choosing a screwdriver to bang a nail in with. It is more elegant to use Python's exception handling features.
From an interactive command line, you can run a statement like:
int('sometext')
That will generate an error - ipython tells me:
<type 'exceptions.ValueError'>: invalid literal for int() with base 10: 'sometext'
Now you can write some code like:
try:
    int(myvar) + 50
except ValueError:
    print "Not a number"
That can be customised to perform whatever operations are required AND to catch any errors that are expected. It looks a bit convoluted but fits the syntax and idioms of Python and results in very readable code (once you become used to speaking Python).
I would be tempted to do something like:
def check_and_convert(x):
    x = int(x)
    assert 0 <= x <= 255, "must be between 0 and 255 (inclusive)"
    return x

class IPv4(object):
    """IPv4 CIDR prefixes are A.B.C.D/E where A-D are
    integers in the range 0-255, and E is an int
    in the range 0-32."""

    def __init__(self, a, b, c, d, e=0):
        self.a = check_and_convert(a)
        self.b = check_and_convert(b)
        self.c = check_and_convert(c)
        self.d = check_and_convert(d)
        e = int(e)
        assert 0 <= e <= 32, "must be between 0 and 32 (inclusive)"
        self.e = e

That way, anything can be passed in, yet you only ever store valid integers.
How about:

def ip(string):
    subs = string.split('.')
    if len(subs) != 4:
        raise ValueError("incorrect input")
    out = tuple(int(v) for v in subs if 0 <= int(v) <= 255)
    if len(out) != 4:
        raise ValueError("incorrect input")
    return out
Of course, there is also the standard isinstance(3, int) function...
For those looking to do this with an assert statement, here is how you can place a variable type check in your code without defining any additional functions. This will stop your code from running if the assertion fails.

assert type(X) == int  # note: assert is a statement, not a function

If no error was raised, the code continues to work. Other than that, the unittest module is a very useful tool for this sort of thing.
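For example, the same check in a unittest test case might read as follows (the value under test is hypothetical):

import unittest

class TestTypes(unittest.TestCase):
    def test_x_is_int(self):
        X = 5  # hypothetical value under test
        self.assertIsInstance(X, int)

if __name__ == "__main__":
    unittest.main()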