Consider the following:
>>> import numbers
>>> import numpy
>>> a = numpy.int_(0)
>>> isinstance(a, int)
False
>>> isinstance(a, numbers.Integral)
True
>>> b = numpy.float_(0)
>>> isinstance(b, float)
True
>>> isinstance(b, numbers.Real)
True
NumPy's numpy.int_ and numpy.float_ types are both in Python's numeric abstract base class hierarchy, but it is strange to me that a np.int_ object is not an instance of the built-in int class, while a np.float_ object is an instance of the built-in float type.
Why is this the case?
Python integers can have arbitrary length: type(10**1000) is still int, and printing it will put a one followed by a thousand zeros on your screen.
NumPy's int64 (which is what int_ is on my machine) is an integer represented in 8 bytes (64 bits), and anything beyond that cannot be represented. For example, np.int_(10)**1000 will give you a wrong answer - but quickly ;).
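A quick way to see the difference (a sketch; np.int_ is 64 bits on most platforms, though it has historically been 32 bits on Windows builds of NumPy):
>>> len(str(10**1000))     # Python int holds the full result: a 1 followed by 1000 zeros
1001
>>> import numpy as np
>>> np.iinfo(np.int_).max  # the largest value np.int_ can hold (here, 64-bit)
9223372036854775807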
Thus, they are different kinds of numbers; subclassing one under the other makes about as much sense as subclassing int under float would, which is presumably what the NumPy developers concluded. It is best to keep them separate, so that nobody is tempted to treat one as if it were the other.
The split is done because arbitrary-size integers are slow, while numpy tries to speed up computation by sticking to machine-friendly types.
On the other hand, floating point is the standard IEEE floating point, both in Python and in numpy, supported out-of-the-box by our processors.
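That relationship is visible directly in the class hierarchy (shown on Python 3, where the question's output was produced; on Python 2 the integer relationship was different):
>>> import numpy as np
>>> issubclass(np.float64, float)   # numpy's double inherits from Python's float
True
>>> issubclass(np.int_, int)        # the fixed-width integer scalar types do not
False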
Because numpy.int_() is a fixed 64-bit type, while a Python int can have arbitrary size: CPython stores it in 30-bit digits, so it grows by roughly 4 extra bytes for every additional 30 bits of magnitude. int64 has constant size:
>>> import numpy as np
>>> a = np.int_(0)
>>> type(a)
<type 'numpy.int64'>
>>> b = 0
>>> type(b)
<type 'int'>
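To see the size difference in practice (a sketch; exact byte counts vary by CPython version and platform, so only comparisons are shown):
>>> import sys
>>> import numpy as np
>>> sys.getsizeof(10**100) > sys.getsizeof(10)        # Python int grows with its value
True
>>> np.int_(10**9).nbytes == np.int_(0).nbytes == 8   # numpy int64 payload is always 8 bytes
True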
I am studying the Python built-in function int(x), which converts its argument to an integer. The documentation is at https://docs.python.org/2/library/functions.html#int.
In a code I found:
errors += int(update != 0.0)
This code simply increases or decreases an error variable.
What I see is a comparison expression being passed as the argument, even though int() doesn't appear to take Boolean values. How is this possible?
Consider two possibilities:
int(True) and int(False)
The first case will evaluate to 1 and the second to 0; hence, errors will increase by either 1 or 0.
Refer to the docs:
Boolean values are the two constant objects False and True. They are used to represent truth values (although other values can also be considered false or true). In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively. The built-in function bool() can be used to convert any value to a Boolean, if the value can be interpreted as a truth value (see section Truth Value Testing above).
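So in the code from the question, the comparison produces a bool that int() then maps to 0 or 1. A quick REPL sketch (update here is just an assumed example value):
>>> update = 0.5            # assumed example value, for illustration
>>> update != 0.0           # the comparison produces a bool...
True
>>> int(update != 0.0)      # ...which int() maps to 1 (True) or 0 (False)
1
>>> errors = 0
>>> errors += int(update != 0.0)
>>> errors
1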
The Python 3 documentation is ever so slightly more straightforward than the 2 documentation here, so I’ll quote it:
Return an integer object constructed from a number or string x, or return 0 if no arguments are given. If x is a number, return x.__int__().
So int accepts a string or a number. Booleans aren’t strings, but they are in fact numbers! Following the link to Numeric Types – int, float, complex explains that…
There are three distinct numeric types: integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of integers.
which you can confirm in the REPL:
>>> import numbers
>>> isinstance(False, int)
True
>>> isinstance(True, numbers.Number)
True
and by doing math with booleans, which act as the integer values 0 and 1 as one would expect:
>>> True * 5
5
>>> import math
>>> math.acos(False)
1.5707963267948966
Booleans are a subclass of integers in Python; internally, False is represented as 0 and True as 1.
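You can check both facts at the REPL:
>>> issubclass(bool, int)
True
>>> int(False), int(True)
(0, 1)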
I have some numbers like:
1.8816764231589208e-06 <type 'float'>
How can I convert this to
0.00000018816764231589208
while preserving all precision?
This is not an easy problem, because binary floating point numbers cannot always represent decimal fractions exactly, and the number you have chosen is one such.
Therefore, you need to know how many digits of precision you want. In your exact case, see what happens when I try to print it with various formats.
>>> x = 1.8816764231589208e-06
>>> for i in range(10, 30):
...     fmt = "{:.%df}" % i
...     print fmt, fmt.format(x)
...
{:.10f} 0.0000018817
{:.11f} 0.00000188168
{:.12f} 0.000001881676
{:.13f} 0.0000018816764
{:.14f} 0.00000188167642
{:.15f} 0.000001881676423
{:.16f} 0.0000018816764232
{:.17f} 0.00000188167642316
{:.18f} 0.000001881676423159
{:.19f} 0.0000018816764231589
{:.20f} 0.00000188167642315892
{:.21f} 0.000001881676423158921
{:.22f} 0.0000018816764231589208
{:.23f} 0.00000188167642315892079
{:.24f} 0.000001881676423158920791
{:.25f} 0.0000018816764231589207915
{:.26f} 0.00000188167642315892079146
{:.27f} 0.000001881676423158920791458
{:.28f} 0.0000018816764231589207914582
{:.29f} 0.00000188167642315892079145823
>>>
As you will observe, Python is happy to print many digits of precision, but the later ones are spurious: a standard Python float is stored in 64 bits, of which 52 explicitly hold the mantissa (53 counting the implicit leading bit), which works out to roughly 15-17 significant decimal digits.
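You can confirm those limits from sys.float_info (a quick check; the values are the same on any IEEE 754 double platform):
>>> import sys
>>> sys.float_info.mant_dig   # mantissa bits, counting the implicit leading bit
53
>>> sys.float_info.dig        # decimal digits guaranteed to survive a round trip
15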
The real lesson here is that Python has no way to exactly store 1.8816764231589208e-06 as a floating point number. This is not so much a language limitation as a representational limitation of the floating-point implementation.
The formatting shown above may, however, allow you to solve your problem.
The value you presented is not stored exactly, as Rory Daulton suggests in his comment. So your float 1.8816764231589208e-06 <type 'float'> could be explained by this example:
>>> from decimal import Decimal
>>> a = 1.8816764231589208e-06
>>> g = 0.0000018816764231589208 # g == a
>>> Decimal(a) # Creates a Decimal object with the passed float
Decimal('0.000001881676423158920791458225373060653140555587015114724636077880859375')
>>> Decimal('0.0000018816764231589208') # Exact value stored using str
Decimal('0.0000018816764231589208')
>>> Decimal(a) == Decimal('0.0000018816764231589208')
False # See the problem here? Your float did not
# represent the number you "saw"
>>> Decimal(a).__float__()
1.8816764231589208e-06
>>> Decimal(a).__float__() == a
True
If you want precise decimals, use Decimal or some other class that represents numbers exactly, rather than a binary representation such as float. Your 0.0000018816764231589208, once stored as a float, is actually the number shown by Decimal(a) above.
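For example (a minimal sketch using only the standard decimal module): building Decimal values from strings keeps exactly the decimal digits you typed, and arithmetic on them stays decimal:
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')   # exact decimal arithmetic
Decimal('0.3')
>>> 0.1 + 0.2                         # binary floats: the familiar surprise
0.30000000000000004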
This question is more out of curiosity.
I'm creating the following array:
from numpy import zeros

A = zeros((2,2))
for i in range(2):
    A[i,i] = 0.6
    A[(i+1)%2,i] = 0.4
print A
>>>
[[ 0.6 0.4]
[ 0.4 0.6]]
Then, printing it:
for i,c in enumerate(A):
    for j,d in enumerate(c):
        print j, d
which prints:
>>>
0 0.6
1 0.4
0 0.4
1 0.6
But if I remove the j from the for loop (so I'm printing the whole enumerate tuple), I get:
(0, 0.59999999999999998)
(1, 0.40000000000000002)
(0, 0.40000000000000002)
(1, 0.59999999999999998)
Is it because of the way I'm creating the matrix, using 0.6? How does Python represent real values internally?
There are a few different things going on here.
First, Python has two mechanisms for turning an object into a string, called repr and str. repr is supposed to give 'faithful' output that would (ideally) make it easy to recreate exactly that object, while str aims for more human-readable output. For floats in Python versions up to and including Python 3.1, repr gives enough digits to determine the value of the float completely (so that evaluating the returned string gives back exactly that float), while str rounds to 12 significant digits; this has the effect of hiding inaccuracies, but means that two distinct floats that are very close together can end up with the same str value - something that can't happen with repr. When you print an object, you get the str of that object. In contrast, when you just evaluate an expression at the interpreter prompt, you get the repr.
For example (here using Python 2.7):
>>> x = 1.0 / 7.0
>>> str(x)
'0.142857142857'
>>> repr(x)
'0.14285714285714285'
>>> print x # print uses 'str'
0.142857142857
>>> x # the interpreter read-eval-print loop uses 'repr'
0.14285714285714285
But also, a little bit confusingly from your point of view, we get:
>>> x = 0.4
>>> str(x)
'0.4'
>>> repr(x)
'0.4'
That doesn't seem to tie in too well with what you were seeing above, but we'll come back to this below.
The second thing to bear in mind is that in your first example, you're printing two separate items, while in your second example (with the j removed), you're printing a single item: a tuple of length 2. Somewhat surprisingly, when converting a tuple for printing with str, Python nevertheless uses repr to compute the string representation of the elements of that tuple:
>>> x = 1.0 / 7.0
>>> print x, x # print x twice; uses str(x)
0.142857142857 0.142857142857
>>> print(x, x) # print a single tuple; uses repr(x)
(0.14285714285714285, 0.14285714285714285)
That explains why you're seeing different results in the two cases, even though the underlying floats are the same.
But there's one last piece to the puzzle. In Python >= 2.7, we saw above that for the particular float 0.4, the str and repr of that float were the same. So where does the 0.40000000000000002 come from? Well, you don't have Python floats here: because you're getting these values from a NumPy array, they're actually of type numpy.float64:
>>> from numpy import zeros
>>> A = zeros((2, 2))
>>> A[:] = [[0.6, 0.4], [0.4, 0.6]]
>>> A
array([[ 0.6, 0.4],
[ 0.4, 0.6]])
>>> type(A[0, 0])
<type 'numpy.float64'>
That type still stores a double-precision float, just like Python's float, but it's got some extra goodies that make it interact nicely with the rest of NumPy. And it turns out that NumPy uses a slightly different algorithm for computing the repr of a numpy.float64 than Python uses for computing the repr of a float. Python (in versions >= 2.7) aims to give the shortest string that still gives an accurate representation of the float, while NumPy simply outputs a string based on rounding the underlying value to 17 significant digits. Going back to that 0.4 example above, here's what NumPy does:
>>> from numpy import float64
>>> x = float64(1.0 / 7.0)
>>> str(x)
'0.142857142857'
>>> repr(x)
'0.14285714285714285'
>>> x = float64(0.4)
>>> str(x)
'0.4'
>>> repr(x)
'0.40000000000000002'
So these three things together should explain the results you're seeing. Rest assured that this is all completely cosmetic: the underlying floating-point value is not being changed in any way; it's just being displayed differently by the four different possible combinations of str and repr for the two types: float and numpy.float64.
The Python tutorial gives more details of how Python floats are stored and displayed, together with some of the potential pitfalls. The answers to this SO question have more information on the difference between str and repr.
Edit:
Don't mind me, I failed to realise that the question was about NumPy.
The strange 0.59999999999999998 and friends are Python's best attempt to show exactly what the computer stores: floating point values are kept as a bunch of bits, according to the IEEE 754 standard. Notably, 0.1 has a non-terminating expansion in binary, and so cannot be stored exactly. (The same is true of 0.6 and 0.4.)
The reason you normally see 0.6 is that most floating-point printing functions round off these imprecisely-stored floats to make them more understandable to us humans. That's what your first printing example is doing.
Under some circumstances (that is, when the printing functions aren't aiming for human-readable output), the full, slightly-off number 0.59999999999999998 will be printed. That's what your second printing example is doing.
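If you are curious what is actually stored for a literal like 0.6, you can ask Python directly (a quick sketch; the hex form is the exact bit pattern of the IEEE 754 double):
>>> format(0.6, '.20f')   # the stored double, shown to 20 decimal places
'0.59999999999999997780'
>>> (0.6).hex()           # its exact binary representation, in hexadecimal
'0x1.3333333333333p-1'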
tl;dr
This is not Python's fault; it is just how floats are stored.
How do I see the type of a variable? (e.g. unsigned 32 bit)
Use the type() builtin function:
>>> i = 123
>>> type(i)
<type 'int'>
>>> type(i) is int
True
>>> i = 123.456
>>> type(i)
<type 'float'>
>>> type(i) is float
True
To check if a variable is of a given type, use isinstance:
>>> i = 123
>>> isinstance(i, int)
True
>>> isinstance(i, (float, str, set, dict))
False
Note that Python doesn't have the same primitive types as C/C++, which appears to be what your question is about.
You may be looking for the type() built-in function.
See the examples below, but note that there is no "unsigned" type in Python, just as there isn't in Java.
Positive integer:
>>> v = 10
>>> type(v)
<type 'int'>
Large positive integer:
>>> v = 100000000000000
>>> type(v)
<type 'long'>
Negative integer:
>>> v = -10
>>> type(v)
<type 'int'>
Literal sequence of characters:
>>> v = 'hi'
>>> type(v)
<type 'str'>
Floating point number:
>>> v = 3.14159
>>> type(v)
<type 'float'>
It is so simple. You do it like this.
print(type(variable_name))
How to determine the variable type in Python?
So if you have a variable, for example:
one = 1
You want to know its type?
There are right ways and wrong ways to do just about everything in Python. Here's the right way:
Use type
>>> type(one)
<type 'int'>
You can use the __name__ attribute to get the name of the object's class. (This is one of the few special attributes that you need to use the __dunder__ name to get to - there's not even a method for it in the inspect module.)
>>> type(one).__name__
'int'
Don't use __class__
In Python, names that start with underscores are semantically not a part of the public API, and it's a best practice for users to avoid using them. (Except when absolutely necessary.)
Since type already gives us the class of the object, we should avoid reaching for it directly via __class__:
>>> one.__class__
This is usually the first idea people have when accessing the type of an object in a method - they're already looking for attributes, so type seems weird. For example:
class Foo(object):
    def foo(self):
        self.__class__
Don't. Instead, do type(self):
class Foo(object):
    def foo(self):
        type(self)
Implementation details of ints and floats
How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
In Python, these specifics are implementation details. So, in general, we don't usually worry about this in Python. However, to sate your curiosity...
In Python 2, int is usually a signed integer equal to the implementation's word width (limited by the system). It's usually implemented as a long in C. When integers get bigger than this, we usually convert them to Python longs (with unlimited precision, not to be confused with C longs).
For example, in a 32 bit Python 2, we can deduce that int is a signed 32 bit integer:
>>> import sys
>>> format(sys.maxint, '032b')
'01111111111111111111111111111111'
>>> format(-sys.maxint - 1, '032b') # minimum value, see docs.
'-10000000000000000000000000000000'
In Python 3, the old int goes away, and we just use (Python's) long as int, which has unlimited precision.
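A quick Python 3 check of that:
>>> type(2**100)            # still a plain int; there is no separate long type
<class 'int'>
>>> (2**100).bit_length()   # far beyond any fixed machine width
101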
We can also get some information about Python's floats, which are usually implemented as a double in C:
>>> sys.float_info
sys.floatinfo(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308,
min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15,
mant_dig=53, epsilon=2.2204460492503131e-16, radix=2, rounds=1)
Conclusion
Don't use __class__, a semantically nonpublic API, to get the type of a variable. Use type instead.
And don't worry too much about the implementation details of Python. I've not had to deal with issues around this myself. You probably won't either, and if you really do, you should know enough not to be looking to this answer for what to do.
print type(variable_name)
I also highly recommend the IPython interactive interpreter when dealing with questions like this. It lets you type variable_name? and will return a whole list of information about the object including the type and the doc string for the type.
e.g.
In [9]: var = 123
In [10]: var?
Type: int
Base Class: <type 'int'>
String Form: 123
Namespace: Interactive
Docstring:
int(x[, base]) -> integer
Convert a string or number to an integer, if possible. A floating point argument will be truncated towards zero (this does not include a string representation of a floating point number!) When converting a string, use the optional base. It is an error to supply a base when converting a non-string. If the argument is outside the integer range a long object will be returned instead.
a = "cool"
type(a)
//result 'str'
<class 'str'>
or do
`dir(a)`
to see the list of built-in methods available on the variable.
One more way using __class__:
>>> a = [1, 2, 3, 4]
>>> a.__class__
<type 'list'>
>>> b = {'key1': 'val1'}
>>> b.__class__
<type 'dict'>
>>> c = 12
>>> c.__class__
<type 'int'>
Examples of simple type checking in Python:
assert type(variable_name) == int
assert type(variable_name) == bool
assert type(variable_name) == list
It may be a little off-topic, but you can check the type of an object with isinstance(object, type), as mentioned here.
The question is somewhat ambiguous -- I'm not sure what you mean by "view". If you are trying to query the type of a native Python object, #atzz's answer will steer you in the right direction.
However, if you are trying to generate Python objects that have the semantics of primitive C types (such as uint32_t, int16_t), use the struct module. You can determine the number of bytes in a given C primitive as follows:
>>> import struct
>>> struct.calcsize('c') # char
1
>>> struct.calcsize('h') # short
2
>>> struct.calcsize('i') # int
4
>>> struct.calcsize('l') # long
4
This is also reflected in the array module, which can make arrays of these lower-level types:
>>> import array
>>> array.array('c').itemsize # char
1
The maximum integer supported (Python 2's int) is given by sys.maxint.
>>> import sys, math
>>> math.ceil(math.log(sys.maxint, 2)) + 1 # Signedness
32.0
There is also sys.getsizeof, which returns the actual size of the Python object in memory, in bytes:
>>> a = 5
>>> sys.getsizeof(a) # size in bytes
12
For details about the float type and its precision, use sys.float_info:
>>> sys.float_info
sys.floatinfo(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.2204460492503131e-16, radix=2, rounds=1)
Do you mean in Python or using ctypes?
In the first case, you simply cannot - because Python does not have signed/unsigned, 16/32 bit integers.
In the second case, you can use type():
>>> import ctypes
>>> a = ctypes.c_uint() # unsigned int
>>> type(a)
<class 'ctypes.c_ulong'>
For more on ctypes and its types, see the official documentation.
Python doesn't have such types as you describe. There are two types used to represent integral values: int, which is a fixed-size machine integer (a C long in CPython), and long, which is an arbitrary-precision integer (i.e. it grows as needed and doesn't have an upper limit). ints are silently converted to long if an expression produces a result that cannot be stored in int.
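For example, in a Python 2 session (a small sketch of the promotion behaviour):
>>> import sys
>>> type(sys.maxint)       # the largest value an int can hold
<type 'int'>
>>> type(sys.maxint + 1)   # one more, and Python silently promotes to long
<type 'long'>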
Simple. For Python 3, use
print(type(variable_name))
For Python 2, use
print type(variable_name)
It really depends on what level you mean. In Python 2.x, there are two integer types, int (constrained to sys.maxint) and long (unlimited precision), for historical reasons. In Python code, this shouldn't make a bit of difference because the interpreter automatically converts to long when a number is too large. If you want to know about the actual data types used in the underlying interpreter, that's implementation dependent. (CPython's are located in Objects/intobject.c and Objects/longobject.c.) To find out about the system types, look at cdleary's answer on using the struct module.
For python2.x, use
print type(variable_name)
For python3.x, use
print(type(variable_name))
You should use the type() function. Like so:
my_variable = 5
print(type(my_variable)) # Would print out <class 'int'>
This function reports the type of any object, whether it's a list or an instance of a custom class. Check this website for more information: https://www.w3schools.com/python/ref_func_type.asp
Python is a dynamically typed language. A variable, initially created as a string, can be later reassigned to an integer or a float. And the interpreter won’t complain:
name = "AnyValue"
# Dynamically typed language lets you do this:
name = 21
name = None
name = Exception()
To check the type of a variable, you can use either the type() or isinstance() built-in function. Let's see them in action:
Python3 example:
variable = "hello_world"
print(type(variable) is str) # True
print(isinstance(variable, str)) # True
Let's compare the performance of both methods in Python 3:
python3 -m timeit -s "variable = 'hello_world'" "type(variable) is int"
5000000 loops, best of 5: 54.5 nsec per loop
python3 -m timeit -s "variable = 'hello_world'" "isinstance(variable, str)"
10000000 loops, best of 5: 39.2 nsec per loop
type is approximately 40% slower (54.5/39.2 ≈ 1.39).
We could use type(variable) == str instead. It would work, but it’s a bad idea:
== should be used when you want to check the value of a variable. We would use it to see if the value of the variable is equal to "hello_world". But when we want to check whether the variable is a string, the is operator is more appropriate. For a more detailed explanation of when to use one or the other, check this article.
== is slower: python3 -m timeit -s "variable = 'hello_world'" "type(variable) == str" 5000000 loops, best of 5: 64.4 nsec per loop
Difference between isinstance and type
Speed is not the only difference between these two functions. There is actually an important distinction between how they work:
type only returns the type of an object (its class). We can use it to check if the variable is of type str.
isinstance checks if a given object (first parameter) is:
an instance of a class specified as a second parameter. For example, is variable an instance of the str class?
or an instance of a subclass of a class specified as a second parameter. In other words - is variable an instance of a subclass of str?
What does it mean in practice? Let’s say we want to have a custom class that acts as a list but has some additional methods. So we might subclass the list type and add custom functions inside:
class MyAwesomeList(list):
    # Add additional functions here
    pass
But now the type and isinstance return different results if we compare this new class to a list!
my_list = MyAwesomeList()
print(type(my_list) is list) # False
print(isinstance(my_list, list)) # True
We get different results because isinstance checks whether my_list is an instance of list itself (it isn't, since its type is MyAwesomeList) or of a subclass of list (it is, because MyAwesomeList subclasses list), so isinstance returns True while type(my_list) is list returns False. If you forget about this difference, it can lead to some subtle bugs in your code.
Conclusions
isinstance is usually the preferred way to compare types. It’s not only faster but also considers inheritance, which is often the desired behavior. In Python, you usually want to check if a given object behaves like a string or a list, not necessarily if it’s exactly a string. So instead of checking for string and all its custom subclasses, you can just use isinstance.
On the other hand, when you want to explicitly check that a given variable is of a specific type (and not its subclass) - use type. And when you use it, use it like this: type(var) is some_type not like this: type(var) == some_type.
I saw this one when I was new to Python (I still am):
x = …
print(type(x))
There's no 32-bit, 64-bit, or 16-bit distinction to worry about; Python keeps it simple. See how to check the type:
integer = 1
print(type(integer)) # Result: <class 'int'>, and if it's a string then class will be str and so on.
# Checking the type
float_class = 1.3
print(isinstance(float_class, float)) # True
But if you really have to, you can use the ctypes library, which has types such as unsigned integers.
Ctypes types documentation
You can use it like this:
from ctypes import *
uint = c_uint(1) # Unsigned integer
print(uint) # Output: c_uint(1)
# To actually get the value, you have to call .value
print(uint.value)
# Change value
uint.value = 2
print(uint.value) # 2
There are many data types in python like:
Text Type: str
Numeric Types: int, float, complex
Sequence Types: list, tuple, range
Mapping Type: dict
Set Types: set, frozenset
Boolean Type: bool
Binary Types: bytes, bytearray, memoryview
None Type: NoneType
Here I have written code with a list containing an example of each data type, then printed the type of every element:
L = [
"Hello World",
20,
20.5,
1j,
["apple", "banana", "cherry"],
("apple", "banana", "cherry"),
range(6),
{"name" : "John", "age" : 36},
{"apple", "banana", "cherry"},
frozenset({"apple", "banana", "cherry"}),
True,
b"Hello",
bytearray(5),
memoryview(bytes(5)),
None
]
for item in L:
    print(type(item))
OUTPUT:
<class 'str'>
<class 'int'>
<class 'float'>
<class 'complex'>
<class 'list'>
<class 'tuple'>
<class 'range'>
<class 'dict'>
<class 'set'>
<class 'frozenset'>
<class 'bool'>
<class 'bytes'>
<class 'bytearray'>
<class 'memoryview'>
<class 'NoneType'>
Just do not do it. Asking for something's type is wrong in itself. Instead, use polymorphism. Find, or if necessary define yourself, a method that does what you want for every possible type of input, and just call it without asking about anything. If you need to work with built-in types or types defined by a third-party library, you can always inherit from them and use your own derivatives instead. Or you can wrap them inside your own class. This is the object-oriented way to resolve such problems.
If you insist on checking the exact type and scattering ifs here and there, you can use the __class__ property or the type function to do it, but soon you will find yourself updating all these ifs with additional cases every two or three commits. Doing it the OO way prevents that, letting you simply define a new class for a new type of input instead, as the sketch below shows.
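Here is a minimal sketch of that idea (the class and function names are made up for illustration): instead of branching on type, give each kind of input its own implementation of a shared method and just call it.
class JsonReport:
    def render(self):
        return '{"status": "ok"}'

class TextReport:
    def render(self):
        return "status: ok"

def publish(report):
    # No type() or isinstance() checks: anything with a render() method works,
    # including subclasses and wrappers added later.
    print(report.render())

publish(JsonReport())   # {"status": "ok"}
publish(TextReport())   # status: ok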