numpy.power(x1, x2) is not equal to x1**x2 [duplicate] - python

In [25]: np.power(10,-100)
Out[25]: 0
In [26]: math.pow(10,-100)
Out[26]: 1e-100
I would expect both the commands to return 1e-100. This is not a precision issue either, since the issue persists even after increasing precision to 500. Is there some setting which I can change to get the correct answer?

Oh, it's much "worse" than that:
In [2]: numpy.power(10,-1)
Out[2]: 0
But this is a hint to what's going on: 10 is an integer, and numpy.power doesn't coerce the numbers to floats. But this works:
In [3]: numpy.power(10.,-1)
Out[3]: 0.10000000000000001
In [4]: numpy.power(10.,-100)
Out[4]: 1e-100
Note, however, that the power operator, **, does convert to float:
In [5]: 10**-1
Out[5]: 0.1

The numpy method assumes you want an integer returned since you supplied an integer.
np.power(10.0,-100)
works as you would expect.

(Just a footnote to the two other answers on this page.)
Given two input values, you can check the datatype of the object that np.power will return by inspecting the types attribute:
>>> np.power.types
['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q',
'QQ->Q', 'ee->e', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O']
Python-compatible integer types are denoted by l, Python-compatible floats by d (see the docs).
np.power effectively decides what to return by checking the types of the arguments passed and using the first matching signature from this list.
So given 10 and -100, np.power matches the integer integer -> integer signature and returns the integer 0.
On the other hand, if one of the arguments is a float then the integer argument will also be cast to a float, and the float float -> float signature is used (and the correct float value is returned).
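A short sketch of the signature matching (assuming a reasonably recent NumPy; note that newer versions raise an error for integers raised to negative integer powers instead of silently returning 0):

```python
import numpy as np

# Integer inputs match the integer -> integer loop, so the result is an integer.
print(np.power(10, 2).dtype)     # a signed integer dtype

# A float input forces the float -> float loop instead.
print(np.power(10.0, -1))        # 0.1

# One workaround: ufuncs accept a dtype= argument, which casts the inputs
# so the float loop is selected even when both arguments are integers.
print(np.power(10, -100, dtype=np.float64))   # 1e-100
```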

Related

Conditional inside int(x) method even though it doesn't take Boolean values

I am studying the Python built-in method int(x), which casts a variable to int. The documentation is at https://docs.python.org/2/library/functions.html#int.
In a code I found:
errors += int(update != 0.0)
This code simply increases or decreases an error variable.
What I see is a conditional used as an argument, even though the method doesn't take Boolean values. How is this possible?
Consider two possibilities:
int(True) and int(False)
The first case will evaluate to 1 and the second to 0;
hence, errors will either increase by 1 or stay the same.
Refer to the docs:
Boolean values are the two constant objects False and True. They are used to represent truth values (although other values can also be considered false or true). In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively. The built-in function bool() can be used to convert any value to a Boolean, if the value can be interpreted as a truth value (see section Truth Value Testing above).
The Python 3 documentation is ever so slightly more straightforward than the 2 documentation here, so I’ll quote it:
Return an integer object constructed from a number or string x, or return 0 if no arguments are given. If x is a number, return x.__int__().
So int accepts a string or a number. Booleans aren’t strings, but they are in fact numbers! Following the link to Numeric Types – int, float, complex explains that…
There are three distinct numeric types: integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of integers.
which you can confirm in the REPL:
>>> import numbers
>>> isinstance(False, int)
True
>>> isinstance(True, numbers.Number)
True
and by doing math with booleans, which act as the integer values 0 and 1 as one would expect:
>>> True * 5
5
>>> import math
>>> math.acos(False)
1.5707963267948966
Booleans are a subclass of integers in Python; internally, False is represented as 0 and True as 1.
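Since booleans behave as 0 and 1 in arithmetic, the errors += int(update != 0.0) pattern from the question amounts to counting. A minimal sketch (the predictions list is made-up example data):

```python
# Each True contributes 1 and each False contributes 0, because bool is a
# subclass of int in Python.
predictions = [0.0, 1.5, 0.0, -2.0, 0.0]  # hypothetical data

errors = 0
for update in predictions:
    errors += int(update != 0.0)  # the pattern from the question

# Equivalent one-liner: sum() adds the booleans directly as 0s and 1s.
assert errors == sum(p != 0.0 for p in predictions) == 2
print(errors)  # 2
```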

What is the default value that Python chooses when adding two variables?

Suppose we have the following context:
x = 3
y = 4
z = x + y
Will z be an int or float, etc.? I am well aware that floating-point numbers end with .something; however, it's unclear to me whether Python will favour the floating-point type over the integer type, given that it is unpredictable whether the user will change this variable to another type or not.
When mixing types, you'll always get the 'wider' type, where a complex number is wider than float, which in turn is wider than an integer. When both values are the same type (so not mixing types), you will just get that one type as the result.
From the Numeric Types documentation:
Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the “narrower” type is widened to that of the other, where plain integer is narrower than long integer is narrower than floating point is narrower than complex. Comparisons between numbers of mixed type use the same rule.
So when you are summing numbers of different types, Python will widen the narrower type to be the same as the wider type. For an operation with an integer and a floating point number, you'll get a float, as float is wider.
However, there is no point in changing the type if both operands are the same type. Changing the type in that case would be very surprising.
For example:
>>> type(3 + 4)
<class 'int'>
>>> type(3.0 + 4)
<class 'float'>
>>> type(3.0 + 4.0)
<class 'float'>
If the difference between a float or integer is important to your application, explicitly convert the user input into a float so you know what you are dealing with every time.
user_input = float(input())
This is easy to test. By default, if you perform arithmetic on two integers, Python will store the result as an integer. You can see that here.
>>> x = 1
>>> y = 2
>>> z = x + y
>>> type(z)
<class 'int'>
The type function will tell you what the 'type' of an object is in Python. Here we can see that the type of the z variable is int, which stands for integer.
If we were to perform arithmetic on two different types, say integer and float, and the result was a decimal, then the result will be stored as a float.
>>> x = 2.55
>>> y = 3
>>> z = x / y
>>> type(z)
<class 'float'>
Now, if you were to perform arithmetic on an integer and a float and the result were to come out as a non-decimal, Python will STILL store this as a float. You can see that here.
>>> x = 2.5
>>> y = 1.25
>>> z = x / y
>>> type(z)
<class 'float'>
>>> print(z)
2.0
In conclusion, when you do arithmetic, Python will store the result as an integer only if both numbers involved were integers (and the operator wasn't /, which always produces a float). If one was a float, then the new variable will also be of float type (or class).
Hope this helps :)
The mixing rules Martijn described are a good way to think about built in numeric types. More broadly, any object can support arithmetic operators such as + and * as long as the methods are implemented. With objects a and b when you apply an arithmetic operation like a + b python will call a.__add__(b). If that function returns NotImplemented then python will fall back on b.__radd__(a). Using this approach it is possible for only one of the two objects to know how to work with the other, and choose to act like the wider type.
For example, the work for 4.0 + 3 is done by the float, and 3 + 4.0 is also computed by the float. To make your own numeric type that is in some sense wider than a built-in, all you need to do is implement your own __add__ and __radd__ methods that return your wider type. This relies on the builtins returning NotImplemented for an object they don't recognize.
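As a sketch of this mechanism, here is a hypothetical Wide type that always "wins" against the built-ins, precisely because int.__add__ and float.__add__ return NotImplemented for a type they don't know:

```python
class Wide:
    """A made-up numeric type that is 'wider' than the built-ins."""

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Handles Wide + Wide, Wide + int and Wide + float.
        other_value = other.value if isinstance(other, Wide) else other
        return Wide(self.value + other_value)

    # int + Wide falls back to Wide.__radd__ because int.__add__ returns
    # NotImplemented; addition is commutative here, so reuse __add__.
    __radd__ = __add__

    def __repr__(self):
        return f"Wide({self.value})"

print(Wide(4.0) + 3)  # Wide(7.0)
print(3 + Wide(4.0))  # Wide(7.0) - the int defers to our __radd__
```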

why python int() works like this? [duplicate]

This question already has answers here:
Negative integer division surprising result
(5 answers)
Closed 8 years ago.
Just randomly tried out this:
>>> int(-1/2)
-1
>>> int(-0.5)
0
why the results are different??
Try this:
>>> -1/2
-1
>>> -0.5
-0.5
The difference is that in Python 2, integer division (the former) results in an integer, instead of a float like the second number is. You're using int on two different numbers, so you'll get different results. If you specify floats first, you'll see the difference disappear.
>>> -1.0/2.0
-0.5
>>> int(-1.0/2.0)
0
>>> int(-0.5)
0
The difference you see is due to how rounding works in Python. The int() function truncates to zero, as noted in the docs:
If x is floating point, the conversion truncates towards zero.
On the other hand, when both operands are integers, the / acts as though a mathematical floor operation was applied:
Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result.
So, in your first case, -1 / 2 results, theoretically, in -0.5, but because both operands are ints, Python essentially floors the result, which makes it -1. int(-1) is -1, of course. In your second example, int is applied directly to -0.5, a float, and so int truncates towards 0, resulting in 0.
(This is true of Python 2.x, which I suspect you are using.)
This is a result of two things:
Python 2.x does integer division when you divide two integers;
Python uses "Floored" division for negative numbers.
Negative integer division surprising result
http://python-history.blogspot.com/2010/08/why-pythons-integer-division-floors.html
Force at least one number to float, and the results will no longer surprise you.
assert int(-1.0/2) == 0
As others have noted, in Python 3.x the / operator always performs true division and returns a float, even when both operands are integers.
As TheSoundDefense mentioned, it depends upon the version. In Python 3.3.2:
>>> int(-1/2)
0
>>> int(-0.5)
0
The int() function truncates towards 0, unlike floor(), which rounds downwards to the next integer.
So int(-0.5) is clearly 0.
As for -1/2: in Python 2, -1/2 actually equals -1! Rounding downwards to the next integer gives -1. In Python 2, -a/b != -(a/b); in fact, -1/2 equals floor(-1.0 / 2.0), which is -1.
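In Python 3 the distinction is easy to see side by side: floor division (//) rounds toward negative infinity, while int() and math.trunc() round toward zero:

```python
import math

# Floor division rounds toward negative infinity.
assert -1 // 2 == -1            # floor(-0.5) == -1

# In Python 3, / is true division, so int() then truncates toward zero.
assert int(-1 / 2) == 0
assert int(-0.5) == 0

# math.floor and math.trunc make the two rounding modes explicit.
assert math.floor(-0.5) == -1
assert math.trunc(-0.5) == 0
print("all rounding checks passed")
```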

Formatting floats in a numpy array [duplicate]

This question already has answers here:
Pretty-print a NumPy array without scientific notation and with given precision
(14 answers)
Closed 3 years ago.
If I have a numpy array like this:
[2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01]
how can I move the decimal point and format the numbers so I end up with a numpy array like this:
[21.53, 8.13, 3.97, 10.08]
np.around(a, decimals=2) only gives me [2.15300000e+01, 8.13000000e+00, 3.97000000e+00, 1.00800000e+01] Which I don't want and I haven't found another way to do it.
In order to make numpy display float arrays in an arbitrary format, you can define a custom function that takes a float value as its input and returns a formatted string:
In [1]: float_formatter = "{:.2f}".format
The f here means fixed-point format (not 'scientific'), and the .2 means two decimal places (you can read more about string formatting here).
Let's test it out with a float value:
In [2]: float_formatter(1.234567E3)
Out[2]: '1234.57'
To make numpy print all float arrays this way, you can pass the formatter= argument to np.set_printoptions:
In [3]: np.set_printoptions(formatter={'float_kind':float_formatter})
Now numpy will print all float arrays this way:
In [4]: np.random.randn(5) * 10
Out[4]: array([5.25, 3.91, 0.04, -1.53, 6.68])
Note that this only affects numpy arrays, not scalars:
In [5]: np.pi
Out[5]: 3.141592653589793
It also won't affect non-floats, complex values, etc. - you will need to define separate formatters for other scalar types.
You should also be aware that this only affects how numpy displays float values - the actual values that will be used in computations will retain their original precision.
For example:
In [6]: a = np.array([1E-9])
In [7]: a
Out[7]: array([0.00])
In [8]: a == 0
Out[8]: array([False], dtype=bool)
numpy prints a as if it were equal to 0, but it is not - it still equals 1E-9.
If you actually want to round the values in your array in a way that affects how they will be used in calculations, you should use np.round, as others have already pointed out.
You can use the round function. Here is an example:
numpy.round([2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01],2)
array([ 21.53, 8.13, 3.97, 10.08])
If you want to change just the display representation, I would not recommend altering the printing format globally, as suggested above. I would format my output in place.
>>> a = np.array([2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01])
>>> print([ "{:0.2f}".format(x) for x in a ])
['21.53', '8.13', '3.97', '10.08']
You're confusing actual precision and display precision. Most decimal fractions cannot be represented exactly in binary. You should try:
>>> np.set_printoptions(precision=2)
>>> np.array([5.333333])
array([ 5.33])
[ round(x,2) for x in [2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01]]
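Another way to format in place without touching the global print options is np.array2string, which accepts the same formatter= dictionary as np.set_printoptions (a sketch):

```python
import numpy as np

a = np.array([2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01])

# Format this one array only; global print options are left untouched.
s = np.array2string(a, formatter={'float_kind': "{:.2f}".format})
print(s)
```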

How to get the range of valid Numpy data types?

I'm interested in finding for a particular Numpy type (e.g. np.int64, np.uint32, np.float32, etc.) what the range of all possible valid values is (e.g. np.int32 can store numbers up to 2**31-1). Of course, I guess one can theoretically figure this out for each type, but is there a way to do this at run time to ensure more portable code?
Quoting from a numpy discussion list:
That information is available via numpy.finfo() and numpy.iinfo():
In [12]: finfo('d').max
Out[12]: 1.7976931348623157e+308
In [13]: iinfo('i').max
Out[13]: 2147483647
In [14]: iinfo('uint8').max
Out[14]: 255
Link here.
You can use numpy.iinfo(arg).max to find the max value for integer types of arg, and numpy.finfo(arg).max to find the max value for float types of arg.
>>> numpy.iinfo(numpy.uint64).min
0
>>> numpy.iinfo(numpy.uint64).max
18446744073709551615L
>>> numpy.finfo(numpy.float64).max
1.7976931348623157e+308
>>> numpy.finfo(numpy.float64).min
-1.7976931348623157e+308
iinfo only offers min and max, but finfo also offers useful values such as eps (the smallest representable positive number such that 1.0 + eps != 1.0) and resolution (the approximate decimal resolution of the type of arg).
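Putting the two together, a small sketch that queries the ranges for a handful of dtypes at run time:

```python
import numpy as np

# Integer dtypes: iinfo gives exact min/max bounds.
for dtype in (np.int32, np.int64, np.uint32):
    info = np.iinfo(dtype)
    print(dtype.__name__, info.min, info.max)

# Float dtypes: finfo also exposes eps, the gap between 1.0 and the next float.
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, info.min, info.max, info.eps)
```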
