Why is NumPy casting objects to floats? - python

I'm trying to store intervals (with their specific arithmetic) in NumPy arrays. If I use my own Interval class it works, but my class is very poor and my Python knowledge is limited.
I know about pyInterval, which is very complete and covers my needs. The only thing that isn't working is storing pyInterval objects in NumPy arrays.
class Interval(object):
    def __init__(self, lower, upper=None):
        if upper is None:
            self.upper = self.lower = lower
        elif lower <= upper:
            self.lower = lower
            self.upper = upper
        else:
            raise ValueError(f"Lower is bigger than upper! {lower},{upper}")

    def __repr__(self):
        return "Interval " + str((self.lower, self.upper))

    def __mul__(self, another):
        values = (self.lower * another.lower,
                  self.upper * another.upper,
                  self.lower * another.upper,
                  self.upper * another.lower)
        return Interval(min(values), max(values))
import numpy as np
from interval import interval
i = np.array([Interval(2,3), Interval(-3,6)], dtype=object) # My class
ix = np.array([interval([2,3]), interval([-3,6])], dtype=object) # pyInterval
These are the results
In [30]: i
Out[30]: array([Interval (2, 3), Interval (-3, 6)], dtype=object)
In [31]: ix
Out[31]:
array([[[2.0, 3.0]],
       [[-3.0, 6.0]]], dtype=object)
The intervals from pyInterval have been cast into lists of lists of floats. That wouldn't be a problem if they preserved interval arithmetic...
In [33]: i[0] * i[1]
Out[33]: Interval (-9, 18)
In [34]: ix[0] * ix[1]
Out[34]: array([[-6.0, 18.0]], dtype=object)
Out[33] is the desired output. The output using pyInterval is incorrect. Obviously, using raw pyInterval it works like a charm:
In [35]: interval([2,3]) * interval([-3,6])
Out[35]: interval([-9.0, 18.0])
Here is the pyInterval source code. I don't understand why NumPy doesn't work as I expect when it holds these objects.

To be fair, it is really hard for the numpy.ndarray constructor to infer what kind of data should go into it. It receives objects which resemble lists of tuples and makes do with it.
You can, however, help your constructor a bit by not having it guess the shape of your data:
a = interval([2,3])
b = interval([-3,6])
ll = [a, b]
ix = np.empty((len(ll),), dtype=object)
ix[:] = [*ll]
ix[0] * ix[1]  # interval([-9.0, 18.0])

NumPy sees each interval as an array of two numbers, and it does elementwise multiplication, which you don't want. Try this:
interval.__mul__(ix[0], ix[1])
That is a direct invocation of the function you want to call. It should give you the answer you need, even if it is not very pretty. To turn it into something that works on whole arrays, you can do this:
itvmul = np.vectorize(interval.__mul__)
That will allow you to do elementwise multiplication of arrays of intervals: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.vectorize.html
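For instance, combined with the object-array construction from the previous answer, usage might look like this (a sketch; passing otypes=[object] is a defensive assumption to keep the results wrapped as interval objects instead of being expanded back into nested lists):

import numpy as np
from interval import interval

# Elementwise interval multiplication over object arrays
itvmul = np.vectorize(interval.__mul__, otypes=[object])

ix = np.empty((2,), dtype=object)
ix[:] = [interval([2, 3]), interval([-3, 6])]
print(itvmul(ix, ix))  # expected: [interval([4.0, 9.0]) interval([-18.0, 36.0])]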

Related

Algorithm for tensordot implemented in numba is much slower than numpy's

I am trying to expand numpy's "tensordot" such that things like: K_ijklm = A_ki * B_jml can be written in a clear way like this: K = mytensordot(A,B,[2,0],[1,4,3])
To my understanding, numpy's tensordot (with optional argument 0) would be able to do something like this: K_kijml = A_ki * B_jml, i.e. keeping the order of the indexes. Therefore I would then have to do a number of np.swapaxes() to obtain the matrix K_ijklm, which in a complicated case can be an easy source of errors (potentially very hard to debug).
The problem is that my implementation is slow (10x slower than tensordot [EDIT: It is actually MUCH slower than that]), even when using numba. I was wondering if anyone would have some insight on what could be done to improve the performance of my algorithm.
MWE
import numpy as np
import numba as nb
import itertools
import timeit

@nb.jit()
def myproduct(dimN):
    # Build an (N, L) table of all multi-indexes for shape dimN,
    # counting odometer-style (like itertools.product over ranges)
    N = np.prod(dimN)
    L = len(dimN)
    Product = np.zeros((N, L), dtype=np.int32)
    rn = 0
    for n in range(1, N):
        for l in range(L):
            if l == 0:
                rn = 1
            v = Product[n-1, L-1-l] + rn
            rn = 0
            if v == dimN[L-1-l]:
                v = 0
                rn = 1
            Product[n, L-1-l] = v
    return Product
@nb.jit()
def mytensordot(A, B, iA, iB):
    iA, iB = np.array(iA, dtype=np.int32), np.array(iB, dtype=np.int32)
    dimA, dimB = A.shape, B.shape
    NdimA, NdimB = len(dimA), len(dimB)
    if len(iA) != NdimA: raise ValueError("iA must be same size as dim A")
    if len(iB) != NdimB: raise ValueError("iB must be same size as dim B")
    NdimN = NdimA + NdimB
    dimN = np.zeros(NdimN, dtype=np.int32)
    dimN[iA] = dimA
    dimN[iB] = dimB
    Out = np.zeros(dimN)
    indexes = myproduct(dimN)
    for nidxs in indexes:
        idxA = tuple(nidxs[iA])
        idxB = tuple(nidxs[iB])
        v = A[idxA] * B[idxB]
        Out[tuple(nidxs)] = v
    return Out
A = np.random.random((4,5,3))
B = np.random.random((6,4))

def runmytdot():
    return mytensordot(A, B, [0,2,3], [1,4])
def runtensdot():
    return np.tensordot(A, B, 0).swapaxes(1,3).swapaxes(2,3)

print(np.all(runmytdot() == runtensdot()))
print(timeit.timeit(runmytdot, number=100))
print(timeit.timeit(runtensdot, number=100))
Result:
True
1.4962144780438393
0.003484356915578246
You have run into a known issue. numpy.zeros requires a tuple when creating a multidimensional array. If you pass something other than a tuple, it sometimes works, but that's only because numpy is smart about converting the object into a tuple first.
The trouble is that numba does not currently support conversion of arbitrary iterables into tuples. So this line fails when you try to compile it in nopython=True mode. (A couple of others fail too, but this is the first.)
Out=np.zeros(dimN)
In theory you could call np.prod(dimN), create a flat array of zeros, and reshape it, but then you run into the very same problem: the reshape method of numpy arrays requires a tuple!
This is quite a vexing problem with numba -- I had not encountered it before. I really doubt the solution I have found is the correct one, but it is a working solution that allows us to compile a version in nopython=True mode.
The core idea is to avoid using tuples for indexing by directly implementing an indexer that follows the strides of the array:
@nb.jit(nopython=True)
def index_arr(a, ix_arr):
    strides = np.array(a.strides) / a.itemsize
    ix = int((ix_arr * strides).sum())
    return a.ravel()[ix]

@nb.jit(nopython=True)
def index_set_arr(a, ix_arr, val):
    strides = np.array(a.strides) / a.itemsize
    ix = int((ix_arr * strides).sum())
    a.ravel()[ix] = val
This allows us to get and set values without needing a tuple.
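As a quick sanity check of the indexer (a sketch, run outside of the tensordot machinery):

a = np.arange(12).reshape(3, 4)
ix = np.array([1, 2])
print(index_arr(a, ix))  # 6, the same as a[1, 2]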
We can also avoid using reshape by passing the output buffer into the jitted function, and wrapping that function in a helper:
@nb.jit()  # We can't use nopython mode here...
def mytensordot(A, B, iA, iB):
    iA, iB = np.array(iA, dtype=np.int32), np.array(iB, dtype=np.int32)
    dimA, dimB = A.shape, B.shape
    NdimA, NdimB = len(dimA), len(dimB)
    if len(iA) != NdimA:
        raise ValueError("iA must be same size as dim A")
    if len(iB) != NdimB:
        raise ValueError("iB must be same size as dim B")
    NdimN = NdimA + NdimB
    dimN = np.zeros(NdimN, dtype=np.int32)
    dimN[iA] = dimA
    dimN[iB] = dimB
    Out = np.zeros(dimN)
    return mytensordot_jit(A, B, iA, iB, dimN, Out)
The wrapper falls back to object mode, which adds some overhead, but since it contains no loops, the overhead is pretty trivial. Here's the final jitted function:
@nb.jit(nopython=True)
def mytensordot_jit(A, B, iA, iB, dimN, Out):
    for i in range(np.prod(dimN)):
        nidxs = int_to_idx(i, dimN)
        a = index_arr(A, nidxs[iA])
        b = index_arr(B, nidxs[iB])
        index_set_arr(Out, nidxs, a * b)
    return Out
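(The helper int_to_idx is referenced above but not shown; a minimal sketch, assuming it unravels a flat index into a C-order multi-index, would be:)

@nb.jit(nopython=True)
def int_to_idx(i, dimN):
    # Unravel flat index i into a multi-index over shape dimN (C order)
    nidxs = np.zeros(len(dimN), dtype=np.int32)
    for k in range(len(dimN) - 1, -1, -1):
        nidxs[k] = i % dimN[k]
        i = i // dimN[k]
    return nidxs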
Unfortunately, this does not wind up generating as much of a speedup as we might like. On smaller arrays it's about 5x slower than tensordot; on larger arrays it's still 50x slower. (But at least it's not 1000x slower!) This is not too surprising in retrospect, since dot and tensordot are both using BLAS under the hood, as @hpaulj reminds us.
After finishing this code, I saw that einsum has solved your real problem -- nice!
But the underlying issue that your original question points to -- that indexing with arbitrary-length tuples is not possible in jitted code -- is still a frustration. So hopefully this will be useful to someone else!
tensordot with scalar axes values can be obscure. I explored it in
How does numpy.tensordot function works step-by-step?
There I deduced that np.tensordot(A, B, axes=0) is equivalent to using axes=[[], []].
In [757]: A = np.random.random((4,5,3))
     ...: B = np.random.random((6,4))
In [758]: np.tensordot(A,B,0).shape
Out[758]: (4, 5, 3, 6, 4)
In [759]: np.tensordot(A,B,[[],[]]).shape
Out[759]: (4, 5, 3, 6, 4)
That in turn is equivalent to calling dot with a new size-1 sum-of-products dimension:
In [762]: np.dot(A[...,None],B[...,None,:]).shape
Out[762]: (4, 5, 3, 6, 4)
(4,5,3,1) * (6,1,4) # the 1 is the last of A and 2nd to the last of B
dot is fast, using BLAS (or equivalent) code. Swapping axes and reshaping is also relatively fast.
einsum gives us a lot of control over axes
replicating the above products:
In [768]: np.einsum('jml,ki->jmlki',A,B).shape
Out[768]: (4, 5, 3, 6, 4)
and with swapping:
In [769]: np.einsum('jml,ki->ijklm',A,B).shape
Out[769]: (4, 4, 6, 3, 5)
A minor point - the double swap can be written as one transpose:
.swapaxes(1,3).swapaxes(2,3)
.transpose(0,3,1,2,4)
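Applied to the question's MWE, the whole runtensdot (tensordot plus two swapaxes) collapses into a single einsum call; a sketch, reusing the A and B above:

import numpy as np

A = np.random.random((4,5,3))
B = np.random.random((6,4))

# A's axes land at output positions 0,2,3 and B's at 1,4,
# matching mytensordot(A, B, [0,2,3], [1,4])
K = np.einsum('abc,de->adbce', A, B)
print(K.shape)  # (4, 6, 5, 3, 4)
print(np.allclose(K, np.tensordot(A, B, 0).swapaxes(1,3).swapaxes(2,3)))  # True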

How to find negative imaginary parts of values in an array and turn them positive?

I have a function a = x*V where x assumes thousands of values as x = arange(1,1000,0.1) and V is a combination of other constants. These make a always complex (nonzero real and imaginary parts). However, because a depends on other values, imag(a) can be negative for some x's.
For what I am doing, however, I need imag(a) to be always positive, so I need to take the negative values and turn them into positive.
I have tried doing
if imag(a) < 0:
    imag(a) = -1*imag(a)
That didn't seem to work because it gives me the error: SyntaxError: Can't assign to function call. I thought it was because it's an array so I tried any() and all(), but that didn't work either.
I'm out of options now.
IIUC:
In [35]: a = np.array([1+1j, 2-2j, 3+3j, 4-4j])
In [36]: a.imag *= np.where(a.imag < 0, -1, 1)
In [37]: a
Out[37]: array([ 1.+1.j, 2.+2.j, 3.+3.j, 4.+4.j])
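An equivalent and arguably simpler spelling of the same idea (a sketch; ndarray.imag is writable for complex arrays):

a = np.array([1+1j, 2-2j, 3+3j, 4-4j])
a.imag = np.abs(a.imag)  # flips negative imaginary parts in place
print(a)                 # [1.+1.j 2.+2.j 3.+3.j 4.+4.j]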
You can't redefine a function that way. It would be like saying
sqrt(x) = 2*sqrt(x)
What you can do is reassign the value of a (not imag(a)).
if imag(a) < 0:
    a = a - 2*imag(a)*1j
For example, if a = 3 - 5j, then it would give you
3 - 5j - 2*(-5)*1j = 3 + 5j
Using np.conj appears to be faster than doing the subtraction. As a full function:
import numpy as np

def imag_abs(x):
    mask = x.imag < 0
    x[mask] = np.conj(x[mask])
    return x
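A quick usage sketch (note that imag_abs modifies its argument in place):

a = np.array([1+1j, 2-2j, 3+3j, 4-4j])
print(imag_abs(a))  # [1.+1.j 2.+2.j 3.+3.j 4.+4.j]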

Reduce python loop to array calculation

I am trying to fill an array with calculated values from functions defined earlier in my code. I started with a code that has a similar structure to the following:
from numpy import cos, sin, arange, zeros

a = arange(1000)
b = arange(1000)

def defcos(x):
    return cos(x)
def defsin(x):
    return sin(x)

a_len = len(a)
b_len = len(b)
result = zeros((a_len, b_len))
for i in xrange(b_len):
    for j in xrange(a_len):
        a_res = defcos(a[j])
        b_res = defsin(b[i])
        result[i,j] = a_res * b_res
I tried to use array representations of the functions, which resulted in the following change to the loop:
a_res = defsin(a)
b_res = defcos(b)
for i in xrange(b_len):
    for j in xrange(a_len):
        result[i,j] = a_res[i] * b_res[j]
This is already significantly faster than the first version. But is there a way to avoid the loop entirely? I have encountered those loops a couple of times in the past but never bothered, as it was not critical in terms of speed. But this time it is the core component of something which is looped through a couple of times more. :)
Any help would be appreciated, thanks in advance!
Like so:
from numpy import newaxis
a_res = sin(a)
b_res = cos(b)
result = a_res[:, newaxis] * b_res
To understand how this works, have a look at the rules for array broadcasting. And please don't define useless functions like defsin, just use sin itself! Another minor detail: you get i from xrange(b_len), but you use it to index a_res! This is a bug if a_len != b_len.
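Equivalently, since this is just an outer product, np.outer gives the same result; a sketch (after fixing the index mix-up noted above, i.e. computing sin over a and cos over b):

from numpy import sin, cos, arange, outer

a = arange(1000)
b = arange(1000)
result = outer(sin(a), cos(b))  # result[i, j] == sin(a[i]) * cos(b[j])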

numpy array to list conversion issue

For some reason, evalRow(list(array([0, 1, 0, 0, 0]))) and evalRow([0, 1, 0, 0, 0]) give different results. However, if I use magicConvert (defined here to debug this) instead of list to go from numpy array to list, it works as expected:
def magicConvert(a):
    ss = str(list(a))[1:-1]
    return map(int, ss.split(","))

# You don't actually need to read these functions, they're just here to reproduce the error:
from itertools import *

def evalRow(r):
    grouped = map(
        lambda (v, l): (v, len(tuple(l))),
        groupby(chain([2], r, [2])))
    result = 0
    for player in (1, -1):
        for (pre, mid, post) in allTuples(grouped, 3):
            if mid[0] == player:
                result += player * streakScore(mid[1], (pre[0] == 0) + (post[0] == 0))
    return result

def streakScore(size, blanks):
    return 0 if blanks == 0 else (
        100 ** (size - 1) * (1 if blanks == 1 else 10))

def allTuples(l, size):
    return map(lambda i: l[i : i + size], xrange(len(l) - size + 1))
The difference in behaviour is due to the fact that list(some_array) returns a list of numpy.int64, while doing the conversion via the string representation (or, equivalently, using the tolist() method) returns a list of Python's ints:
In [21]: import numpy as np
In [22]: ar = np.array([1,2,3])
In [23]: list(ar)
Out[23]: [1, 2, 3]
In [24]: type(list(ar)[0])
Out[24]: numpy.int64
In [25]: type(ar.tolist()[0])
Out[25]: builtins.int
I believe the culprit is the 100 ** (size - 1) part of your code:
In [26]: 100 ** (np.int64(50) - 1)
Out[26]: 0
In [27]: 100 ** (50 - 1)
Out[27]: 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
In [28]: type(100 ** (np.int64(50) - 1))
Out[28]: numpy.int64
What you see is the int64 overflowing, hence the results of the exponentiation are essentially "random", while Python's ints have unlimited range and give the correct result.
To summarize:
If you want to convert between numpy and python data types, use the proper methods, in this case array.tolist().
Remember that NumPy's data types have limited range, hence you should check for overflows and expect strange results in other situations (see the sketch after this list). If you do not use the proper methods for conversion, you might end up using numpy data types when you didn't expect it (as in this case).
Never assume it's a bug in python/numpy/a very widely used library. The chance of finding a bug in such trivial cases in such well-tested and widely used software is really small. If the program gives unexpected results, 99.999% of the time it's because you are doing something wrong. So, before blaming others, try to check step by step what your program is doing.
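For instance, a minimal guard against the overflow (a hypothetical variant of the question's streakScore, not part of the original code) could cast to a Python int before exponentiating:

def streak_score_safe(size, blanks):
    # int() promotes numpy.int64 to Python's unlimited-precision int
    size = int(size)
    return 0 if blanks == 0 else (
        100 ** (size - 1) * (1 if blanks == 1 else 10))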
I tested it and it gave me different results. Don't ask me why, maybe a bug?
Anyway, always use the tolist() function to convert a numpy array to a list.
evalRow(array([0, 1, 0, 0, 0]).tolist()) == evalRow([0, 1, 0, 0, 0])
#output: True

Find float in ndarray

I am trying to find a float number in an ndarray. Due to the software package I am using (Abaqus), the precision it outputs is a little bit low. For example, 10 is something like 10.00003. Therefore, I was wondering whether there is a "correct" way to do this that is neater than my code.
Example code:
import numpy as np
array = np.arange(10)
number = 5.00001
If I do this:
idx = np.where(number == array)[0][0]
then the result is empty (and the [0][0] indexing raises an IndexError), because 5.00001 does not equal 5.
Now I am doing:
atol = 1e-3 # Absolute tolerance
idx = np.where(abs(number-array) < atol)[0][0]
which works and is not too messy... Yet I was wondering whether there would be a neater way to do it. Thanks!
PS: numpy.allclose() is another way to do it, but I need to use number * np.ones([array.shape[0], array.shape[1]]) and it still seems verbose to me...
Edit: Thank you all so much for the fantastic answers! np.isclose() is the exact function I was looking for, and I missed it since it is not in the docs... I wouldn't have realized this until they updated the docs, if it weren't for you guys. Thank you again!
PS: numpy.allclose() is another way to do it, but I need to use number * np.ones([array.shape[0], array.shape[1]]) and it still seems verbose to me...
You almost never need to do anything like number * np.ones([array.shape[0], array.shape[1]]). Just as you can multiply that scalar number by that ones array to multiply all of its 1 values by number, you can pass that scalar number to allclose to compare all of the original array's values to number. For example:
>>> a = np.array([[2.000000000001, 2.0000000002], [2.000000000001, 1.999999999]])
>>> np.allclose(a, 2)
True
As a side note, if you really do need an array of all 2s, there's an easier way to do it than multiplying 2 by ones:
>>> np.tile(2, array.shape)
array([[2, 2],
       [2, 2]])
For that matter, I don't know why you need to do [array.shape[0], array.shape[1]]. If the array is 2D, that's exactly the same thing as array.shape. If the array might be larger, it's exactly the same as array.shape[:2].
I'm not sure this solves your actual problem, because it seems like you want to know which ones are close and not close, rather than just whether or not they all are. But you did say you could use allclose if not for the fact that it's too verbose to create the array to compare with.
So, if you need whereclose rather than allclose… well, there's no such function. But it's pretty easy to build yourself, and you can always wrap it up if you're doing it repeatedly.
If you had an isclose method—like allclose, but returning a bool array instead of a single bool—you could just write:
idx = np.where(isclose(a, b, 0, atol))[0][0]
… or, if you're doing it over and over:
def whereclose(a, b, rtol=1e-05, atol=1e-08):
    return np.where(isclose(a, b, rtol, atol))
idx = whereclose(a, b, 0, atol)[0][0]
As it turns out, version 1.7 of numpy does have exactly that function (see also here), but it doesn't appear to be in the docs. If you don't want to rely on a possibly-undocumented function, or need to work with numpy 1.6, you can write it yourself trivially:
def isclose(a, b, rtol=1e-05, atol=1e-08):
    return np.abs(a - b) <= (atol + rtol * np.abs(b))
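For example, with the question's data, the hand-rolled version can be used like this (a sketch):

a = np.arange(10)
idx = np.where(isclose(a, 5.00001, 0, 1e-3))[0][0]
#5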
If you have up-to-date numpy (1.7), then the best way is to use np.isclose which will broadcast the shapes together automatically:
import numpy as np
a = np.arange(10)
n = 5.000001
np.isclose(a, n).nonzero()
#(array([5]),)
or, if you expect only one match:
np.isclose(a, n).nonzero()[0][0]
#5
(np.nonzero is basically the same thing as np.where except that it doesn't have the if condition then/else capability)
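For a 1-D array, np.flatnonzero is a slightly shorter spelling of the same thing (a sketch, reusing a and n from above):

idx = np.flatnonzero(np.isclose(a, n))[0]
#5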
The method you use above, specifically abs(A - B) < atol, is standard for doing floating point comparisons across many languages. Obviously when using numpy A and/or B can be arrays or numbers.
Here is another approach that might be useful to look at. I'm not sure it applies to your case, but it could be very helpful if you're looking for more than one number in the array (which is a common use case). It's inspired by this question which is kind of similar.
import numpy as np

def find_close(a, b, rtol=1e-05, atol=1e-08):
    tol = atol + abs(b) * rtol
    lo = b - tol
    hi = b + tol
    order = a.argsort()
    a_sorted = a[order]
    left = a_sorted.searchsorted(lo)
    right = a_sorted.searchsorted(hi, 'right')
    return [order[L:R] for L, R in zip(left, right)]

a = np.array([2., 3., 3., 4., 0., 1.])
b = np.array([1.01, 3.01, 100.01])
print(find_close(a, b, atol=.1))
# [array([5]), array([1, 2]), array([], dtype=int64)]
