NumPy: 1D interpolation of a 3D array - python

I'm rather new to NumPy. Anyone have an idea for making this code, especially the nested loops, more compact/efficient? BTW, dist and data are three-dimensional numpy arrays.
def interpolate_to_distance(self, distance):
    interpolated_data = np.ndarray(self.dist.shape[1:])
    for j in range(interpolated_data.shape[1]):
        for i in range(interpolated_data.shape[0]):
            interpolated_data[i, j] = np.interp(
                distance, self.dist[:, i, j], self.data[:, i, j])
    return interpolated_data
Thanks!

Alright, I'll take a swag at this:
def interpolate_to_distance(self, distance):
    dshape = self.dist.shape
    dist = self.dist.T.reshape(-1, dshape[-1])
    data = self.data.T.reshape(-1, dshape[-1])
    intdata = np.array([np.interp(distance, di, da)
                        for di, da in zip(dist, data)])
    return intdata.reshape(dshape[0:2]).T
It at least removes one loop (and those nested indices), but it's not much faster than the original, ~20% faster according to %timeit in IPython. On the other hand, there's a lot of (probably unnecessary, ultimately) transposing and reshaping going on.
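As an aside, here is a sketch (not benchmarked; the name interpolate_to_distance_flat is just for illustration) that drops the transposes by flattening the two trailing axes instead:
def interpolate_to_distance_flat(self, distance):
    # column k of the flattened views corresponds to the (i, j) pair in C order
    n = self.dist.shape[0]
    dist2 = self.dist.reshape(n, -1)
    data2 = self.data.reshape(n, -1)
    out = np.array([np.interp(distance, dist2[:, k], data2[:, k])
                    for k in range(dist2.shape[1])])
    return out.reshape(self.dist.shape[1:])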
For the record, I wrapped it up in a dummy class and filled some 3 x 3 x 3 arrays with random numbers to test:
import numpy as np

class TestClass(object):
    def interpolate_to_distance(self, distance):
        dshape = self.dist.shape
        dist = self.dist.T.reshape(-1, dshape[-1])
        data = self.data.T.reshape(-1, dshape[-1])
        intdata = np.array([np.interp(distance, di, da)
                            for di, da in zip(dist, data)])
        return intdata.reshape(dshape[0:2]).T

    def interpolate_to_distance_old(self, distance):
        interpolated_data = np.ndarray(self.dist.shape[1:])
        for j in range(interpolated_data.shape[1]):
            for i in range(interpolated_data.shape[0]):
                interpolated_data[i, j] = np.interp(
                    distance, self.dist[:, i, j], self.data[:, i, j])
        return interpolated_data

if __name__ == '__main__':
    testobj = TestClass()
    testobj.dist = np.random.randn(3, 3, 3)
    testobj.data = np.random.randn(3, 3, 3)
    distance = 0
    print('Old:\n', testobj.interpolate_to_distance_old(distance))
    print('New:\n', testobj.interpolate_to_distance(distance))
Which prints (for my particular set of randoms):
Old:
[[-0.59557042 -0.42706077 0.94629049]
[ 0.55509032 -0.67808257 -0.74214045]
[ 1.03779189 -1.17605275 0.00317679]]
New:
[[-0.59557042 -0.42706077 0.94629049]
[ 0.55509032 -0.67808257 -0.74214045]
[ 1.03779189 -1.17605275 0.00317679]]
I also tried np.vectorize(np.interp) but couldn't get that to work. I suspect that would be much faster if it did work.
I couldn't get np.fromfunction to work either, as it passed two 3 x 3 (in this case) arrays of indices to np.interp, the same arrays you get from np.mgrid.
One other note: according to the docs for np.interp,
    np.interp does not check that the x-coordinate sequence xp is increasing. If
    xp is not increasing, the results are nonsense. A simple check for
    increasingness is:
    np.all(np.diff(xp) > 0)
Obviously, my random numbers violate the 'always increasing' rule, but you'll have to be more careful.
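For completeness, here is a sketch of that check applied to the whole 3-D dist array at once (assuming, as above, that axis 0 is the interpolation axis):
# every column dist[:, i, j] must be strictly increasing for np.interp to be valid
if not np.all(np.diff(self.dist, axis=0) > 0):
    raise ValueError('dist must be strictly increasing along axis 0')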

Related

How to ignore implicit zeros using scipy.sparse.csr_matrix.minimum?

I have two matrices mat1 and mat2 that are sparse (most entries are zero) and I'm not interested in the zero-valued entries: I look at the matrices from a graph-theoretical perspective where a zero means that there is no edge between the nodes.
How can I efficiently get the minimum values between non-zero entries only using scipy.sparse matrices?
I.e. an equivalent of mat1.minimum(mat2) that would ignore implicit zeros.
Using dense matrices, it is fairly easy to do:
import numpy as np
nnz = np.where(np.multiply(mat1, mat2))
m = mat1 + mat2
m[nnz] = np.minimum(mat1[nnz], mat2[nnz])
But this would be very inefficient with sparse matrices.
NB: a similar question has been asked before but did not get any relevant answer and there is a related PR on the scipy repo that proposes an implementation of this for (arg)min/max but not for minimum.
EDIT: to specify a bit more, the desired behavior would be commutative, i.e. this nonzero-minimum would take all the values present in only one of the two matrices, and the min of the entries that are present in both matrices.
Just in case someone also looks for this, my current implementation is below.
However, I'd appreciate any proposal that would either speed this up or reduce the memory footprint.
s = mat1.multiply(mat2)
s.data[:] = 1.
a1 = mat1.copy()
a1.data[:] = 1.
a1 = (a1 - s).maximum(0)
a2 = mat2.copy()
a2.data[:] = 1.
a2 = (a2 - s).maximum(0)
res = mat1.multiply(a1) + mat2.multiply(a2) + \
      mat1.multiply(s).minimum(mat2.multiply(s))
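To make the intended behavior concrete, here is a small worked example (the matrix values are made up for illustration) running the snippet above on two 3x3 CSR matrices:
import numpy as np
import scipy.sparse as sp

mat1 = sp.csr_matrix(np.array([[0., 2., 0.],
                               [5., 0., 0.],
                               [0., 1., 3.]]))
mat2 = sp.csr_matrix(np.array([[0., 4., 7.],
                               [1., 0., 0.],
                               [0., 0., 2.]]))

s = mat1.multiply(mat2)        # nonzero exactly where both matrices are nonzero
s.data[:] = 1.
a1 = mat1.copy()
a1.data[:] = 1.
a1 = (a1 - s).maximum(0)       # indicator: nonzero only in mat1
a2 = mat2.copy()
a2.data[:] = 1.
a2 = (a2 - s).maximum(0)       # indicator: nonzero only in mat2
res = mat1.multiply(a1) + mat2.multiply(a2) + \
      mat1.multiply(s).minimum(mat2.multiply(s))

print(res.toarray())
# [[0. 2. 7.]
#  [1. 0. 0.]
#  [0. 1. 2.]]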
If the sparse nonzeros are all positive, an alternate way to get the correct union behavior out of maximum is to reverse the ordering of the values while keeping them positive, take the elementwise maximum, and then reverse back.
Following your lead of mucking with .data explicitly, I found:
def sp_min_nz_positive(asp, bsp):  # asp and bsp are scipy sparse
    amax = asp.max()
    bmax = bsp.max()
    abmaxplus = max(amax, bmax)  # + 1.0 : surprise! not needed.
    # invert the direction, while remaining positive
    arev = asp.copy()
    arev.data[:] = abmaxplus - asp.data[:]
    brev = bsp.copy()
    brev.data[:] = abmaxplus - bsp.data[:]
    out = arev.maximum(brev)
    # revert the direction of these positives
    out.data[:] = abmaxplus - out.data[:]
    return out
(There may be some inexactness due to floating-point roundoff.)
There was also a suggestion to use sparse internals. A rather generic
function is sp.find which returns the nonzero elements of anything.
So you could also try out a minimum that handles negative values too, with something like:
import scipy.sparse as sp

def sp_min_union(a, b):
    assert a.shape == b.shape
    assert sp.issparse(a) and sp.issparse(b)
    (ra, ca, _) = sp.find(a)   # over nonzeros only
    (rb, cb, _) = sp.find(b)   # over nonzeros only
    setab = set(zip(ra, ca)).union(zip(rb, cb))   # row-column union-of-nonzeros
    r = []
    c = []
    v = []
    for (rr, cc) in setab:
        r.append(rr)
        c.append(cc)
        anz = a[rr, cc]
        bnz = b[rr, cc]
        assert anz != 0 or bnz != 0   # they came from *some* sp.find
        if anz == 0:
            anz = bnz
        elif bnz != 0:
            anz = min(anz, bnz)
        v.append(anz)
    # choose whatever sparse output format you want; many seem constructible as:
    return sp.csr_matrix((v, (r, c)), shape=a.shape)
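As a quick sanity check, reusing the made-up mat1 and mat2 from the worked example above, this agrees with the data-mucking construction:
print(sp_min_union(mat1, mat2).toarray())
# [[0. 2. 7.]
#  [1. 0. 0.]
#  [0. 1. 2.]]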

Optimize code for step function using only NumPy

I'm trying to optimize the function 'pw' in the following code using only NumPy functions (or perhaps list comprehensions).
from time import time
import numpy as np

def pw(x, udata):
    """
    Creates the step function

                  | 1, if d0 <= x < d1
                  | 2, if d1 <= x < d2
    pw(x, data) = | ...
                  | N, if d(N-1) <= x < dN
                  | 0, otherwise

    where di is the ith element in data.

    INPUT:  x    -- interval which the step function is defined over
            data -- an ordered set of data (without repetitions)
    OUTPUT: pw_func -- an array of size x.shape[0]
    """
    vals = np.arange(1, udata.shape[0] + 1).reshape(udata.shape[0], 1)
    pw_func = np.sum(np.where(np.greater_equal(x, udata) * np.less(x, np.roll(udata, -1)), vals, 0), axis=0)
    return pw_func
N = 50000
x = np.linspace(0, 10, N)
data = [1, 3, 4, 5, 5, 7]
udata = np.unique(data)

ti = time()
# udata is reshaped into a column so it broadcasts against x inside pw
pw(x, udata.reshape(udata.shape[0], 1))
tf = time()
print(tf - ti)

import cProfile
cProfile.run('pw(x, udata.reshape(udata.shape[0], 1))')
The cProfile.run output is telling me that most of the overhead is coming from np.where (about 1 ms), but I'd like to create faster code if possible. It seems that performing the operations row-wise versus column-wise makes some difference, unless I'm mistaken, but I think I've accounted for it. I know that sometimes list comprehensions can be faster, but I couldn't figure out a way faster than what I'm already doing.
Searchsorted seems to yield better performance but that 1 ms still remains on my computer:
(modified)
def pw(xx, uu):
    """
    Creates the step function

                  | 1, if d0 <= x < d1
                  | 2, if d1 <= x < d2
    pw(x, data) = | ...
                  | N, if d(N-1) <= x < dN
                  | 0, otherwise

    where di is the ith element in data.

    INPUT:  x    -- interval which the step function is defined over
            data -- an ordered set of data (without repetitions)
    OUTPUT: pw_func -- an array of size x.shape[0]
    """
    inds = np.searchsorted(uu, xx, side='right')
    vals = np.arange(1, uu.shape[0] + 1)
    pw_func = vals[inds[inds != uu.shape[0]]]
    num_mins = np.sum(xx < np.min(uu))
    num_maxs = np.sum(xx > np.max(uu))
    pw_func = np.concatenate((np.zeros(num_mins), pw_func, np.zeros(xx.shape[0] - pw_func.shape[0] - num_mins)))
    return pw_func
This answer using piecewise seems pretty close, but that's on a scalar x0 and x1. How would I do it on arrays? And would it be more efficient?
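For reference, here is a sketch (not benchmarked, and the helper name pw_piecewise is mine) of what an array version with np.piecewise might look like, assuming udata is sorted and free of repetitions as above:
def pw_piecewise(x, udata):
    # condition i selects udata[i] <= x < udata[i+1]; everything else stays 0
    conds = [(x >= udata[i]) & (x < udata[i + 1]) for i in range(len(udata) - 1)]
    vals = list(range(1, len(udata)))
    return np.piecewise(x, conds, vals)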
Understandably, x may be pretty big but I'm trying to put it through a stress test.
I am still learning though so some hints or tricks that can help me out would be great.
EDIT
There seems to be a mistake in the second function, since the resulting array from the second function doesn't match the first one (which I'm confident works):
N1 = pw1(x,udata.reshape(udata.shape[0],1)).shape[0]
N2 = np.sum(pw1(x,udata.reshape(udata.shape[0],1)) == pw2(x,udata))
print(N1 - N2)
yields
15000
data points that are not the same. So it seems that I don't know how to use 'searchsorted'.
EDIT 2
Actually I fixed it:
pw_func = vals[inds[inds != uu.shape[0]]]
was changed to
pw_func = vals[inds[inds[(inds != uu.shape[0])*(inds != 0)]-1]]
so at least the resulting arrays match. But the question still remains on whether there's a more efficient way of going about doing this.
EDIT 3
Thanks Tin Lai for pointing out the mistake. This one should work:
pw_func = vals[inds[(inds != uu.shape[0])*(inds != 0)]-1]
Maybe a more readable way of presenting it would be
non_endpts = (inds != uu.shape[0])*(inds != 0) # only consider the points in between the min/max data values
shift_inds = inds[non_endpts]-1 # searchsorted side='right' includes the left end point and not right end point so a shift is needed
pw_func = vals[shift_inds]
I think I got lost in all those brackets! I guess that's the importance of readability.
A very abstract yet interesting problem! Thanks for entertaining me, I had fun :)
p.s. I'm not sure about your pw2; I wasn't able to get it to output the same as pw1.
For reference the original pws:
def pw1(x, udata):
    vals = np.arange(1, udata.shape[0] + 1).reshape(udata.shape[0], 1)
    pw_func = np.sum(np.where(np.greater_equal(x, udata) * np.less(x, np.roll(udata, -1)), vals, 0), axis=0)
    return pw_func

def pw2(xx, uu):
    inds = np.searchsorted(uu, xx, side='right')
    vals = np.arange(1, uu.shape[0] + 1)
    pw_func = vals[inds[inds[(inds != uu.shape[0]) * (inds != 0)] - 1]]
    num_mins = np.sum(xx < np.min(uu))
    num_maxs = np.sum(xx > np.max(uu))
    pw_func = np.concatenate((np.zeros(num_mins), pw_func, np.zeros(xx.shape[0] - pw_func.shape[0] - num_mins)))
    return pw_func
My first attempt utilised a lot of broadcasting operations from numpy:
def pw3(x, udata):
    # the None slice is to create a new axis
    step_bool = x >= udata[None, :].T
    # we exploit the fact that bools have an integer value of 1
    # skipping the last value in "data"
    step_vals = np.sum(step_bool[:-1], axis=0)
    # for the step_bool that we skipped in the previous step (last index)
    # we set it to zero so that we can negate the step_vals once we reach
    # the last value in "data"
    step_vals[step_bool[-1]] = 0
    return step_vals
After looking at the searchsorted call in your pw2, I came up with a new approach that uses it with much higher performance:
def pw4(x, udata):
    inds = np.searchsorted(udata, x, side='right')
    # fix up the last entries if x already goes beyond the range of data[-1]
    if x[-1] > udata[-1]:
        inds[inds == inds[-1]] = 0
    return inds
Plots with:
plt.plot(pw1(x,udata.reshape(udata.shape[0],1)), label='pw1')
plt.plot(pw2(x,udata), label='pw2')
plt.plot(pw3(x,udata), label='pw3')
plt.plot(pw4(x,udata), label='pw4')
(plot with data = [1,3,4,5,5,7] not shown)
(plot with data = [1,3,4,5,5,7,11] not shown)
pw1, pw3 and pw4 are all identical:
print(np.all(pw1(x,udata.reshape(udata.shape[0],1)) == pw3(x,udata)))
>>> True
print(np.all(pw1(x,udata.reshape(udata.shape[0],1)) == pw4(x,udata)))
>>> True
Performance: (timeit.Timer.repeat runs each statement 3 times by default; each reported value is the total time for number=1000 executions)
print(timeit.Timer('pw1(x,udata.reshape(udata.shape[0],1))', "from __main__ import pw1, x, udata").repeat(number=1000))
>>> [3.1938983199979702, 1.6096494779994828, 1.962694135003403]
print(timeit.Timer('pw2(x,udata)', "from __main__ import pw2, x, udata").repeat(number=1000))
>>> [0.6884554479984217, 0.6075002400029916, 0.7799002879983163]
print(timeit.Timer('pw3(x,udata)', "from __main__ import pw3, x, udata").repeat(number=1000))
>>> [0.7369808239964186, 0.7557657590004965, 0.8088172269999632]
print(timeit.Timer('pw4(x,udata)', "from __main__ import pw4, x, udata").repeat(number=1000))
>>> [0.20514375300263055, 0.20203858999957447, 0.19906871100101853]

Apply function to every matrix of numpy array

I would like to apply a function to each of the 3x3 matrices in my (6890,6890,3,3) numpy array. Until now, I have tried using vectorization on a smaller example and with a simpler function which didn't work out.
def myfunc(x):
    return np.linalg.norm(x)

m = np.arange(45).reshape(5, 3, 3)
t = m.shape[0]
r = np.zeros((t, t))
q = m[:, None, ...] @ m.swapaxes(1, 2)  # m[i] @ m[j].T
f = np.vectorize(q, otypes=[np.float])
res = myfunc(f)
Is vectorization even the right approach to solve this problem efficiently or should I try something else? I've also looked into numpy.apply_along_axis but this only applies to 1D-subarrays.
You need to loop over each element and apply the function:
import numpy as np

# setup function
def myfunc(x):
    return np.linalg.norm(x*2)

# setup data array
data = np.arange(45).reshape(5, 3, 3)

# loop over elements and update
for item in np.nditer(data, op_flags=['readwrite']):
    item[...] = myfunc(item)
If you need to apply the function to each entire 3x3 array, then use:
out_data = []
for item in data:
    out_data.append(myfunc(item))
Output:
[14.2828568570857, 39.761790704142086, 66.4529909033446, 93.32202312423365, 120.24974012445931]
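As a side note (my addition, not part of the answer above): when the function is a norm, the per-matrix loop can likely be avoided altogether, because np.linalg.norm accepts a tuple of axes. On a fresh copy of data this reproduces the values listed above in a single vectorized call:
# fresh copy, since the np.nditer loop above modifies data in place
data = np.arange(45).reshape(5, 3, 3)
print(np.linalg.norm(data, axis=(1, 2)))   # one Frobenius norm per 3x3 matrix
# [ 14.28285686  39.7617907   66.4529909   93.32202312 120.24974012]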

Column_stacking arbitrary amounts of data in Python

I have a list of "spectral" objects that have "ydata" attributes and I need to column stack all the ydata together.
I can iterate through all the objects but I have to somehow create an array of the same length so that I can stack.
Here is a barebones version of what I have:
import numpy as np

class Spectrum(object):
    def __init__(self, ydata):
        self.ydata = ydata

spec = {}
spec[1] = Spectrum([1, 2, 3])
spec[2] = Spectrum([4, 5, 6])

array = np.empty(len(spec[1].ydata))
for i in range(1, len(spec) + 1):
    array = np.column_stack((array, spec[i].ydata))
print(array)
So the above works, but the first column of array always contains the leftover uninitialized (random) values from np.empty.
I know there has to be an easy way to do this but I am just missing it.
One option that I thought of is to start with:
array = spec[1].ydata
then move into the for-loop but that doesn't seem right since that assumes there is a spec[1].
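(A hedged aside, not from the original post: seeding the loop with an array that has zero columns also works and avoids the leftover np.empty column, although it still needs the common ydata length up front.)
array = np.empty((len(spec[1].ydata), 0))   # 3 rows, 0 columns to start with
for i in range(1, len(spec) + 1):
    array = np.column_stack((array, spec[i].ydata))
print(array)
# [[1. 4.]
#  [2. 5.]
#  [3. 6.]]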
The desired output would be:
>>> array
[[1 4]
 [2 5]
 [3 6]]
Assuming that all your instances of Spectrum have the same ydata-length, I would go with a simple list comprehension:
array = np.array([spec[i+1].ydata for i in range(len(spec))])
print(array)
output:
[[1 2 3]
[4 5 6]]
EDIT:
I took another look at the desired output; in that case it would be
array = np.array([spec[i+1].ydata for i in range(len(spec))]).T
and
[[1 4]
[2 5]
[3 6]]
EDIT:
I wrote a small test program to compare the performances of np.array().T and np.column_stack() against each other:
import numpy as np
from timeit import Timer

class Spectrum(object):
    def __init__(self, ydata):
        self.ydata = ydata

def create_by_array():
    return np.array([spec[i+1].ydata for i in range(len(spec))]).T

def create_by_column_stack():
    return np.column_stack([spec[i+1].ydata for i in range(len(spec))])

I = 1000
spec = {i: Spectrum([j for j in range(3*i, 3*(i+1))]) for i in range(1, I+1)}

t1 = Timer(
    """create_by_array()""",
    setup="""from __main__ import create_by_array"""
)
res1 = t1.repeat(10, 1000)

t2 = Timer(
    """create_by_column_stack()""",
    setup="""from __main__ import create_by_column_stack"""
)
res2 = t2.repeat(10, 1000)

print(
    'Results of the two tests: ',
    '{:5}, {:5}, {:5}'.format('min', 'mean', 'max')
)
print(
    'With np.array and transpose:',
    '{:5.3}, {:5.3}, {:5.3}'.format(np.min(res1), np.mean(res1), np.max(res1))
)
print(
    'With np.column_stack(): ',
    '{:5.4}, {:5.4}, {:5.4}'.format(np.min(res2), np.mean(res2), np.max(res2))
)
The program first produces a dict of 1000 Spectrum instances and then times the two methods 10 x 1000 times. Here are the results:
('Results of the two tests: ', 'min , mean , max ')
('With np.array and transpose:', '0.687, 0.709, 0.742')
('With np.column_stack(): ', '3.982, 4.367, 5.263')
As you can see, the np.array().T method is roughly six times faster than np.column_stack(). I'm not entirely sure why that is, but according to the numpy column_stack documentation page,
Take a sequence of 1-D arrays and stack them as columns to make a
single 2-D array. 2-D arrays are stacked as-is, just like with hstack.
1-D arrays are turned into 2-D columns first.
This sounds a lot like every individual sub-list is first turned into an ndarray, while np.array() only creates the final array. The transposing of the matrix is very fast, as it does not do any re-arranging in memory. See for instance here. I hope this clears it up.

Rounding a list of values to the nearest value from another list in python

Suppose I have the following two arrays:
>>> a = np.random.normal(size=(5,))
>>> a
array([ 1.42185826, 1.85726088, -0.18968258, 0.55150255, -1.04356681])
>>> b = np.random.normal(size=(10,10))
>>> b
array([[ 0.64207828, -1.08930317, 0.22795289, 0.13990505, -0.9936441 ,
1.07150754, 0.1701072 , 0.83970818, -0.63938211, -0.76914925],
[ 0.07776129, -0.37606964, -0.54082077, 0.33910246, 0.79950839,
0.33353221, 0.00967273, 0.62224009, -0.2007335 , -0.3458876 ],
[ 2.08751603, -0.52128218, 1.54390634, 0.96715102, 0.799938 ,
0.03702108, 0.36095493, -0.13004965, -1.12163463, 0.32031951],
[-2.34856521, 0.11583369, -0.0056261 , 0.80155082, 0.33421475,
-1.23644508, -1.49667424, -1.01799365, -0.58232326, 0.404464 ],
[-0.6289335 , 0.63654201, -1.28064055, -1.01977467, 0.86871352,
0.84909353, 0.33036771, 0.2604609 , -0.21102014, 0.78748329],
[ 1.44763687, 0.84205291, 0.76841512, 1.05214051, 2.11847126,
-0.7389102 , 0.74964783, -1.78074088, -0.57582084, -0.67956203],
[-1.00599479, -0.93125754, 1.43709533, 1.39308038, 1.62793589,
-0.2744919 , -0.52720952, -0.40644809, 0.14809867, -1.49267633],
[-1.8240385 , -0.5416585 , 1.10750423, 0.56598464, 0.73927224,
-0.54362927, 0.84243497, -0.56753587, 0.70591902, -0.26271302],
[-1.19179547, -1.38993415, -1.99469983, -1.09749452, 1.28697997,
-0.74650318, 1.76384156, 0.33938808, 0.61647274, -0.42166111],
[-0.14147554, -0.96192206, 0.14434349, 1.28437894, -0.38865447,
-1.42540195, 0.93105528, 0.28993325, -1.16119916, -0.58244758]])
I have to find a way to round all values from b to the nearest value found in a.
Does anyone know of a good way to do this with python? I am at a total loss myself.
Here is something you can try:
import numpy as np

def rounder(values):
    def f(x):
        idx = np.argmin(np.abs(values - x))
        return values[idx]
    return np.frompyfunc(f, 1, 1)

a = np.random.normal(size=(5,))
b = np.random.normal(size=(10, 10))

rounded = rounder(a)(b)
print(rounded)
The rounder function takes the values which we want to round to. It creates a function which takes a scalar and returns the closest element from the values array. We then transform this function into a broadcastable function using numpy.frompyfunc. This way you are not limited to using it on 2D arrays; numpy automatically does the broadcasting for you without any explicit loops.
If you sort a you can use bisect to find the index in array a where each element from the sub arrays of array b would land:
import numpy as np
from bisect import bisect

a = np.random.normal(size=(5,))
b = np.random.normal(size=(10, 10))

a.sort()
size = a.size
for sub in b:
    for ind2, ele in enumerate(sub):
        i = bisect(a, ele, hi=size-1)
        i1, i2 = a[i], a[i-1]
        sub[ind2] = i1 if abs(i1 - ele) < abs(i2 - ele) else i2
This solution assumes that a is always 1-dimensional, while b can have any number of dimensions.
Create two temporary arrays by tiling a and b into the dimensions of the other (here both will now have a shape of (5,10,10)).
at = np.tile(np.reshape(a, (-1, *list(np.ones(len(b.shape)).astype(int)))), (1, *b.shape))
bt = np.tile(b, (a.size, *list(np.ones(len(b.shape)).astype(int))))
For the nearest operation, you can take the absolute value of the difference between the two. The minimum value of that operation in the first dimension (dimension 0) gives the index in the a array.
idx = np.argmin(np.abs(at-bt),axis=0)
All that is left is to select the values from array a using the index, which will return an array in the shape of b with the nearest values from a.
ans = a[idx]
This method can also be used (modifying how the index is calculated) to do other operations, such as a floor, ceil, etc.
Note that this solution can be memory intensive, which is not much of an issue with small arrays. A looping solution could be less memory intensive at the cost of speed.
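For what it's worth, here is a sketch of such a lower-memory vectorized variant (my addition, not from the answers above): sort a once and use np.searchsorted to find, for each element of b, the two nearest candidates:
import numpy as np

def round_to_nearest(b, a):
    # replace each element of b with the closest value found in a
    a_sorted = np.sort(np.ravel(a))
    # index of the first sorted value >= each element of b, clipped so that
    # both idx - 1 and idx are valid positions
    idx = np.clip(np.searchsorted(a_sorted, b), 1, a_sorted.size - 1)
    left, right = a_sorted[idx - 1], a_sorted[idx]
    return np.where(np.abs(b - left) <= np.abs(b - right), left, right)

a = np.random.normal(size=(5,))
b = np.random.normal(size=(10, 10))
print(round_to_nearest(b, a))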
I don't know Numpy, but I don't think knowledge of Numpy is needed to be able to answer this question. Assuming that an array can be iterated and modified in the same way as a list, the following code solves your problem by using a nested loop to find the closest value.
for i in range(len(b)):
    for k in range(len(b[i])):
        closest = a[0]
        for j in range(1, len(a)):
            if abs(a[j] - b[i][k]) < abs(closest - b[i][k]):
                closest = a[j]
        b[i][k] = closest
Disclaimer: a more pythonic approach may exist.
