linear interpolation in numpy - python

I have 2 numpy arrays
X = [[2 3 6], [7 2 9], [7 1 4]]
a = [0 0.0005413307 0.0010949014 0.0015468832 0.0027740823 0.0033288284]
b = [0 0.0050251256 0.0100502513 0.0150753769 0.0201005025 0.0251256281]
new = []
for z in range(3):
    new.append(interp1d(a, z[0], b, 'linear'))
I am getting an error:
if xi is not None and shape[axis] != len(xi):
TypeError: tuple indices must be integers, not str
I need to find the linear interpolation here. How can I do that?
I have values X with respect to time a, but I want to interpolate them for time b.
Will linear interpolation give me 3 points, as in X, for every a[i] and b[i]?

You put the arguments in the wrong order. Following is the help message of interp1d; check it out:
interp1d(x, y, kind='linear', axis=-1, copy=True, bounds_error=True, fill_value=np.nan)
Interpolate a 1-D function.
x and y are arrays of values used to approximate some function f:
y = f(x) .
This class returns a function whose call method uses interpolation
to find the value of new points.

interp1d is a function whose return value is a new function. This new function can then be called with values in the given interpolation range:
from scipy.interpolate import interp1d
x1 = [ 0., 0.04007922, 0.04723573, 0.05440107, 0.06178645, 0.06837938]
x2 = [ 0., 0.00502513, 0.01005025, 0.01507538, 0.0201005, 0.02512563]
f = interp1d(x1, x2)
f([0.0, 0.01, 0.02, 0.03, 0.068])
#array([ 0. , 0.0012538 , 0.0025076 , 0.0037614 , 0.02483647])
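If, as in the question, the values to interpolate form a 2-D array like X, interp1d also accepts an N-D y together with an axis argument. A minimal sketch (with shapes made consistent: one row of X per time in a, unlike the mismatched sizes in the question):
from scipy.interpolate import interp1d
import numpy as np

a = np.array([0.0, 0.001, 0.002])                # original times, one per row of X
b = np.array([0.0005, 0.0015])                   # new times to interpolate at
X = np.array([[2, 3, 6], [7, 2, 9], [7, 1, 4]])  # one 3-value sample per time in a

f = interp1d(a, X, kind='linear', axis=0)  # interpolate each column over time
new = f(b)                                 # shape (2, 3): one 3-point row per time in b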


Squared Mahalanobis distance function in Python returning array - why?

The code is:
import numpy as np
def Mahalanobis(x, covariance_matrix, mean):
    x = np.array(x)
    mean = np.array(mean)
    covariance_matrix = np.array(covariance_matrix)
    return (x-mean)*np.linalg.inv(covariance_matrix)*(x.transpose()-mean.transpose())
# variables x and mean are 1xd arrays; covariance_matrix is a dxd matrix
# the 1xd array passed to x should be multiplied by the (inverted) dxd array
# that was passed into the second argument
# the resulting 1xd matrix is to be multiplied by a dx1 matrix, the transpose of
# [x-mean], which should result in a 1x1 array (a number)
But for some reason I get a matrix for my output when I enter the parameters
Mahalanobis([2,5], [[.5,0],[0,2]], [3,6])
output:
out[]: array([[ 2. ,  0. ],
              [ 0. ,  0.5]])
It seems my function is just giving me the inverse of the 2x2 matrix that I input in the 2nd argument.
You've made the classic mistake of assuming that the * operator is doing matrix multiplication. This is not true in Python/numpy (see http://www.scipy-lectures.org/intro/numpy/operations.html and https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).
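For instance, on small 2x2 arrays you can see the difference directly (a quick illustration, not from the original post):
import numpy as np
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
A * B   # elementwise: array([[ 5, 12], [21, 32]])
A @ B   # matrix product: array([[19, 22], [43, 50]])
I broke it down into intermediate steps and used the dot function: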
import numpy as np
def Mahalanobis(x, covariance_matrix, mean):
    x = np.array(x)
    mean = np.array(mean)
    covariance_matrix = np.array(covariance_matrix)
    t1 = (x-mean)
    print(f'Term 1 {t1}')
    icov = np.linalg.inv(covariance_matrix)
    print(f'Inverse covariance {icov}')
    t2 = (x.transpose()-mean.transpose())
    print(f'Term 2 {t2}')
    mahal = t1.dot(icov.dot(t2))
    #return (x-mean)*np.linalg.inv(covariance_matrix).dot(x.transpose()-mean.transpose())
    return mahal
#variables x and mean are 1xd arrays; covariance_matrix is a dxd matrix
#the 1xd array passed to x should be multiplied by the (inverted) dxd array
#that was passed into the second argument
#the resulting 1xd matrix is to be multiplied by a dx1 matrix, the transpose of
#[x-mean], which should result in a 1x1 array (a number)
Mahalanobis([2,5], [[.5,0],[0,2]], [3,6])
produces
Term 1 [-1 -1]
Inverse covariance [[2. 0. ]
[0. 0.5]]
Term 2 [-1 -1]
Out[9]: 2.5
One can use scipy's mahalanobis() function to verify:
import scipy.spatial, numpy as np
scipy.spatial.distance.mahalanobis([2,5], [3,6], np.linalg.inv([[.5,0],[0,2]]))
# 1.5811388300841898
1.5811388300841898**2 # squared Mahalanobis distance
# 2.5000000000000004
def Mahalanobis(x, covariance_matrix, mean):
    x, m, C = np.array(x), np.array(mean), np.array(covariance_matrix)
    return (x-m) @ np.linalg.inv(C) @ (x-m).T

Mahalanobis([2,5], [[.5,0],[0,2]], [3,6])
# 2.5

np.isclose(
    scipy.spatial.distance.mahalanobis([2,5], [3,6], np.linalg.inv([[.5,0],[0,2]]))**2,
    Mahalanobis([2,5], [[.5,0],[0,2]], [3,6])
)
# True

Scipy library similarity score calculations

I'm trying to compute similarity scores using vectors:
from scipy.spatial import distance
x = [1,2,4]
y = [1,3,5]
d = distance.cdist(x, y, 'seuclidean', V=None)
However, I keep getting this error:
ValueError: XA must be a 2-dimensional array.
First off, you need to use NumPy arrays as inputs, and the error is telling you they need to be 2-D (column vectors in this case). So:
from scipy.spatial import distance
import numpy as np
x = [1,2,4]
y = [1,3,5]
x = np.array(x).reshape(-1, 1)
y = np.array(y).reshape(-1, 1)
x
array([[1],
       [2],
       [4]])
y
array([[1],
       [3],
       [5]])
d = distance.cdist(x, y, 'seuclidean', V=None)
d
array([[ 0.        ,  1.22474487,  2.44948974],
       [ 0.61237244,  0.61237244,  1.83711731],
       [ 1.83711731,  0.61237244,  0.61237244]])
There are many methods in the distance module that calculate a similarity (distance) between two vectors. A common example is cosine:
x = [1,2,4]
y = [1,3,5]
distance.cosine(x, y)
0.0040899966895213691
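Note that distance.cosine returns the cosine distance, which is 1 minus the cosine similarity; if you want the similarity itself, subtract from 1:
similarity = 1 - distance.cosine(x, y)
# 0.9959100033104786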

Partial convolution / correlation with numpy [duplicate]

I am learning numpy/scipy, coming from a MATLAB background. The xcorr function in Matlab has an optional argument "maxlag" that limits the lag range from –maxlag to maxlag. This is very useful if you are looking at the cross-correlation between two very long time series but are only interested in the correlation within a certain time range. The performance increases are enormous considering that cross-correlation is incredibly expensive to compute.
In numpy/scipy it seems there are several options for computing cross-correlation. numpy.correlate, numpy.convolve, scipy.signal.fftconvolve. If someone wishes to explain the difference between these, I'd be happy to hear, but mainly what is troubling me is that none of them have a maxlag feature. This means that even if I only want to see correlations between two time series with lags between -100 and +100 ms, for example, it will still calculate the correlation for every lag between -20000 and +20000 ms (which is the length of the time series). This gives a 200x performance hit! Do I have to recode the cross-correlation function by hand to include this feature?
Here are a couple functions to compute auto- and cross-correlation with limited lags. The order of multiplication (and conjugation, in the complex case) was chosen to match the corresponding behavior of numpy.correlate.
import numpy as np
from numpy.lib.stride_tricks import as_strided

def _check_arg(x, xname):
    x = np.asarray(x)
    if x.ndim != 1:
        raise ValueError('%s must be one-dimensional.' % xname)
    return x

def autocorrelation(x, maxlag):
    """
    Autocorrelation with a maximum number of lags.

    `x` must be a one-dimensional numpy array.

    This computes the same result as
        numpy.correlate(x, x, mode='full')[len(x)-1:len(x)+maxlag]

    The return value has length maxlag + 1.
    """
    x = _check_arg(x, 'x')
    p = np.pad(x.conj(), maxlag, mode='constant')
    T = as_strided(p[maxlag:], shape=(maxlag+1, len(x) + maxlag),
                   strides=(-p.strides[0], p.strides[0]))
    return T.dot(p[maxlag:].conj())

def crosscorrelation(x, y, maxlag):
    """
    Cross correlation with a maximum number of lags.

    `x` and `y` must be one-dimensional numpy arrays with the same length.

    This computes the same result as
        numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]

    The return value has length 2*maxlag + 1.
    """
    x = _check_arg(x, 'x')
    y = _check_arg(y, 'y')
    py = np.pad(y.conj(), 2*maxlag, mode='constant')
    T = as_strided(py[2*maxlag:], shape=(2*maxlag+1, len(y) + 2*maxlag),
                   strides=(-py.strides[0], py.strides[0]))
    px = np.pad(x, maxlag, mode='constant')
    return T.dot(px)
For example,
In [367]: x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
In [368]: autocorrelation(x, 3)
Out[368]: array([ 20.5, 5. , -3.5, -1. ])
In [369]: np.correlate(x, x, mode='full')[7:11]
Out[369]: array([ 20.5, 5. , -3.5, -1. ])
In [370]: y = np.arange(8)
In [371]: crosscorrelation(x, y, 3)
Out[371]: array([ 5. , 23.5, 32. , 21. , 16. , 12.5, 9. ])
In [372]: np.correlate(x, y, mode='full')[4:11]
Out[372]: array([ 5. , 23.5, 32. , 21. , 16. , 12.5, 9. ])
(It will be nice to have such a feature in numpy itself.)
Until numpy implements the maxlag argument, you can use the function ucorrelate from the pycorrelate package. ucorrelate operates on numpy arrays and has a maxlag keyword. It implements the correlation with a for-loop and optimizes the execution speed with numba.
Example - autocorrelation with 3 time lags:
import numpy as np
import pycorrelate as pyc
x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
c = pyc.ucorrelate(x, x, maxlag=3)
c
Result:
Out[1]: array([20, 5, -3])
The pycorrelate documentation contains a notebook showing a perfect match between pycorrelate.ucorrelate and numpy.correlate.

matplotlib.pyplot provides MATLAB-like syntax for computing and plotting cross-correlation, auto-correlation, etc. You can use xcorr, which allows you to define the maxlags parameter:
import matplotlib.pyplot as plt
import numpy as np
data = np.arange(0,2*np.pi,0.01)
y1 = np.sin(data)
y2 = np.cos(data)
coeff = plt.xcorr(y1,y2,maxlags=10)
print(*coeff)
[-10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4   5   6   7
   8   9  10]
[ -9.81991753e-02  -8.85505028e-02  -7.88613080e-02  -6.91325329e-02
  -5.93651264e-02  -4.95600447e-02  -3.97182508e-02  -2.98407146e-02
  -1.99284126e-02  -9.98232812e-03  -3.45104289e-06   9.98555430e-03
   1.99417667e-02   2.98641953e-02   3.97518558e-02   4.96037706e-02
   5.94189688e-02   6.91964864e-02   7.89353663e-02   8.86346584e-02
   9.82934198e-02]
<matplotlib.collections.LineCollection object at 0x00000000074A9E80>
Line2D(_line0)
@Warren Weckesser's answer is the best, as it leverages numpy to get performance savings (and does not just call np.correlate for each lag). Nonetheless, it returns the cross-product (i.e. the dot product between the inputs at various lags). To get the actual cross-correlation I modified his answer with an optional mode argument, which if set to 'corr' returns the cross-correlation as such:
import numpy as np
from numpy.lib.stride_tricks import as_strided

def crosscorrelation(x, y, maxlag, mode='corr'):
    """
    Cross correlation with a maximum number of lags.

    `x` and `y` must be one-dimensional numpy arrays with the same length.

    This computes the same result as
        numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]

    The return value has length 2*maxlag + 1.
    """
    py = np.pad(y.conj(), 2*maxlag, mode='constant')
    T = as_strided(py[2*maxlag:], shape=(2*maxlag+1, len(y) + 2*maxlag),
                   strides=(-py.strides[0], py.strides[0]))
    px = np.pad(x, maxlag, mode='constant')
    if mode == 'dot':       # get lagged dot product
        return T.dot(px)
    elif mode == 'corr':    # get Pearson correlation
        return (T.dot(px)/px.size - (T.mean(axis=1)*px.mean())) / \
               (np.std(T, axis=1) * np.std(px))
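A quick usage sketch (reusing the x and y arrays from the examples above; note they must already be numpy arrays here, since the function calls .conj() and .strides on them directly):
x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
y = np.arange(8.0)
crosscorrelation(x, y, 3, mode='dot')  # matches the lagged dot products above
crosscorrelation(x, y, 3)              # normalised (Pearson) variant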
I encountered the same problem some time ago; I paid more attention to the efficiency of the calculation. Referring to the source code of MATLAB's function xcorr.m, I made a simple one:
import numpy as np
from scipy import fftpack
import math
import time

def nextpow2(x):
    if x == 0:
        y = 0
    else:
        y = math.ceil(math.log2(x))
    return y

def xcorr(x, y, maxlag):
    m = max(len(x), len(y))
    mx1 = min(maxlag, m - 1)
    ceilLog2 = nextpow2(2 * m - 1)
    m2 = 2 ** ceilLog2
    X = fftpack.fft(x, m2)
    Y = fftpack.fft(y, m2)
    c1 = np.real(fftpack.ifft(X * np.conj(Y)))
    index1 = np.arange(1, mx1+1, 1) + (m2 - mx1 - 1)
    index2 = np.arange(1, mx1+2, 1) - 1
    c = np.hstack((c1[index1], c1[index2]))
    return c

if __name__ == "__main__":
    s = time.perf_counter()   # time.clock() was removed in Python 3.8
    a = [1, 2, 3, 4, 5]
    b = [6, 7, 8, 9, 10]
    c = xcorr(a, b, 3)
    e = time.perf_counter()
    print(c)
    print(e - s)              # elapsed time (the original printed e - c by mistake)
Take the results of a certain run as an example:
[ 29. 56. 90. 130. 110. 86. 59.]
0.0001745000000001884
comparing with MATLAB code:
clear;close all;clc
tic
a = [1, 2, 3, 4, 5];
b = [6, 7, 8, 9, 10];
c = xcorr(a, b, 3)
toc
29.0000 56.0000 90.0000 130.0000 110.0000 86.0000 59.0000
Elapsed time is 0.000279 seconds.
If anyone can give a strict mathematical derivation of this, that would be very helpful.
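(A sketch of why this works: by the correlation theorem, IFFT(FFT(x) * conj(FFT(y))) gives the circular cross-correlation of the zero-padded signals. Padding to a length m2 >= 2*m - 1, rounded up to a power of two for FFT speed, makes the circular result coincide with the linear cross-correlation; negative lags wrap around to the end of the length-m2 array, which is why index1 pulls lags -mx1..-1 from the tail and index2 pulls lags 0..mx1 from the head.)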
I think I have found a solution, as I was facing the same problem:
If you have two vectors x and y of any length N, and want a cross-correlation with a window of fixed length window, you can do:
x = <some_data>
y = <some_data>
# Trim your variables
x_short = x[window:]
y_short = y[window:]
# do two cross-correlations, lagging x and y respectively
left_xcorr = np.correlate(x, y_short)   # defaults to mode='valid'
right_xcorr = np.correlate(x_short, y)  # defaults to mode='valid'
# combine the cross-correlations
# note the first value of right_xcorr is the same as the last of left_xcorr
xcorr = np.concatenate((left_xcorr, right_xcorr[1:]))  # concatenate takes a tuple
Remember you might need to normalise the variables if you want a bounded correlation.
Here is another answer, sourced from here; it seems marginally faster than np.correlate and has the benefit of returning a normalised correlation (the original posted these as methods with a spurious self parameter; they are plain functions here):
import numpy as np

def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

def xcorr(x, y):
    N = len(x)
    M = len(y)
    meany = np.mean(y)
    stdy = np.std(np.asarray(y))
    tmp = rolling_window(np.asarray(x), M)
    c = np.sum((y - meany) * (tmp - np.reshape(np.mean(tmp, -1), (N - M + 1, 1))), -1) \
        / (M * np.std(tmp, -1) * stdy)
    return c
As I answered here, https://stackoverflow.com/a/47897581/5122657, matplotlib.pyplot.xcorr has the maxlags param. It is actually a wrapper of numpy.correlate, so there is no performance saving. Nevertheless it gives exactly the same result as MATLAB's cross-correlation function. Below I edited the code from matplotlib so that it will return only the correlation. The reason is that if we use matplotlib.pyplot.xcorr as it is, it will return the plot as well. The problem is, if we put complex data types as the arguments into it, we will get a "casting complex to real datatype" warning when matplotlib tries to draw the plot.
import numpy as np

def xcorr(x, y, maxlags=10):
    Nx = len(x)
    if Nx != len(y):
        raise ValueError('x and y must be equal length')
    c = np.correlate(x, y, mode=2)  # mode=2 is the same as mode='full'
    if maxlags is None:
        maxlags = Nx - 1
    if maxlags >= Nx or maxlags < 1:
        raise ValueError('maxlags must be None or strictly positive < %d' % Nx)
    c = c[Nx - 1 - maxlags:Nx + maxlags]
    return c
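For example, a quick check against numpy.correlate (reusing the x array from earlier in this thread):
x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
c = xcorr(x, x, maxlags=3)
# same as np.correlate(x, x, mode='full')[len(x)-1-3 : len(x)+3]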

Reshaping numpy array without using two for loops

I have two numpy arrays
import numpy as np
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
x.shape # output is (50,)
y.shape # output is (50,)
I would like to create a function which returns an array shaped (50,50) such that the first x value x0 is evaluated for all y values, etc.
The current function I am using is fairly complicated, so let's use an easier example. Let's say the function is
def func(x, y):
    return x**2 + y**2
How do I shape this to be a (50,50) array? At the moment, it will output 50 values. Would you use a for loop inside an array?
Something like:
np.array([[func(i,j) for i in x] for j in y])
but without using two for loops. This takes forever to run.
EDIT: It has been requested I share my "complicated" function. Here it goes:
There is a data vector which is a 1D numpy array of 4000 measurements. There is also a "normalized_matrix", which is shaped (4000,4000)---it is nothing special, just a matrix with entry values between 0 and 1, e.g. 0.5567878. These are the two "given" inputs.
My function returns the matrix multiplication product of transpose(datavector) * matrix * datavector, which is a single value.
Now, as you can see in the code, I have initialized two arrays, x and y, which pass through a series of "x parameters" and "y parameters". That is, what does func(x,y) return for value x1 and value y1, i.e. func(x1,y1)?
The shape of matrix1 is (50, 4000, 4000). The shape of matrix2 is (50, 4000, 4000). Ditto for total_matrix.
normalized_matrix is shape (4000,4000) and id_mat is shaped (4000,4000).
normalized_matrix
print(normalized_matrix.shape)  # output (4000,4000)
data_vector = datarr
print(datarr.shape)  # output (4000,)

def func(x, y):
    matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
    matrix2 = y[:, None, None] * id_mat[None, :, :]
    total_matrix = matrix1 + matrix2
    # transpose(datavector) * matrix * datavector
    # by matrix multiplication, equals single value
    return np.array([ np.dot(datarr.T, np.dot(total_matrix, datarr)) ])
If I try to use np.meshgrid(), that is, if I try
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
X, Y = np.meshgrid(x,y)
z = func(X, Y)
I get the following value error: ValueError: operands could not be broadcast together with shapes (50,1,1,50) (1,4000,4000).
reshape in numpy has a different meaning. When you start with a (100,) array and change it to (5,20) or (10,10) 2d arrays, that is 'reshape'. There is a numpy function to do that.
You want to take two 1d arrays and use them to generate a 2d array from a function. This is like taking an outer product of the two, passing all combinations of their values through your function.
Some sort of double loop is one way of doing this, whether it is with an explicit loop or a list comprehension. But speeding this up depends on that function.
For the x**2+y**2 example, it can be 'vectorized' quite easily:
In [40]: x=np.linspace(1e10,1e12,num=10)
In [45]: y=np.linspace(1e5,1e7,num=5)
In [46]: z = x[:,None]**2 + y[None,:]**2
In [47]: z.shape
Out[47]: (10, 5)
This takes advantage of numpy broadcasting. With the None, x is reshaped to (10,1) and y to (1,5), and the + takes an outer sum.
X, Y = np.meshgrid(x, y, indexing='ij') produces two (10,5) arrays that can be used the same way. Look at its docs for other parameters.
So if your more complex function can be written in a way that takes 2d arrays like this, it is easy to 'vectorize'.
But if that function must take 2 scalars, and return another scalar, then you are stuck with some sort of double loop.
A list comprehension form of the double loop is:
np.array([[x1**2+y1**2 for y1 in y] for x1 in x])
Another is:
z = np.empty((10,5))
for i in range(10):
    for j in range(5):
        z[i,j] = x[i]**2 + y[j]**2
This double loop can be sped up somewhat by using np.vectorize. This takes a user defined function, and returns one that can take broadcastable arrays:
In [65]: vprod=np.vectorize(lambda x,y: x**2+y**2)
In [66]: vprod(x[:,None],y[None,:]).shape
Out[66]: (10, 5)
Tests that I've done in the past show that vectorize can improve on the list comprehension route by something like 20%, but the improvement is nothing like writing your function to work with 2d arrays in the first place.
By the way, this sort of 'vectorization' question has been asked many times on SO under the numpy tag. Beyond these broad examples, we can't help you without knowing more about that more complicated function. As long as it is a black box that takes scalars, the best we can help you with is np.vectorize. And you still need to understand broadcasting (with or without meshgrid help).
I think there is a better way, it is right on the tip of my tongue, but as an interim measure:
You are operating on 1x2 windows of a meshgrid. You can use as_strided from numpy.lib.stride_tricks to rearrange the meshgrid into two-element windows, then apply your function to the resultant array. I like to use a generic nd solution, sliding_windows (http://www.johnvinyard.com/blog/?p=268) (Not mine) to transform the array.
import numpy as np

a = np.array([1, 2, 3])
b = np.array([.1, .2, .3])
z = np.array(np.meshgrid(a, b))

def foo(pair):        # receives a length-2 slice [x, y] from apply_along_axis
    x, y = pair       # (the original used Python 2 tuple-parameter unpacking)
    return x + y
>>> z.shape
(2, 3, 3)
>>> t = sliding_window(z, (2,1,1))
>>> t
array([[ 1. ,  0.1],
       [ 2. ,  0.1],
       [ 3. ,  0.1],
       [ 1. ,  0.2],
       [ 2. ,  0.2],
       [ 3. ,  0.2],
       [ 1. ,  0.3],
       [ 2. ,  0.3],
       [ 3. ,  0.3]])
>>> v = np.apply_along_axis(foo, 1, t)
>>> v
array([ 1.1, 2.1, 3.1, 1.2, 2.2, 3.2, 1.3, 2.3, 3.3])
>>> v.reshape((len(a), len(b)))
array([[ 1.1,  2.1,  3.1],
       [ 1.2,  2.2,  3.2],
       [ 1.3,  2.3,  3.3]])
>>>
This should be somewhat faster.
You may need to modify your function's argument signature.
If the link to the johnvinyard.com blog breaks, I've posted the sliding_window implementation in other SO answers - https://stackoverflow.com/a/22749434/2823755
Search around and you'll find many other tricky as_strided solutions.
In response to your edited question:
normalized_matrix
print(normalized_matrix.shape)  # output (4000,4000)
data_vector = datarr
print(datarr.shape)  # output (4000,)

def func(x, y):
    matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
    matrix2 = y[:, None, None] * id_mat[None, :, :]
    total_matrix = matrix1 + matrix2
    # transpose(datavector) * matrix * datavector
    # by matrix multiplication, equals single value
    # return np.array([ np.dot(datarr.T, np.dot(total_matrix, datarr))])
    return np.einsum('j,ijk,k->i', datarr, total_matrix, datarr)
Since datarr is shape (4000,), transpose does nothing. I believe you want the result of the 2 dots to be shape (50,). I'm suggesting using einsum. But it can be done with tensordot, or I think even np.dot(np.dot(total_matrix, datarr),datarr). Test the expression with smaller arrays, focusing on getting the shapes right.
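For instance, a minimal shape check with small stand-ins for the real data (the sizes and identity matrices here are placeholders, not the actual arrays):
import numpy as np
datarr = np.arange(4.0)        # stands in for the (4000,) data vector
normalized_matrix = np.eye(4)  # stands in for the (4000,4000) matrix
id_mat = np.eye(4)
x = np.linspace(1, 2, num=3)
y = np.linspace(3, 4, num=3)
total_matrix = x[:, None, None] * normalized_matrix + y[:, None, None] * id_mat
out = np.einsum('j,ijk,k->i', datarr, total_matrix, datarr)
print(out.shape)               # (3,) -- one scalar per (x[i], y[i]) pair
With the real sizes: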
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
z = func(x,y)
# X, Y = np.meshgrid(x,y)
# z = func(X, Y)
X, Y is wrong. func takes x and y that are 1d. Notice how you expand the dimensions with [:, None, None]. Also you aren't creating a 2d array from an outer combination of x and y. None of the arrays in func is (50,50) or (50,50,...). The higher dimensions are provided by normalized_matrix and id_mat.
When showing us the ValueError you should also indicate where in your code it occurred. Otherwise we have to guess, or recreate the code ourselves.
In fact when I run my edited func(X,Y), I get this error:
----> 2 matrix1 = x [:, None, None] * normalized_matrix[None, :, :]
3 matrix2 = y[:, None, None] * id_mat[None, :, :]
4 total_matrix = matrix1 + matrix2
5 # transpose(datavector) * matrix * datavector
ValueError: operands could not be broadcast together with shapes (50,1,1,50) (1,400,400)
See, the error occurs right at the start. normalized_matrix is expanded to (1,400,400) [I'm using smaller examples]. The (50,50) X is expanded to (50,1,1,50). x expands to (50,1,1), which broadcasts just fine.
To address the edit and the broadcasting error in the edit:
Inside your function you are adding dimensions to arrays to try to get them to broadcast.
matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
This expression looks like you want to broadcast a 1d array with a 2d array.
The results of your meshgrid are two 2d arrays:
X,Y = np.meshgrid(x,y)
>>> X.shape, Y.shape
((50, 50), (50, 50))
>>>
When you try to use X in your broadcasting expression, the dimensions don't line up; that is what causes the ValueError - refer to the General Broadcasting Rules:
>>> x1 = X[:, np.newaxis, np.newaxis]
>>> nm = normalized_matrix[np.newaxis, :, :]
>>> x1.shape
(50, 1, 1, 50)
>>> nm.shape
(1, 4000, 4000)
>>>
You're on the right track with your list comprehension, you just need to add in an extra level of iteration:
np.array([[func(i,j) for i in x] for j in y])

Efficiently generating a Cauchy matrix from two Numpy arrays

A Cauchy matrix (Wikipedia article) is a matrix determined by two vectors (arrays of numbers). Given two vectors x and y, the Cauchy matrix C generated by them is defined entry-wise as
C[i][j] := 1/(x[i] - y[j])
Given two Numpy arrays x and y, what is an efficient way to generate a Cauchy matrix?
This is the most efficient way I found, using array broadcasting to take advantage of vectorization.
1.0 / (x.reshape((-1,1)) - y)
Edit: @HYRY and @shx2 have suggested that, instead of x.reshape((-1,1)), which makes a copy, you can use x[:,np.newaxis], which returns a view of the same array. @HYRY also suggests 1.0/np.subtract.outer(x,y), which is slightly slower for me but maybe more explicit.
Example:
>>> x = numpy.array([1,2,3,4]) #x
>>> y = numpy.array([5,6,7]) #y
>>>
>>> #transpose x, to nx1
... x = x.reshape((-1,1))
>>> x
array([[1],
       [2],
       [3],
       [4]])
>>>
>>> #array of differences x[i] - y[j]
... #an nx1 array minus a 1xm array is an nxm array
... diff_matrix = x-y
>>> diff_matrix
array([[-4, -5, -6],
       [-3, -4, -5],
       [-2, -3, -4],
       [-1, -2, -3]])
>>>
>>> #apply the multiplicative inverse to each entry
... cauchym = 1.0/diff_matrix
>>> cauchym
array([[-0.25      , -0.2       , -0.16666667],
       [-0.33333333, -0.25      , -0.2       ],
       [-0.5       , -0.33333333, -0.25      ],
       [-1.        , -0.5       , -0.33333333]])
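For reference, the two alternatives mentioned in the edit above look like this (a short sketch; both give the same matrix as the reshape approach):
import numpy as np
x = np.array([1., 2., 3., 4.])
y = np.array([5., 6., 7.])
cauchym1 = 1.0 / (x[:, np.newaxis] - y)    # newaxis gives a view, no copy of x
cauchym2 = 1.0 / np.subtract.outer(x, y)   # same result, arguably more explicit
np.allclose(cauchym1, cauchym2)            # True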
I tried a few other methods, all of which were significantly slower.
This is the naive approach, using a nested list comprehension:
cauchym = numpy.array([[ 1.0/(x_i-y_j) for y_j in y] for x_i in x])
This one generates the matrix as a 1-dimensional array (saving the cost of nested Python lists) and reshapes it to a matrix afterward. It also moves the division to a single Numpy operation:
cauchym = 1.0/numpy.array([(x_i-y_j) for x_i in x for y_j in y]).reshape([len(x),len(y)])
Using numpy.repeat and numpy.tile (which respectively tile the array horizontally and vertically) also works, but makes unnecessary copies:
lenx = len(x)
leny = len(y)
xm = numpy.repeat(x, leny)  # the i'th row is x_i
ym = numpy.tile(y, lenx)
cauchym = (1.0/(xm-ym)).reshape([lenx, leny])
I created a function; hope it helps you understand it in a better way.
# Creating a function in order to form a cauchy matrix
def cauchy_matrix(arr1, arr2):
    """
    Enter two arrays in order to get a cauchy matrix. The input arrays should be 1-D.

    arr1 = first 1-D array
    arr2 = second 1-D array

    It returns the cauchy matrix having shape m*n, where m is the size of arr1 and n is the size of arr2.
    """
    my_list = []
    try:
        for i in range(len(arr1)):
            for j in range(len(arr2)):
                z = 1/(arr1[i] - arr2[j])
                my_list.append(z)
        return np.array(my_list).reshape(arr1.shape[0], arr2.shape[0])
    except ZeroDivisionError:
        print("Check if both arrays have '0' as one of their elements. One array can have a zero, but both arrays having '0' is not acceptable!")
