Cross correlation in C++ and Python

While trying to do a cross correlation in C++ with 1D vectors, already having a Python example that uses 2D arrays, I noticed that the end results of the two methods differ.
Since I do not know much about FFT and cross correlation, I am wondering if that is normal.
The input data is the same, but I use it differently: as 1D vectors in C++ and as 2D arrays in Python.
On the C++ side I have two 1D vectors, one of size 144 and one of size 6.
The process is the following:
apply an FFT to the bigger one and return the transform, zero-padded to size 256
zero-pad the second one to the same size and apply an FFT
do the actual cross correlation: result vector = first transform * conjugate(second transform)
apply an IFFT to the result
The result is a vector of size 256 (16x16 if I have to see it as 2D) with values that can go up to 40.7079 or down to 2.0316e-320.
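For reference, the same 1D steps can be written in a few lines of numpy; this is only a sketch with placeholder input values, not the actual data, but it can be used to sanity-check the C++ output:

import numpy as np

big = np.random.rand(144)                   # placeholder for the size-144 vector
small = np.random.rand(6)                   # placeholder for the size-6 vector

n = 256                                     # FFT size used on the C++ side
BIG = np.fft.fft(big, n)                    # FFT of the big vector, zero-padded to 256
SMALL = np.fft.fft(small, n)                # FFT of the zero-padded small vector
corr = np.fft.ifft(BIG * np.conj(SMALL))    # multiply by the conjugate, then inverse FFT
corr = corr.real                            # imaginary parts are ~0 for real inputs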
On the Python side I have two 2D arrays (one 12x12, one 2x3) and I am doing only one step:
correlation_result = signal.correlate2d(bigArray, smallArray)
The result in this case is a 13x14 2D array with values that are not as extreme as the ones on the C++ side.
Is that normal, is something missing, or do I have to do something else to them?
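For what it is worth, the two approaches can be reconciled in Python. The sketch below (again with placeholder input values) repeats the FFT steps in 2D with the padded size matching the full correlation output, then compares against signal.correlate2d; the FFT result matches up to the circular shift introduced by the FFT, which is one reason the raw numbers look so different:

import numpy as np
from scipy import signal

big = np.random.rand(12, 12)                # placeholder 12x12 array
small = np.random.rand(2, 3)                # placeholder 2x3 array

# direct 2D cross correlation (what the Python code does); shape (13, 14)
direct = signal.correlate2d(big, small)

# the same correlation via FFT, mirroring the C++ steps in 2D
shape = (big.shape[0] + small.shape[0] - 1,  # 13
         big.shape[1] + small.shape[1] - 1)  # 14
via_fft = np.fft.ifft2(np.fft.fft2(big, shape) * np.conj(np.fft.fft2(small, shape))).real

# undo the circular shift so the two layouts line up
via_fft = np.roll(via_fft, (small.shape[0] - 1, small.shape[1] - 1), axis=(0, 1))
print(np.allclose(direct, via_fft))          # True

Padding to 256 instead of 13x14, reading the output without undoing that shift, and (depending on the FFT library) an unnormalized inverse transform can all make the raw numbers look different even though the underlying correlation is the same.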

Related

Vectorizing Computation of Cosine Similarity Matrix

I have a matrix of 63695 row vectors of dim 384.
I would like to compute the cosine similarity matrix for it.
I was thinking of vectorizing it.
How would one go about that?
If you look at scikit-learn's source code you will see that X and Y are first normalized and then X_norm @ Y_norm.T (a dot product) is returned. Or, if as in your case no Y exists, it is X_norm @ X_norm.T.
Normalizing and transposing can be neglected when looking at the runtime, but the multiplication of a (63695 x 384) matrix with its transpose takes somewhere in the neighbourhood of 63695*63695 (elements in the result matrix) times 384 (element-wise multiplications to calculate one element) operations, so something like 63695*63695*384 = 1,557,908,361,600 multiplications, plus roughly the same number of additions.
And as you already mentioned, it requires 4 (bytes per float32) * 63695 * 63695 = ~16.2 GB of memory to hold that result matrix.
Do you really need that enormous matrix? What type of data are you handling and what are you trying to do? If we are talking about e.g. vector representations of text data then you should look at removing duplicates, processing it in chunks or reducing the dimensionality before analysing similarity. If you are looking for something like ranking these cosine similarities and finding the k most similar ones, you'd be much better off using algorithms for finding similar data points instead of doing it all by hand yourself.
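If the full matrix really is needed (and fits in memory), a minimal vectorized sketch of the normalize-then-dot-product approach described above looks like this, shown on a small random stand-in for the real data:

import numpy as np

X = np.random.rand(1000, 384).astype(np.float32)    # stand-in for the (63695, 384) matrix

# normalize every row to unit length; the cosine similarity matrix
# is then just the dot product of the normalized matrix with its transpose
X_norm = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = X_norm @ X_norm.T                              # shape (1000, 1000)

This is essentially what sklearn.metrics.pairwise.cosine_similarity(X) does internally.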

Python, fast computation of rolling percentile

Given a multidimensional array, I want to compute a rolling percentile over one of its axes, with the rolling windows truncated near the boundaries of the array. Below is a minimal example implementation using only numpy via np.nanpercentile() applied to stacked, rolled (through np.roll()) arrays. However, the input array may be very large (~ 1 GB or more), so two issues arise:
For the current implementation, the stacked, rolled array may
not fit into RAM. This is avoidable with for-loops over all axes
unaffected by the rolling, but that may be slow.
Even fully vectorized (as below), the computation time is quite long,
understandably due to the sheer amount of computations performed.
Questions: Is there a more efficient Python implementation of a rolling percentile (with an axis/axes argument or the like, and with truncated windows near the boundaries)? If not, how could the computation be sped up (and, if possible, without exceeding the RAM)? C code called from Python? Computation of percentiles at fewer "central" points, and approximation in between via (e.g. linear) interpolation? Other ideas?
Related post (implementing rolling percentiles): How to compute moving (or rolling, if you will) percentile/quantile for a 1d array in numpy? Issues are:
pandas implementation via pd.Series().rolling().quantile() works only for pd.Series or pd.DataFrame objects, not multidimensional (4D or arbitrary D) arrays;
implementation via np.lib.stride_tricks.as_strided() with np.nanpercentile() is similar to the one below and should not be much faster, given that np.nanpercentile() is the speed bottleneck (see below).
Minimal example implementation:
import numpy as np
np.random.seed(100)
# random array of numbers
a = np.random.rand(10000,1,70,70)
# size of rolling window
n_window = 150
# percentile to compute
p = 0.7
# NaN values to prepend/append to array before rolling
nan_temp = np.full(tuple([n_window] + list(np.array(a.shape)[1:])), fill_value=np.nan)
# prepend and append NaN values to array
a_temp = np.concatenate((nan_temp, a, nan_temp), axis=0)
# roll array, stack rolled arrays along new dimension, compute percentile (ignoring NaNs) using np.nanpercentile()
res = np.nanpercentile(np.concatenate([np.roll(a_temp, shift=i, axis=0)[...,None] for i in range(-n_window, n_window+1)],axis=-1),p*100,axis=-1)
# cut away the prepended/appended NaN values
res = res[n_window:-n_window]
Computation times (in seconds), example (for the case of a having a shape of (1000,1,70,70) instead of (10000,1,70,70)):
create random array: 0.0688176155090332
prepend/append NaN values: 0.03478217124938965
stack rolled arrays: 38.17830514907837
compute nanpercentile: 1145.1418626308441
cut out result: 0.0004646778106689453
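One way to tackle the RAM issue (a sketch of the for-loop idea mentioned above, not a speedup of np.nanpercentile itself, which remains the bottleneck) is to process the array in chunks along one of the axes unaffected by the rolling. The helper below is hypothetical and assumes the rolling axis is axis 0 and that q is given in percent:

import numpy as np

def rolling_nanpercentile_chunked(a, n_window, q, chunk=8):
    # NaN-pad along the rolling axis, then work on blocks of the last axis
    # so the stacked array never holds more than `chunk` columns at a time
    nan_pad = np.full((n_window,) + a.shape[1:], np.nan)
    a_pad = np.concatenate((nan_pad, a, nan_pad), axis=0)
    out = np.empty(a.shape, dtype=float)
    for start in range(0, a.shape[-1], chunk):
        sl = slice(start, start + chunk)
        stacked = np.stack([np.roll(a_pad[..., sl], shift=i, axis=0)
                            for i in range(-n_window, n_window + 1)], axis=-1)
        out[..., sl] = np.nanpercentile(stacked, q, axis=-1)[n_window:-n_window]
    return out

# small example; same result as the vectorized version, smaller peak memory
res_chunked = rolling_nanpercentile_chunked(np.random.rand(200, 1, 8, 8), 20, 70)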

Remove column from a 3D array with varied length for every first-level index (Python)

I have an np.ndarray with ~3000 trajectories. Each trajectory has x, y and z coordinates and a different length, between 150 and 250 points in time. Now I want to remove the z coordinate for all of these trajectories.
So arr.shape gives me (3000,) (3000 trajectories) and, for example, arr[0].shape yields (3, 178) (three coordinate axes and 178 values).
I have found multiple explanations for removing lines in 2D-arrays and I found np.delete(arr[0], 2, axis=0) working for me. However, I don't just want to delete the z coordinates for the first trajectory; I want to do this for every trajectory.
If I want to do this with a loop over arr[i] I would need to know the exact length of every trajectory (it doesn't suit my purpose to just create the array with the length of the longest and fill it up with zeroes).
TL;DR: So how do I get from a ndarray with [amountOfTrajectories][3][value] to [amountOfTrajectories][2][value]?
The purpose is to use these trajectories as labels for a neural net that creates trajectories. So I guess it's an entirely new question, but is the shape I'm asking for suitable for use as labels in TensorFlow?
Also: what would have been a better title, and what terms could I use to find results for this with Google? I just started with Python and I'm afraid I'm missing some keywords here...
If this comes from loadmat, the source is probably a MATLAB workspace with a cell which contains these matrices.
loadmat has evidently created a 1d array of object dtype (the equivalent of a cell, with squeeze on).
A 1d object array is similar to a Python list - it contains pointers to arrays elsewhere in memory. Most operations on such an array use Python iteration. Iterating on the equivalent list (arr.tolist()) is usually faster.
alist = [a[:2,:] for a in arr]
should give you a list of arrays, each of shape (2, n) (n varying). This makes new arrays - but then so does np.delete.
You can't operate on all arrays in the 1d array with one operation. It has to be iterative.
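As a concrete sketch of that iteration (with made-up trajectory lengths), and of how to get back an object array instead of a list if one is needed:

import numpy as np

# stand-in for the loaded data: an object array of (3, n) trajectories, n varying
arr = np.empty(3, dtype=object)
arr[0] = np.random.rand(3, 178)
arr[1] = np.random.rand(3, 201)
arr[2] = np.random.rand(3, 150)

# drop the z row (index 2) of every trajectory by slicing
alist = [a[:2, :] for a in arr]
print(alist[0].shape)                 # (2, 178)

# wrap back into a 1d object array if needed
out = np.empty(len(alist), dtype=object)
for i, a in enumerate(alist):
    out[i] = a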

3d image compression with numpy

I have a 3d numpy array representing an object with cells as voxels and the voxels having values from 1 to 10. I would like to compress the image (a) to make it smaller and (b) to get a quick idea later on of how complex the image is by compressing it to a minimum level of agreement with the original image.
I have used SVD to do this with 2D images, seeing how many singular values were required, but it seems to have difficulty with 3D ones. If e.g. I look at the diagonal terms in the S matrix, they are all zero, whereas I was expecting singular values.
Is there any way I can use svd to compress 3D arrays (e.g. flattening in some way)? Or are other methods more appropriate? If necessary I could probably simplify the voxel values to 0 or 1.
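For reference, the 2D SVD truncation mentioned in the question can be sketched in numpy as follows (with a random stand-in image):

import numpy as np

img = np.random.rand(64, 64)                  # stand-in 2D image
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 10                                        # number of singular values kept
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k reconstruction
error = np.linalg.norm(img - approx) / np.linalg.norm(img)

The answers below extend this idea to three dimensions.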
You could essentially apply the same principle to the 3D data without flattening it. There are algorithms to decompose N-dimensional arrays (tensors), such as CP-ALS (CP decomposition via Alternating Least Squares), and this is implemented in the package sktensor. You can use the package to decompose the tensor given a rank:
from sktensor import dtensor, cp_als
T = dtensor(X)
rank = 5
P, fit, itr, exectimes = cp_als(T, rank, init='random')
With X being your data. You could then use the weights (weights = P.lmbda) together with the factor matrices to reconstruct an approximation of the original array X and calculate the reconstruction error, as you would do with SVD.
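A rough sketch of that reconstruction step, assuming the returned object P exposes the factor matrices as P.U and the weights as P.lmbda (attribute names may differ between sktensor versions):

import numpy as np

U0, U1, U2 = P.U                                    # one factor matrix per dimension
X_hat = np.einsum('r,ir,jr,kr->ijk', P.lmbda, U0, U1, U2)
rel_error = np.linalg.norm(X - X_hat) / np.linalg.norm(X)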
Other decomposition methods for 3D data (or in general tensors) include the Tucker Decomposition or the Canonical Decomposition (also available in the same package).
It is not directly a 3D SVD, but all the methods above can be used to analyze the principal components of your data.
For completeness, diagrams of the Tucker decomposition and of the decomposition that CP-ALS (an optimization algorithm) tries to obtain can be found at the links below (the images are not reproduced here).
Image credits to:
1- http://www.slideshare.net/KoheiHayashi1/talk-in-jokyonokai-12989223
2- http://www.bsp.brain.riken.jp/~zhougx/tensor.html
What you want is a higher-order SVD / Tucker decomposition.
In the 3D case, you will get three projection matrices (one for each dimension) and a low rank core tensor (a 3D array).
You can do this easily using TensorLy:
from tensorly.decomposition import tucker
core, factors = tucker(tensor, ranks=[2, 3, 4])
Here, core will have shape (2, 3, 4) and len(factors) will be 3, one factor for each dimension.
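To get a compressed approximation back in the original shape, the core can be multiplied back along each mode. TensorLy provides tucker_to_tensor for this (its exact signature has changed between versions), or it can be done with plain numpy, assuming the default numpy backend so that core and factors are ordinary arrays:

import numpy as np

# X ~ core multiplied along mode 0 by factors[0], mode 1 by factors[1], mode 2 by factors[2]
U0, U1, U2 = factors                                # shapes (d0, 2), (d1, 3), (d2, 4)
approx = np.einsum('abc,ia,jb,kc->ijk', core, U0, U1, U2)
rel_error = np.linalg.norm(tensor - approx) / np.linalg.norm(tensor)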

Python: Reshaping arrays and lists

I have a numpy ndarray object with the following shape:
(3, 256, 170, 256).
So, basically this represents an array of 3-dimensional vectors. The dimension of the vector is the first axis, as that makes it possible to write something like array[0] for the relevant vector component.
Now, I am trying to use scipy's pdist function, which computes the distance between the entries. So, I need to reshape this array into a two-dimensional matrix where the number of rows is 256*170*256 and the number of columns is 3, and pdist should then return the matrix where each element is the squared distance between the corresponding 3-dimensional vectors (if I have interpreted the documentation correctly).
Can someone tell me how I can get a view into this numpy array so that I can generate this matrix? I do not want to copy the data again (as these matrices can be quite large), so I am looking for an efficient solution.
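A hedged sketch of the reshaping step: because the original array is contiguous, arr.reshape(3, -1) is a view, and so is its transpose, so no data is copied at this point. Note that pdist returns a condensed 1-D array of pairwise distances (squareform turns it into a square matrix), that it may make its own internal copy of its input, and that with 256*170*256 (about 11 million) vectors the full pairwise output would be far too large to hold in memory:

import numpy as np
from scipy.spatial.distance import pdist, squareform

arr = np.random.rand(3, 4, 5, 6)            # small stand-in for the (3, 256, 170, 256) array

points = arr.reshape(3, -1).T               # shape (4*5*6, 3); still a view of arr
print(points.base is not None)              # True: no copy was made

d = pdist(points, metric='sqeuclidean')     # condensed distances, length n*(n-1)//2
dmat = squareform(d)                        # square (n, n) matrix if needed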
