I have a 3D NumPy array representing an object, with cells as voxels and voxel values ranging from 1 to 10. I would like to compress the image (a) to make it smaller and (b) to get a quick idea later on of how complex the image is, by compressing it to a minimum level of agreement with the original image.
I have used SVD to do this with 2D images, checking how many singular values were required, but it seems to have difficulty with 3D arrays. If, e.g., I look at the diagonal terms of the S matrix, they are all zero, where I was expecting singular values.
Is there any way I can use SVD to compress 3D arrays (e.g. by flattening them in some way)? Or are other methods more appropriate? If necessary I could probably simplify the voxel values to 0 or 1.
You can essentially apply the same principle to the 3D data without flattening it. There are algorithms to decompose N-dimensional arrays (tensors), such as CP-ALS (the CP decomposition computed via Alternating Least Squares), and this is implemented in the package sktensor. You can use the package to decompose the tensor given a rank:
from sktensor import dtensor, cp_als

T = dtensor(X)   # wrap the NumPy array as a dense tensor object
rank = 5         # number of rank-one components to keep
P, fit, itr, exectimes = cp_als(T, rank, init='random')
With X being your data. You can then reconstruct an approximation of X from the decomposition P (whose component weights are available as P.lmbda) and calculate the reconstruction error, as you would do with SVD.
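As a minimal sketch of that error computation (assuming, as in sktensor's ktensor objects, that P.U holds the three factor matrices and P.lmbda the component weights):

import numpy as np

# rebuild the rank-5 approximation from the CP factors
X_hat = np.einsum('ir,jr,kr,r->ijk', P.U[0], P.U[1], P.U[2], P.lmbda)

# relative reconstruction error, analogous to truncated SVD
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)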
Other decomposition methods for 3D data (or in general tensors) include the Tucker Decomposition or the Canonical Decomposition (also available in the same package).
It is not directly a 3D SVD, but all the methods above can be used to analyze the principal components of your data.
Find below (just for completeness) an image of the Tucker decomposition:
And below another image of the decomposition that CP-ALS (an optimization algorithm) tries to obtain:
Image credits to:
1- http://www.slideshare.net/KoheiHayashi1/talk-in-jokyonokai-12989223
2- http://www.bsp.brain.riken.jp/~zhougx/tensor.html
What you want is a higher-order SVD / Tucker decomposition.
In the 3D case, you will get three projection matrices (one for each dimension) and a low rank core tensor (a 3D array).
You can do this easily using TensorLy:
from tensorly.decomposition import tucker
core, factors = tucker(tensor, ranks=[2, 3, 4])
Here, core will have shape (2, 3, 4) and len(factors) will be 3, one factor for each dimension.
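To judge compression quality, you can rebuild the approximation and measure the error. A sketch (note that newer TensorLy versions spell the keyword rank rather than ranks, and tucker_to_tensor may take the pair as a single tuple):

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# hypothetical voxel array with values 1..10
X = np.random.randint(1, 11, size=(20, 20, 20)).astype(float)

core, factors = tucker(tl.tensor(X), rank=[5, 5, 5])

# reconstruct and compute the relative error
X_hat = tl.tucker_to_tensor((core, factors))
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)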
While trying to do a cross-correlation in C++ with 1D vectors, starting from an existing Python example that uses 2D arrays, I have noticed that the end results differ between the two methods.
Since I do not know much about FFTs and cross-correlation, I am wondering whether that is normal.
The input data is the same, but I use it differently: as 1D vectors in C++ and as 2D arrays in Python.
On the C++ side I have two 1D vectors, one of size 144 and one of size 6.
The process is the following:
apply an FFT to the bigger one and return the transform (zero-padded) of size 256
zero-pad the second one (to the size of the bigger one) and apply an FFT
the actual cross-correlation: result vector = the first one * conjugate(second one)
apply an IFFT to the result
The result is a vector of size 256 (16x16 if I view it as 2D) with values that can go up to 40.7079 or down to 2.0316e-320.
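In NumPy terms, this pipeline corresponds roughly to the following (a sketch with hypothetical stand-in arrays):

import numpy as np

big = np.random.rand(12, 12)    # stands in for the 144-element input
small = np.random.rand(2, 3)    # stands in for the 6-element input

# zero-pad both to 16x16 and take the 2D FFT
F_big = np.fft.fft2(big, s=(16, 16))
F_small = np.fft.fft2(small, s=(16, 16))

# cross-correlation via the frequency domain: multiply by the conjugate
result = np.fft.ifft2(F_big * np.conj(F_small)).real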
On the Python side I have two 2D arrays (one 12x12, one 2x3) and I am doing only one step:

correlation_result = signal.correlate2d(bigArray, smallArray)
The result in this case is a 13x14 2D array with values that are not as extreme as the ones I have on the C++ side.
Is that normal, is something missing, or do I have to do something else to them?
My problem is this: I have a GMM model with K multivariate Gaussians, and I also have N samples.
I want to create an N*K NumPy matrix whose [i, k] cell contains the pdf of the k-th Gaussian evaluated on the i-th sample, i.e.

Q[i, k] = N(x_i | mu_k, Sigma_k)

In short, I'm interested in this pdf matrix.
This is what I have now (I'm working with Python):
Q = np.array([scipy.stats.multivariate_normal(mu_t[k], cov_t[k]).pdf(X) for k in range(self.K)]).T
X in the code is a matrix whose rows are my samples.
It works fine on a small toy dataset of low dimension, but the dataset I'm working with is 10,000 28*28 pictures, and on it this line runs extremely slowly...
I want to find a solution that doesn't involve loops but only vector/matrix operations (i.e. vectorization). As far as I understand, SciPy's multivariate_normal cannot take the parameters of more than one Gaussian, though its pdf function can evaluate multiple samples at once.
Does someone have an idea?
I am afraid that the main speed killer in your problem is the inversion and determinant calculation for the cov_t matrices. If you can precalculate these, you can unroll the calculation, use broadcasting to get all combinations of x_i - mu_k, and then evaluate the full formula of the normal distribution in a single vectorized expression.
Try

import numpy as np

# pairwise differences between every sample and every mean: shape (N, K, D)
S = X[:, None, :] - mu_t[None, :, :]

# precalculate the stacked inverses and determinants (the expensive part);
# np.linalg.inv and np.linalg.det operate on the whole (K, D, D) stack at once
cov_t_inv = np.linalg.inv(cov_t)
cov_t_det = np.linalg.det(cov_t)

d = X.shape[1]
Q = np.exp(-0.5 * np.einsum('ikr,krs,iks->ik', S, cov_t_inv, S)) / np.sqrt((2 * np.pi) ** d * cov_t_det)

Here cov_t_inv holds the inverse covariance matrices and cov_t_det their determinants.
I have two sets of exposure-bracketed images of a color chart from two different camera systems, A and B. Each data set, at a given exposure, gives me 24 RGB tuples from the patches on the color chart.
I want to match camera B to camera A through a 3-dimensional transform via an interpolation of these two data sets. This is basically the process of creating a look-up table. The methods for parsing and applying LUTs to existing images are well-documented, but I cannot find good resources on how to analytically create a LUT given two different data sets. I know that the answer involves interpolation through a sparse data set, and could involve something like trilinear interpolation, but I'm not sure about the actual execution.
For example, taking the case of trilinear interpolation, it expects 8 corners, but in the case of matching image A to image B, what do those 8 corners consist of? The closest hits to the given pixel in all dimensions? Searching through an unordered data set for close values seems expensive and not correct.
Overall, I'm looking for some advice on how to proceed to match two images with the data set I've acquired, specifically with a 3d transformation. Preferred tool is Python but it can be anything non-proprietary.
In the first place, you need to establish the correspondences, i.e. associate the patches of the same color in the two images (there is much to say about this, but it is beyond the scope of this answer), and get the RGB color values, preferably by averaging over the patches to reduce random fluctuations.
Now you have a set of N pairs of RGB triples, to which you want to fit a mathematical model,
RGB' = f(RGB)
(f is a vector function of a vector argument).
To begin with, you should try an affine model,
RGB' = A RGB + RGB0
where A is a 3x3 matrix and RGB0 a constant vector. Notice that in the affine case, the equations are independent, like
R' = Ar RGB + R0
G' = Ag RGB + G0
B' = Ab RGB + B0
where Ar, Ag, Ab are vectors.
There are twelve unknown coefficients so you need N≥4. If N>4, you can resort to least-squares fitting, also easy in the linear case.
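A minimal sketch of the least-squares fit in NumPy (rgb_a and rgb_b are hypothetical (N, 3) arrays of matched patch averages from the two cameras):

import numpy as np

# append a constant column so the offset RGB0 is fitted together with A
X = np.hstack([rgb_b, np.ones((rgb_b.shape[0], 1))])   # shape (N, 4)

# one linear least-squares solve handles all three output channels
M, *_ = np.linalg.lstsq(X, rgb_a, rcond=None)           # M has shape (4, 3)

A = M[:3].T       # the 3x3 matrix
rgb0 = M[3]       # the constant vector RGB0

mapped = rgb_b @ A.T + rgb0   # camera B colors mapped toward camera A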
In case the affine model is insufficient, you can try a polynomial model such as a quadratic one (which requires N ≥ 10).
Say I have orthogonal vectors of dimension n. I have two questions:
How to create/initialize n such orthogonal vectors in python using the existing packages (numpy, scipy, pytorch etc)? Ideally these basis vectors should be as random as possible given the constraints, that is avoiding values such as 1,0,-1 as much as possible.
How can I rotate them by an angle alpha so that they remain orthogonal in high dimensional space? Again, I would like to do this in python, preferably using existing implementation in some of the packages.
You could do a QR decomposition of a random matrix and keep the Q factor, discarding R. This will yield a random orthogonal matrix.
Vary one of the Givens angles in the Q components and you get a random rotation.
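A minimal sketch of that first step with NumPy (the sign correction makes the result uniformly distributed over the orthogonal group):

import numpy as np

n = 5
Q, R = np.linalg.qr(np.random.randn(n, n))

# flip column signs so the distribution is uniform (Haar) over O(n)
Q *= np.sign(np.diag(R))

assert np.allclose(Q.T @ Q, np.eye(n))   # columns are orthonormal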
I have an answer to your first question and some thoughts on how to approach the second.
1.
import numpy as np

# let's say we're working in 5-D space
n = 5

# build a set of orthogonal basis vectors via Gram-Schmidt
basis_vectors = []
for _ in range(n):
    vector = np.random.randn(n)
    for basis_vector in basis_vectors:
        # subtract the projection onto each existing basis vector
        vector -= (basis_vector @ vector) / (basis_vector @ basis_vector) * basis_vector
    # uncomment the following line to make the basis orthonormal
    # vector /= np.linalg.norm(vector)
    basis_vectors.append(vector)

# sanity check: every pair of basis vectors is orthogonal
for a_i in range(n):
    for b_i in range(a_i + 1, n):
        assert np.allclose(basis_vectors[a_i] @ basis_vectors[b_i], 0)
Because you want to rotate both vectors in the same manner, there must be a way to preserve information on the way each rotation is carried out (e.g. rotation matrix, rotation quaternion).
Preexisting implementations of 3D Rotation matrices include the Scipy function scipy.spatial.transform.Rotation.from_rotvec and Python's quaternion module (see henneray's answer), but these are only for 3D vectors. Unless I've overlooked something, it'd be necessary to implement ND rotation from scratch.
Here's a general outline of the steps I would take:
Find 2 linearly independent ND basis vectors of the 2D plane in which you want to rotate the two vectors. (the vectors you want to rotate, a and b, aren't necessarily on this plane)
Find the remaining (N-2)D basis vectors that are linearly independent of these first 2 vectors. Combined, the N basis vectors should span the ND space.
Break up each of the two N-D orthogonal vectors you want to rotate into the sum of two vectors: 1) the vectors' projections onto the 2D plane you've constructed and 2) the "remainder" of the vector that doesn't fall on the 2D plane. Set this "remainder" aside for now.
Perform a change of basis on the projected N-D vectors so that they can be expressed as the product of a 2D vector and an Nx2 matrix, which has its columns set to each of the corresponding basis vectors calculated. Keep in mind that the 2D vector is now in a modified coordinate space, not the original.
Construct the 2D rotation matrix corresponding to the desired rotation within the 2D plane identified in the first step. Perform the rotation transformation on the 2D vectors.
Transform the rotated 2D vectors back into ND vectors in the main coordinate system by multiplying them by the Nx2 matrix.
Add the "remainder" set aside earlier back to the mapped ND vector.
The resulting two vectors have been rotated by an arbitrary angle on a particular 2D plane, but maintain orthogonality.
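As a compact alternative to carrying out these steps by hand, a rotation by an angle alpha in the plane spanned by two orthonormal vectors u and v can be written directly as an NxN matrix (a sketch; u, v, and alpha are hypothetical inputs):

import numpy as np

def plane_rotation(u, v, alpha):
    """Rotation matrix by angle alpha in the plane spanned by u and v."""
    # orthonormalize the plane basis, in case u and v are only linearly independent
    u = u / np.linalg.norm(u)
    v = v - (u @ v) * u
    v = v / np.linalg.norm(v)
    eye = np.eye(len(u))
    return (eye
            + (np.cos(alpha) - 1.0) * (np.outer(u, u) + np.outer(v, v))
            + np.sin(alpha) * (np.outer(v, u) - np.outer(u, v)))

Because the result is orthogonal, applying it to both vectors rotates them within that plane while leaving their mutual angle, and hence their orthogonality, intact.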
I hope these ideas help you. Take care.
I found a SciPy function that can do 1, ortho_group; I'm still wondering about 2.
>>> from scipy.stats import ortho_group
>>> m = ortho_group.rvs(dim=4)
>>> m
array([[-0.25952499, 0.435163 , 0.04561972, 0.86092902],
[-0.44123728, -0.38814758, -0.80217271, 0.10568846],
[ 0.16909943, -0.80707234, 0.35548632, 0.44007851],
[-0.8422362 , -0.0927839 , 0.47756387, -0.23229737]])
>>> m.dot(m.T)
array([[ 1.00000000e+00, -1.68203864e-16, 1.75471554e-16,
9.74154717e-17],
[-1.68203864e-16, 1.00000000e+00, -1.18506045e-16,
-1.81879209e-16],
[ 1.75471554e-16, -1.18506045e-16, 1.00000000e+00,
1.16692720e-16],
[ 9.74154717e-17, -1.81879209e-16, 1.16692720e-16,
1.00000000e+00]])
I have two M x N matrices which I construct after extracting data from images. Both vectors have lengthy first rows, and after the 3rd row each row shrinks to a single element.
For example, a raw vector looks like this:
1,23,2,5,6,2,2,6,2,
12,4,5,5,
1,2,4,
1,
2,
2
:
Both vectors have a similar pattern: the first three rows are lengthy, and the rows thin out as they progress. To do cosine similarity, I was thinking of using a padding technique to add zeros and make these two vectors N x N. I looked at Python options for cosine similarity, but some examples were using a package called NumPy. I couldn't figure out how exactly NumPy can do this type of padding and carry out a cosine similarity. Any guidance would be greatly appreciated.
If both arrays have the same dimensions, I would flatten them using NumPy. NumPy (and SciPy) is a powerful scientific computing tool that makes matrix manipulations much easier.
Here is an example of how I would do it with NumPy and SciPy:
import numpy as np
from scipy.spatial import distance
A = np.array([[1,23,2,5,6,2,2,6,2],[12,4,5,5],[1,2,4],[1],[2],[2]], dtype=object)
B = np.array([[1,23,2,5,6,2,2,6,2],[12,4,5,5],[1,2,4],[1],[2],[2]], dtype=object)
Aflat = np.hstack(A)
Bflat = np.hstack(B)
dist = distance.cosine(Aflat, Bflat)
The result here is dist = 1.10e-16 (i.e., 0).
Note that I've used dtype=object here because that's the only way I know to store rows of different lengths in a NumPy array. That's also why I later used hstack() to flatten the array (instead of the more common flatten() function).
I would make them into a scipy sparse matrix (http://docs.scipy.org/doc/scipy/reference/sparse.html) and then run cosine similarity from the scikit learn module.
from scipy import sparse
from sklearn.metrics import pairwise_distances

sparse_matrix = sparse.csr_matrix(your_np_array)
distance_matrix = pairwise_distances(sparse_matrix, metric="cosine")
Why can't you just run a nested loop over both jagged lists (presumably), accumulating each row's dot product and using the total as a similarity measure? This assumes that the jagged dimensions are identical.
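A minimal sketch of that idea (assuming a and b are jagged lists of lists with identical row lengths):

import numpy as np

def jagged_cosine(a, b):
    # accumulate the dot product and squared norms row by row
    dot = sum(np.dot(row_a, row_b) for row_a, row_b in zip(a, b))
    norm_a = np.sqrt(sum(np.dot(r, r) for r in a))
    norm_b = np.sqrt(sum(np.dot(r, r) for r in b))
    return dot / (norm_a * norm_b)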
Although I'm not quite sure how you are getting a jagged array from a bitmap image (I would have assumed it would be a proper dense matrix of MxN form), or how the jagged array of arrays above is meant to represent an MxN matrix/image, and therefore how padding the data with zeros would make sense. If this were a sparse matrix representation, one would expect row/col information annotated with the values.