I am new to coding and Python, and I would be grateful if someone could help me with this problem. I want to define a 200x1 array in which every cell is an 8x17 matrix, and every cell of that matrix is a 1x3 vector.
The values in the 1x3 vectors are generated randomly (from different sets of data). I want to run a genetic algorithm, and this structure would define the 200 members of my initial population.
I have tried this in MATLAB using cell arrays, but they significantly slow down my code.
This is what I have tried in MATLAB:
P = cell(200, 1);
for i = 1:200
    P{i,1} = cell(8, 17);
    for j = 1:8
        for k = 1:17
            P{i,1}{j,k} = zeros(1,3);
            P{i,1}{j,k}(1,1) = randi(Hmax{j}(k));
            P{i,1}{j,k}(1,2) = randi([Hmin{j}(k), Hmax{j}(k)]);
            P{i,1}{j,k}(1,3) = randi(Wmax{j}(k));
        end
    end
end
Hmin, Hmax, and Wmax are 8x17 matrices of data which have been imported as cell arrays. They contain simple numbers that provide the ranges for the randomly generated values in the 1x3 vectors.
And for those wondering what MATLAB cell arrays are, see Cell Arrays.
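In Python, a single NumPy array of shape (200, 8, 17, 3) can replace the whole nested cell structure, and the random draws can be done without any loops. A minimal sketch, assuming Hmin, Hmax, and Wmax are available as 8x17 integer NumPy arrays (the stand-in values below are hypothetical):
import numpy as np
rng = np.random.default_rng()
# hypothetical stand-ins for the imported 8x17 data
Hmin = np.ones((8, 17), dtype=int)
Hmax = np.full((8, 17), 10, dtype=int)
Wmax = np.full((8, 17), 5, dtype=int)
# one 4D array replaces the 200x1 cell of 8x17 cells of 1x3 vectors
P = np.empty((200, 8, 17, 3), dtype=int)
P[..., 0] = rng.integers(1, Hmax + 1, size=(200, 8, 17))      # like randi(Hmax)
P[..., 1] = rng.integers(Hmin, Hmax + 1, size=(200, 8, 17))   # like randi([Hmin, Hmax])
P[..., 2] = rng.integers(1, Wmax + 1, size=(200, 8, 17))      # like randi(Wmax)
Member i of the population is then P[i], an (8, 17, 3) array.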
My problem is this: I have a GMM model with K multivariate Gaussians, and I also have N samples.
I want to create an N*K NumPy matrix whose [i,k] cell holds the pdf of the k'th Gaussian evaluated on the i'th sample, i.e. Q[i,k] = N(x_i | mu_k, Sigma_k). In short, I'm interested in this pdf matrix.
This is what I have now (I'm working with Python):
Q = np.array([scipy.stats.multivariate_normal(mu_t[k], cov_t[k]).pdf(X) for k in range(self.K)]).T
X in the code is a matrix whose rows are my samples.
It works fine on a small, low-dimensional toy dataset, but the dataset I'm working with is 10,000 pictures of 28*28 pixels, and on it this line runs extremely slowly...
I want to find a solution that doesn't involve loops, only vector/matrix operations (i.e. vectorization). As far as I understand, scipy's multivariate_normal cannot take the parameters of more than one Gaussian at a time (though its pdf function can evaluate multiple samples at once).
Does someone have an idea?
I am afraid that the main speed killer in your problem is the inversion and determinant calculation for the cov_t matrices. If you somehow manage to precalculate these, you can unroll the calculation: use broadcasting to get all combinations of x_i - mu_k, and then evaluate the full formula of the normal density with np.einsum.
Try
S = X[:, None, :] - mu_t[None, :, :]   # shape (N, K, D): all differences x_i - mu_k
cov_t_inv = ??                         # precalculated inverses of cov_t, shape (K, D, D)
cov_t_inv_det = ??                     # precalculated determinants of those inverses, shape (K,)
D = X.shape[1]
Q = (cov_t_inv_det/(2*np.pi)**D)**0.5 * np.exp(-0.5*np.einsum('ikr,krs,iks->ik', S, cov_t_inv, S))
Where you insert precalculated arrays cov_t_inv for the inverse covariance matrices and cov_t_inv_det for their determinants.
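For completeness, a minimal self-contained sketch of the whole computation, assuming mu_t has shape (K, D) and cov_t has shape (K, D, D); np.linalg.inv and np.linalg.det do the precalculation in one batched call each:
import numpy as np
def gmm_pdf_matrix(X, mu_t, cov_t):
    # X: (N, D) samples, mu_t: (K, D) means, cov_t: (K, D, D) covariances
    N, D = X.shape
    cov_t_inv = np.linalg.inv(cov_t)                        # (K, D, D), batched inverse
    cov_t_det = np.linalg.det(cov_t)                        # (K,), batched determinant
    S = X[:, None, :] - mu_t[None, :, :]                    # (N, K, D), all x_i - mu_k
    quad = np.einsum('ikr,krs,iks->ik', S, cov_t_inv, S)    # squared Mahalanobis distances
    norm = 1.0 / np.sqrt((2 * np.pi) ** D * cov_t_det)      # (K,) normalization constants
    return norm[None, :] * np.exp(-0.5 * quad)              # (N, K) pdf matrix
The result should agree with the scipy-based list comprehension up to floating point error, but the K pdf evaluations happen in one einsum instead of K separate calls.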
First of all, I apologize for being an absolute beginner in both python and numpy. Please forgive my ignorance.
I have a 4D cube of pressure measurements whose dimensions are (number of samples, time, y-axis, x-axis); that is, for each sample I have a 3D cube of the spatio-temporal profile. For each sample, I need to collect the pressure readings of this 3D cube (time, y-axis, x-axis) into an array, but only where the coordinates satisfy a specific condition. Since the size of this array changes when the condition changes, I have to use append() to build it. However, for, say, 1000 samples I have to search through more than a million coordinates with for-loops for each sample, so the code I have written is pretty inefficient and takes a long time to run (more than several hours). Can you please help me write it more efficiently?
Below is the code I've tried to solve the problem. It works nicely and gives the expected result, but it is extremely slow.
import numpy as np
# Number of sample points in x,y and t-axis
Nx = 101
Ny = 101
Nt = 100
n_train = 1000
target_array = []
for i_train in range(n_train):
    for k in range(Nt):
        for j in range(Ny):
            for i in range(Nx):
                if np.round(np.sqrt((i - np.round(Nx/2))**2 + (j - np.round(Ny/2))**2)) == 2*k:
                    target_array.append(Pressure[i_train, k, j, i])
Since the condition involves the indexes and not the values of your 4D array, you can vectorize it using numpy.meshgrid.
Here pp is your 4D array:
kv, jv, iv = np.meshgrid(np.arange(pp.shape[1]), np.arange(pp.shape[2]), np.arange(pp.shape[3]), indexing='ij')
selecting = np.round(np.sqrt((iv - np.round(pp.shape[3]/2))**2 + (jv - np.round(pp.shape[2]/2))**2)) == 2*kv
target = pp[:, selecting]
Provided that I've understood correctly how your 4D array is organized:
the arrays created by meshgrid hold the indexes used to select pp elements along the three dimensions t, y, x; indexing='ij' keeps them in the same order and shape as the last three axes of pp.
selecting is a boolean array created by replicating your equation, to check which coordinates satisfy the condition.
target is a selection of pp, taking all elements along axis 0 that satisfy the condition (i.e. where selecting is True) on the other three axes.
Note that target is a 2D array; to get a 1D array, use target.flatten().
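A small self-contained version of the same idea (with made-up sizes), using np.indices, which returns the t, y, x index grids in one call and is equivalent to meshgrid with indexing='ij':
import numpy as np
# hypothetical small example; pp stands in for the Pressure array
rng = np.random.default_rng(0)
n_train, Nt, Ny, Nx = 5, 10, 21, 21
pp = rng.random((n_train, Nt, Ny, Nx))
kv, jv, iv = np.indices((Nt, Ny, Nx))    # index grids along the t, y, x axes
selecting = np.round(np.sqrt((iv - np.round(Nx/2))**2 + (jv - np.round(Ny/2))**2)) == 2*kv
target = pp[:, selecting]                # shape (n_train, number of selected coordinates)
target.flatten() then reproduces the order of the original quadruple loop, because boolean indexing walks the t, y, x axes in the same C order.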
I have a function that gives a matrix as a result; since I'm using a for loop and append, the results are 20 matrices in a list. I would like to add up the lower and the upper triangular values of every matrix. np.sum(np.tril(matrix, -1)) adds up the values of all the matrices together. Is it possible to do it per matrix? Or can I get 20 separate matrices to do this?
matrix = []
for i in clubs:
    matrix.append(simulate_match(poisson_model, 'ARSENAL', i, max_goals=10))
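A minimal sketch of two ways to get one sum per matrix, assuming each simulate_match result is a 2D NumPy array (the stand-in data below is hypothetical):
import numpy as np
# hypothetical stand-in for the 20 simulate_match results
matrix = [np.random.rand(11, 11) for _ in range(20)]
# one value per matrix, via a list comprehension
lower_sums = [np.sum(np.tril(m, -1)) for m in matrix]
upper_sums = [np.sum(np.triu(m, 1)) for m in matrix]
# or stack the matrices into one 3D array and reduce over the last two axes;
# np.tril/np.triu apply to the final two axes of higher-dimensional arrays
stacked = np.stack(matrix)                          # shape (20, rows, cols)
lower_sums = np.tril(stacked, -1).sum(axis=(1, 2))  # shape (20,)
upper_sums = np.triu(stacked, 1).sum(axis=(1, 2))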
My goal here is to build the sparse CSR matrix very fast. It is currently the main bottleneck in my process, and I've already optimized it by constructing the coo matrix relatively fast, and then using tocsr().
However, I would imagine that constructing the csr matrix directly must be faster?
I have a very specific format of a sparse matrix that is also large (i.e. on the order of 100000x50000). I've looked online at these other answers, but most do not address the question I have.
Efficiently construct FEM/FVM matrix
It looks at constructing a very specifically formatted sparse matrix vs. using COO, which led to a scipy merge that improved the speed of tocsr().
Sparse Matrix Structure:
The sparse matrix, H, is composed of W lists of size N, i.e. it is built from an initial array of size NxW; let's call it A. Along the diagonal, each list of size N is repeated N times. So for the first N rows of H, A[:,0] is repeated, but sliding along by N steps for each row.
Comparison to COO.tocsr()
When I scale up N and W, building the COO matrix first and then running tocsr() is actually faster than building the CSR matrix directly. I'm not sure why this is the case. I am wondering if I can take advantage of the structure of my sparse matrix H in some way, since there are many repeating elements in it.
Code Sample
Here is a code sample to visualize what is going on for a small sample size:
from scipy.sparse import linalg, dok_matrix, coo_matrix, csr_matrix
import numpy as np
import matplotlib.pyplot as plt
def test_csr(testdata):
    indices = [x for _ in range(W-1) for x in range(N**2)]
    ptrs = [N*i for i in range(N*(W-1))]
    ptrs.append(len(indices))
    data = []
    # loop along the first axis
    for i in range(W-1):
        vec = testdata[:, i].squeeze()
        # repeat the vector N times
        for _ in range(N):
            data.extend(vec)
    Hshape = (N*(W-1), N**2)
    H = csr_matrix((data, indices, ptrs), shape=Hshape)
    return H
N = 4
W = 8
testdata = np.random.randn(N,W)
print(testdata.shape)
H = test_csr(testdata)
plt.imshow(H.toarray(), cmap='jet')
plt.show()
It looks like your output has only the first W-1 rows of the test data. I'm not sure whether this is intentional. My solution assumes you want to use all of testdata.
When you construct the COO matrix, are you also constructing the indices and data in a similar way?
One thing which might speed up constructing the csr_matrix is to use built-in NumPy functions to generate the data for the csr_matrix rather than Python loops and lists. I would expect this to improve the speed of generating the indices significantly. You can adjust the dtype to a different integer type depending on the size of your matrix.
N = 4
W = 8
testdata = np.random.randn(N,W)
ptrs = N*np.arange(W*N+1,dtype='int')
indices = np.tile(np.arange(N*N,dtype='int'),W)
data = np.tile(testdata.T, N).flatten()  # transpose first so each block of N rows repeats A[:,i], matching the structure described above
Hshape = ((N*W, N**2))
H = csr_matrix((data, indices, ptrs), shape=Hshape)
Another possibility is to first construct the large sparse matrix in LIL format and then assign each of the N vertical column blocks at once. This means that you don't need to make a ton of copies of the original data before you put it into the sparse matrix. However, it may be slow to convert the matrix type afterwards.
from scipy.sparse import lil_matrix
N = 4
W = 8
testdata = np.random.randn(N, W)
Hshape = (N*W, N**2)
H = lil_matrix(Hshape)
for j in range(N):
    H[N*np.arange(W) + j, N*j:N*(j+1)] = testdata.T
H = H.tocsr()
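As a quick sanity check (a sketch using the small sizes from the question), both constructions above should produce the same matrix:
import numpy as np
from scipy.sparse import csr_matrix, lil_matrix
N, W = 4, 8
testdata = np.random.randn(N, W)
# direct CSR construction
ptrs = N*np.arange(W*N + 1, dtype='int')
indices = np.tile(np.arange(N*N, dtype='int'), W)
data = np.tile(testdata.T, N).flatten()
H_direct = csr_matrix((data, indices, ptrs), shape=(N*W, N**2))
# LIL construction, one vertical column block at a time
H_lil = lil_matrix((N*W, N**2))
for j in range(N):
    H_lil[N*np.arange(W) + j, N*j:N*(j+1)] = testdata.T
print(np.allclose(H_direct.toarray(), H_lil.toarray()))   # expected: True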
I have a set of NxM numpy arrays (two dimensions: range and azimuth).
I need to form a three-dimensional stack and extract a one-dimensional vector from it to compute a covariance matrix (the red vectors in the picture).
How do I do this efficiently and easily in Python?
You can make a 3D numpy array pretty easily and then just use the indexing to pull out the bits that you're interested in:
stackOfImages = np.array((image1, image2)) #iterate over these if many more
redData = stackOfImages[:, N-1, M-1]
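If the next step is the covariance matrix, one possible follow-up is sketched below; since the picture with the red vectors isn't shown here, the choice of treating each image in the stack as a variable and every (range, azimuth) cell as an observation is an assumption:
import numpy as np
# hypothetical stand-in data: a stack of 5 images of size 64x32 (range x azimuth)
rng = np.random.default_rng(0)
stackOfImages = rng.random((5, 64, 32))
# the vector across the stack at one (range, azimuth) cell
redData = stackOfImages[:, 10, 20]                              # shape (5,)
# covariance between the images: flatten each image to a row,
# so np.cov treats images as variables and cells as observations
flattened = stackOfImages.reshape(stackOfImages.shape[0], -1)   # (5, 64*32)
cov = np.cov(flattened)                                         # (5, 5) covariance matrix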