I am a bit new to python and want to get into numpy.
I am trying to compute a Gaussian kernel sum with two for-loops:
for n in range(0, 6):
    for k in range(len(centers_Hex)):
        expo_sum[n+1] += np.exp(-np.linalg.norm(z_approx-center_Matrix[n][k])**2/(2*sigma**2))
where center_Matrix is a matrix of (x, y) coordinates for the centers of the Gaussian bells, z_approx is the data point I want to evaluate, and sigma is a variable.
So how can I simplify these two for-loops? My main problem is how to handle linalg.norm in the simplification.
Thank you!
If you can turn center_Matrix into a 3D array, with the 2-element tuples being the inner dimension (so the shape would be (n, k, 2)), you might be able to do the following:
diff = np.linalg.norm([center_Matrix[...,0] - z_approx[0], center_Matrix[...,1] - z_approx[1]], axis=0)
expo_sum = np.exp(-diff**2 / (2*sigma**2))
expo_sum = expo_sum.sum(axis=1)
This does shift the resulting expo_sum by one index, since you use expo_sum[n+1] = ..., but that is something you can adjust elsewhere in your code.
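For instance, a minimal runnable sketch of that idea, with made-up shapes (n = 6 rings of k = 7 centers, a 2-element z_approx, and an arbitrary sigma):
import numpy as np

center_Matrix = np.random.rand(6, 7, 2)   # shape (n, k, 2)
z_approx = np.random.rand(2)
sigma = 0.3

# norm over the (x, y) components for every (n, k) pair at once
diff = np.linalg.norm([center_Matrix[..., 0] - z_approx[0],
                       center_Matrix[..., 1] - z_approx[1]], axis=0)
expo_sum = np.exp(-diff**2 / (2*sigma**2)).sum(axis=1)   # one value per n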
Related
I have been working on a task where I implemented median cut for image quantization – representing the whole image by only a limited set of colors. I implemented the algorithm and now I am trying to implement the part where I assign each pixel to a representative from the set found by median cut. So, I have the variable 'color_space', which is a 2D ndarray of shape (n, 3), where n is the number of representatives. Then I have the variable 'img', which is the original image of shape (rows, columns, 3).
Now I want to find the nearest representative (bin) for each pixel of the image based on Euclidean distance. I was able to come up with this solution:
for row in range(img.shape[0]):
    for column in range(img.shape[1]):
        img[row][column] = color_space[np.linalg.norm(color_space - img[row][column], axis=1).argmin()]
What it does is, for each pixel of the image, compute the vector of distances to each of the bins and then take the closest one.
The problem is that this solution is quite slow and I would like to vectorize it - instead of getting a vector for each pixel, I would like to get a matrix where, for example, the first row would be the first vector of distances computed in my code, etc.
This could be seen as a kind of 'matrix multiplication' where, instead of the dot product of two vectors, I compute their Euclidean distance. Is there a good general approach in numpy for this kind of problem, where the function Rn x Rn -> R does not have to be the dot product but could be, for example, the Euclidean distance? Of course, for such a 'multiplication' the original image would have to be reshaped to (rows*columns, 3), but that is a detail.
I have been studying the documentation and searching the internet, but didn't find a good approach.
Please note that I don't want others to solve my assignment, the solution I came up with is totally ok, I am just curious whether I could speed it up as I try to learn numpy properly.
Thanks for any advice!
Below is an MWE for vectorizing your problem. See the comments for an explanation.
import numpy
# these are just random array declaration to work with.
image = numpy.random.rand(32, 32, 3)
color_space = numpy.random.rand(10,3)
# your code, modified to store indexes instead of colors
result = numpy.zeros((32, 32))
for row in range(image.shape[0]):
    for column in range(image.shape[1]):
        result[row][column] = numpy.linalg.norm(color_space - image[row][column], axis=1).argmin()
result = result.astype(int)  # numpy.int is deprecated; plain int is fine here
# here we reshape for broadcasting correctly.
image = image.reshape(1,32,32,3)
color_space = color_space.reshape(10, 1,1,3)
# compute the norm on last axis, which is RGB values
result_norm = numpy.linalg.norm(image-color_space, axis=3)
# now compute the vectorized argmin
result_vectorized = result_norm.argmin(axis=0)
print(numpy.allclose(result, result_vectorized))
Eventually, you can get the final quantized image by doing color_space[result_vectorized]. You may have to remove the extra dimensions that were added to color_space to get correct shapes in this final operation.
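Putting it together, a minimal end-to-end sketch of that final indexing step (same made-up shapes as the MWE above):
import numpy

image = numpy.random.rand(32, 32, 3)
color_space = numpy.random.rand(10, 3)

# distances of every pixel to every representative, via broadcasting
result_norm = numpy.linalg.norm(image[None, ...] - color_space[:, None, None, :], axis=3)
result_vectorized = result_norm.argmin(axis=0)          # shape (32, 32)
quantized = color_space[result_vectorized]              # shape (32, 32, 3)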
I think this approach might be a bit more numpy-ish/pythonic:
import numpy as np
from typing import *
from numpy import linalg as LA
# assume color_space is defined as a constant somewhere above and is of shape (n,3)
nearest_pixel_idxs: Callable[[np.ndarray], int] = lambda rgb: LA.norm(color_space - rgb, axis=1).argmin()
img: np.ndarray = color_space[np.apply_along_axis(nearest_pixel_idxs, 1, img.reshape((-1, 3)))].reshape(img.shape)
Why this solution might be more efficient:
It relies on np.apply_along_axis with nearest_pixel_idxs() rather than the nested for-loops. This is made possible by reshaping img, which removes the need for double indexing.
It avoids writing into img one pixel at a time, indexing into color_space only once at the very end.
Let me know if you would like me to go into greater depth on any of this - happy to help.
You could first broadcast to get all the combinations and then calculate each norm. You could then pick the smallest from there.
a = np.array([[1,2,3],
              [2,3,4],
              [3,4,5]])
b = np.array([[1,2,3],
              [3,4,5]])
a = np.repeat(a.reshape(a.shape[0],1,3), b.shape[0], axis = 1)
b = np.repeat(b.reshape(1,b.shape[0],3), a.shape[0], axis = 0)
np.linalg.norm(a - b, axis = 2)
Each row of the result holds the distances from the corresponding row of a to each of the representatives in b:
array([[0.        , 3.46410162],
       [1.73205081, 1.73205081],
       [3.46410162, 0.        ]])
You can then use argmin to get the final results.
IMO it is better to use numpy's automatic broadcasting (as @Umang Gupta proposed) than np.repeat, since broadcasting avoids materializing the repeated arrays.
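For example, a sketch of the same computation with broadcasting (redefining a and b here, since they were overwritten by the repeats above):
import numpy as np

a = np.array([[1, 2, 3],
              [2, 3, 4],
              [3, 4, 5]])
b = np.array([[1, 2, 3],
              [3, 4, 5]])

# inserting new axes lets numpy broadcast instead of materializing repeats
dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)   # shape (3, 2)
nearest = dist.argmin(axis=1)   # index of the closest row of b for each row of a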
I am generating a series of Gaussian arrays given an x vector of length 1400, and arrays for the sigma, center, and amplitude (amp), all of length 100. I thought the best way to speed this up would be to use numpy and a list comprehension:
g = np.sum([(amp[i]*np.exp(-0.5*(x - (center[i]))**2/(sigma[i])**2)) for i in range(len(center))],axis=0)
Each row is a gaussian along a vector x, and then I sum the columns into a single array of length x.
But this doesn't seem to speed things up at all. I think there is a faster way to do this while avoiding the for loop but I can't quite figure out how.
You should use vectorized computation instead of a comprehension so the loops are all performed at C speed.
In order to do so you have to reshape x to be a column vector. For example you could do x = x.reshape((1400,1)).
Then you can operate directly on the arrays, like this:
v = amp*np.exp(-0.5*(x - center)**2/sigma**2)
Then you obtain an array of shape (1400, 100), which you can sum down to a vector with np.sum(v, axis=1).
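A minimal sketch of the whole computation, with made-up data of the shapes from the question:
import numpy as np

x = np.random.rand(1400)
amp, center, sigma = (np.random.rand(100) for _ in range(3))

x = x.reshape((1400, 1))                        # column vector
v = amp*np.exp(-0.5*(x - center)**2/sigma**2)   # shape (1400, 100)
g = np.sum(v, axis=1)                           # shape (1400,)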
You should try to vectorize all the operations. IMHO the most efficient approach is to first convert your input data to numpy arrays (if they were plain Python lists) and then let numpy do the computations:
np_amp = np.array(amp)[:, None]      # column vectors of shape (100, 1) so they broadcast against x
np_center = np.array(center)[:, None]
np_sigma = np.array(sigma)[:, None]
g = np.sum(np_amp*np.exp(-0.5*(x - np_center)**2/np_sigma**2), axis=0)
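A quick sanity check of the vectorized version against the original comprehension, with made-up data:
import numpy as np

x = np.random.rand(1400)
amp, center, sigma = (np.random.rand(100) for _ in range(3))

g_loop = np.sum([amp[i]*np.exp(-0.5*(x - center[i])**2/sigma[i]**2)
                 for i in range(len(center))], axis=0)

np_amp, np_center, np_sigma = (np.array(a)[:, None] for a in (amp, center, sigma))
g_vec = np.sum(np_amp*np.exp(-0.5*(x - np_center)**2/np_sigma**2), axis=0)

print(np.allclose(g_loop, g_vec))   # True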
I am working with Trimesh and trying to compute some statistics on the meshes. One of the possible statistics (and the one I am using to illustrate the question) is a histogram of the areas of triangles formed by 3 random vertices of the mesh. Currently I am doing the following, but I would like to know if there's any way to avoid using a loop.
def CalcArea(self, p):
    return 0.5 * np.linalg.norm(np.cross(p[1]-p[0], p[2]-p[0]))
v_c = self.mesh.vertices.copy()
np.random.shuffle(v_c)
areas = [self.CalcArea(v_c[i:i+3]) for i in range(len(v_c[:-2]))]
The numpy documentation is your friend :-).
np.cross and np.linalg.norm work on arrays of vectors as well. And they support the powerful keyworded argument axis.
I'm assuming your v_c has the shape (N, 3), where N is your number of vertices. Let's assume it's a multiple of three for simplicity, then:
N = 30
v_c = np.random.random((N, 3))
v1 = v_c[N//3:2*N//3, :] - v_c[:N//3, :]
v2 = v_c[2*N//3:, :] - v_c[:N//3, :]
area = 0.5*np.linalg.norm(np.cross(v1, v2), axis=1)
Note that this involves the creation of temporary arrays so maybe keep an eye out for very large N.
I am trying to make a 2D 5850x5850 array from two 1D arrays by putting them into this equation for a 2D Gaussian.
psf = 1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(x**2/(2*sigma_x**2) + y**2/(2*sigma_y**2)))
However it gives back a 1D array; what am I doing wrong?
If I understand your question correctly:
All you need to do is to alter the shape of your arrays.
E.g.
x.shape=(5850,1) # now it is column array
y.shape=(1,5850) # now it is row array
Then you can proceed as in your original post. The result will be a 5850 by 5850 array. Each row will correspond to a different x and each column to a different y.
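For example, with a tiny size instead of 5850 and made-up sigmas:
import numpy as np

size = 4
x = np.linspace(-1, 1, size)
y = np.linspace(-1, 1, size)
sigma_x = sigma_y = 0.5

x.shape = (size, 1)   # now it is a column array
y.shape = (1, size)   # now it is a row array
psf = 1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(x**2/(2*sigma_x**2) + y**2/(2*sigma_y**2)))
print(psf.shape)      # (4, 4)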
However, I would change a few things in your code to make it look like this:
psf = 1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(x*x/(2*sigma_x*sigma_x) + y*y/(2*sigma_y*sigma_y)))
Squaring values is usually inefficient (unless your compiler translates it to multiplication, but in Python there is no compiler to rely on). Squaring is noticeably slower than multiplication: when you raise a value to a power, the interpreter has to handle the general case (the exponent might be negative or not an integer), and there is no such overhead when you simply multiply values.
Try:
for i in xrange(0, 1000000):
    z = i**2

for i in xrange(0, 1000000):
    z = i*i
The former ran in 0.975 s on my machine whereas the latter took only 0.267 s.
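If you want to re-check that on your own machine (timings vary, and on Python 3 it is range rather than xrange), a rough timeit sketch:
import timeit

print(timeit.timeit("z = i**2", setup="i = 12345", number=1_000_000))
print(timeit.timeit("z = i*i",  setup="i = 12345", number=1_000_000))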
NumPy doesn't know that x and y are meant as "for every x, do this for each y". If you can't find a library to create 2D functions/Gaussians more conveniently, try:
z = np.empty((len(x), len(y)))
for idx, yval in enumerate(y):
    z[:, idx] = f(x, yval)
where f(x, yval) is your 2D function, but with yval in place of y. There's got to be more support for 2D function creation somewhere; maybe search for SciPy 2D Gaussian functions?
The proper expression to make a 2d Gaussian would be
x = np.arange(0, size, 1, float)
y = x[:,np.newaxis]
x0 = y0 = 0 # your center
np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / radius**2)
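A small runnable sketch of that expression, with example values for size and radius (not from the original post):
import numpy as np

size, radius = 7, 3.0
x = np.arange(0, size, 1, float)
y = x[:, np.newaxis]
x0 = y0 = size // 2   # center of the grid
g = np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / radius**2)
print(g.shape)        # (7, 7), with the peak value 1.0 at (y0, x0)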
I have a code in which I need to handle some big numpy arrays. For example I have a 3D array A and I need to construct another 3d array B using the elements of A. However all the elements of B are independent of each other. Example:
for i in np.arange(Nx):
    for j in np.arange(Ny):
        for k in np.arange(Nz):
            B[i][j][k] = A[i+1][j][k]*np.sqrt(A[i][j-1][k-1])
So it would speed things up immensely if I could construct the B array in parallel. What is the simplest way to do this in Python?
I also have similar matrix operations like normalizing each row of a 2D array. Example
for i in np.arange(Nx):
    f[i,:] = f[i,:]/np.linalg.norm(f[i,:])
This would also speed up if it ran in parallel for each row. How can it be done?
You should look into Numpy's roll function. I think this is equivalent to your first block of code (though you need to decide what happens at the edges - roll "wraps around"):
B = np.roll(A, -1, axis=0) * np.sqrt(np.roll(np.roll(A, 1, axis=1), 1, axis=2))
Another fairly horrible one-liner for your second case is:
f /= np.sqrt(np.sum(f**2, axis=1))[...,np.newaxis]
Explanation of this line:
We are first going to calculate the norm of each row. Let's
f = np.random.rand(5,6)
Square each element of f
f**2
Sum the squares along axis 1, which "flattens" out that axis.
np.sum(f**2, axis=1)
Take the square root of the sum of the squares.
np.sqrt(np.sum(f**2, axis=1))
We now have the norm of each row.
To divide each original row of f by this correctly we need to make use of the Numpy broadcasting rules to effectively add a dimension:
np.sqrt(np.sum(f**2, axis=1))[...,np.newaxis]
And finally we calculate our result
f /= np.sqrt(np.sum(f**2, axis=1))[...,np.newaxis]
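As a side note, the same normalization can be written with np.linalg.norm and keepdims, which avoids the explicit newaxis (a sketch, equivalent to the line above):
import numpy as np

f = np.random.rand(5, 6)
f /= np.linalg.norm(f, axis=1, keepdims=True)   # each row now has unit norm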
If you are taking good care of the edges, the standard way of going about your first vectorization would be something like this:
B = np.zeros(A.shape)
B[:-1, 1:, 1:] = A[1:, 1:, 1:] * np.sqrt(A[:-1, :-1, :-1])
You would then need to fill B[-1, :, :], B[:, 0, :] and B[:, :, 0] with appropriate values.
Extending this to other indices should be pretty straightforward.
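A quick sanity check of that slicing against the original triple loop, on the interior points only (toy sizes, non-negative A so the sqrt is well defined):
import numpy as np

Nx, Ny, Nz = 4, 5, 6
A = np.random.rand(Nx, Ny, Nz)

# loop version, interior points only (edges left at zero)
B_loop = np.zeros(A.shape)
for i in range(Nx - 1):
    for j in range(1, Ny):
        for k in range(1, Nz):
            B_loop[i, j, k] = A[i + 1, j, k] * np.sqrt(A[i, j - 1, k - 1])

# vectorized version from above
B_vec = np.zeros(A.shape)
B_vec[:-1, 1:, 1:] = A[1:, 1:, 1:] * np.sqrt(A[:-1, :-1, :-1])

print(np.allclose(B_loop, B_vec))   # True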
To perform parallel processing in numpy, you should look at mpi4py. It's an MPI binding for Python. It allows distributed processing.
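For a taste of what that looks like, a minimal mpi4py sketch for the row-normalization case (shapes are made up, and it assumes the number of ranks divides the row count); run it with something like mpiexec -n 4 python script.py:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rows, cols = 1000, 16                      # made-up sizes
f = np.random.rand(rows, cols) if rank == 0 else None

# scatter contiguous blocks of rows, normalize locally, gather back
chunk = np.empty((rows // size, cols))
comm.Scatter(f, chunk, root=0)
chunk /= np.linalg.norm(chunk, axis=1, keepdims=True)
comm.Gather(chunk, f, root=0)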