Python: Calculate the Gradient of a 3D grid

I have a cube of particles which I've projected onto a 2D grid, assigning the particles to the grid with a cloud-in-cell scheme and weighting them by a scalar.
I would then like the gradient of the scalar at every grid point. In 2D I do this with np.gradient, which gives me two arrays containing the gradient in the x and y directions:
gradx, grady = np.gradient(grid)
Does anyone have any idea how I can generalize this to 3 dimensions? The cloud-in-cell assignment in 3D is fine, but I am then left with a grid of shape (700, 700, 700).
As far as I can see, np.gradient can't deal with this?
Thanks,
Daniel

The Numpy documentation indicates that gradient works for any number of dimensions:
numpy.gradient(f, *varargs)
Return the gradient of an N-dimensional array.
The gradient is computed using central differences in the interior and
first differences at the boundaries. The returned gradient hence has
the same shape as the input array.
Parameters :
f: array_like. An N-dimensional array containing samples
of a scalar function.
*varargs: 0, 1, or N scalars specifying the sample distances in each direction, that is: dx, dy, dz, ... The default distance is 1.
Returns :
g: ndarray. N arrays of
the same shape as f giving the derivative of f with respect to each
dimension.
So you should be able to extend your 2-dimensional code to 3D exactly as you would expect.
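For example (a minimal sketch, assuming grid holds your (700, 700, 700) cloud-in-cell array):
import numpy as np
# one gradient array per axis, each with the same shape as grid
gradx, grady, gradz = np.gradient(grid)
# or with explicit sample spacings along each axis
# (dx, dy, dz being your grid spacings)
gradx, grady, gradz = np.gradient(grid, dx, dy, dz)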

Related

Transforming 2D image point to 3D world point where Z !=0

The code below transforms a detected 2D image point to its 3D location on a defined planar grid in the 3D world.
This means Z=0; taking into account that the extrinsics and intrinsics are known, we can compute the corresponding 3D point for the detected 2D image point:
import cv2
import numpy as np
# load intrinsics (camera matrix and distortion coefficients)
with np.load('parameters_cam1.npz') as X:
    mtx, dist = [X[i] for i in ('mtx', 'dist')]
# load extrinsics (rotation and translation vectors)
with np.load('extrincic.npz') as X:
    rvecs1, tvecs1 = [X[i] for i in ('rvecs1', 'tvecs1')]
# convert the rotation vector into a rotation matrix
R_mtx, jac = cv2.Rodrigues(rvecs1)
# build the 3x4 projection matrix P = K[R|t]
Extrincic = cv2.hconcat([R_mtx, tvecs1])
Projection_mtx = mtx.dot(Extrincic)
# delete the third column since Z=0; the remaining 3x3 matrix is invertible
Projection_mtx = np.delete(Projection_mtx, 2, 1)
# find the inverse of the matrix
Inv_Projection = np.linalg.inv(Projection_mtx)
# detected image point (extracted from a queue)
img_point = np.array([pts1_blue[0]])
# append a 1 so the point is in homogeneous coordinates
img_point = np.vstack((img_point, np.array(1)))
# calculate the 3D point, which lies on the defined plane
point_3D = Inv_Projection.dot(img_point)
# show results
print('3D_pt_method1\n', point_3D)
#output
3D_pt_method1
[[0.01881387]
[0.0259416 ]
[0.04150276]]
By normalizing the point (dividing by the third value), the result is:
X_World = 0.45331611680765327  # 45.3 cm from the defined world origin, which is correct
Y_World = 0.6250572251098481   # 62.5 cm, which is also correct
Evaluating the results shows that they are correct.
I know that we can't retrieve the Z coordinate of the 3D world point, since depth information is lost going from 3D to 2D. The following equation also performs the inverse projection of a 2D point into the 3D world and can be found throughout the literature; the result is an equation representing a line on which the 3D world point must lie.
I put equation 3.15 into code, but without setting Z=0, that is, without deleting the third column of the projection matrix as I did in the previous method (just as the equation is written), by doing the following:
# invert the rotation matrix
INV_R = np.linalg.inv(R_mtx)
# invert the camera matrix
INV_k = np.linalg.inv(mtx)
# multiply the two matrices
kinv_Rinv = INV_k.dot(INV_R)
# calculate the 3D point X expressed in eq. 3.15
point_3D = kinv_Rinv.dot(img_point) + tvecs1
# print the results
print('3D_pt_method2\n', point_3D)
and the result was
3D_pt_method2 #how should one understand these coordinates ?
[[-9.12505825]
[-5.57152147]
[40.12264881]]
My question is: how should I understand or interpret this result? It doesn't make any sense compared to the previous method where Z=0. The resulting 3x1 vector seems to suggest that its values are simply the 3D X, Y and Z of the detected image point. However, this is not true if we compare X and Y with the previous method!
So what, concretely, is the difference between 3D_pt_method1 and 3D_pt_method2?
I hope I have expressed myself clearly, and I would really appreciate help understanding the difference between the two implementations!
Note: the grid that represents my defined world plane can be seen in the image below, in which the distance between every two yellow points is 40 cm.
Thanks in advance
You are missing the key variable "w" (the homogeneous scale factor) in method 2.
You can get help from this article: https://blog.csdn.net/zhou4411781/article/details/103876478
The article is written in Chinese, but you can still follow the formulas in it even if you cannot read Chinese.
Simply speaking:
You said it yourself: "I know that we can't retrieve the Z coordinate of the 3D world point since depth information is lost going from 3D to 2D."
This also means: if you know the depth (the Z value in world coordinates), you can recover the 3D coordinates from the 2D coordinates and the depth. Likewise, if you know the X or Y value in world coordinates, you can also recover the result.
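Concretely, the projection reads w·[u, v, 1]ᵀ = K(R·X + t), so if you constrain the world point to Z = 0 you can solve for w from the third row before back-projecting. A minimal sketch, reusing mtx, R_mtx, tvecs1 and the homogeneous img_point from the question (untested against your data, so treat it as an illustration):
Rinv = np.linalg.inv(R_mtx)
ray = Rinv.dot(np.linalg.inv(mtx)).dot(img_point)  # direction of the back-projected ray
origin = Rinv.dot(tvecs1)                          # camera translation in world axes
# choose w so that the Z component of the world point becomes 0
w = origin[2] / ray[2]
world_point = w * ray - origin                     # lies on the Z = 0 plane
print(world_point)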

Best way to rotate a 3D grid (nxnxn) of values in Python with interpolation?

If I have an nxnxn grid of values, say 32x32x32, and I want to rotate this cube of values by some angle about the x, y, or z axis, interpolating the missing values, what would be the best way to go about doing this without using any existing algorithms from packages (such as Scipy)?
I'm familiar with applying a 3D rotation matrix to a 3D grid of points when it's represented as an [n, 3] matrix, but I'm not sure how to apply a rotation when the representation is given in its 3D form as nxnxn.
I found a prior Stack Overflow post about this topic, but it uses three for loops for its approach, which doesn't scale in terms of speed. Is there a more vectorized approach that can accomplish a similar task?
Thanks in advance!
One way I could think of would look like this:
reshape nxnxn matrix to an array containing n-dimensional points
apply rotation on this array
reshape array back to nxnxn
Here is some code:
import numpy as np
# just a way to create some nxnxn matrix
n = 4
a = np.arange(n)
b = np.array([a] * n)
mat = np.array([b] * n)
# create an array containing n-dimensional points
flat_mat = mat.reshape(-1, n)
# just a random matrix we will use as a rotation
rot = np.eye(n) + 2
# apply the rotation to every point at once, with no Python loop
result = flat_mat @ rot.T
# return to original shape
result = result.reshape((n, n, n))
print(result)
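If the goal is a spatial rotation of the grid values themselves, the usual approach is inverse mapping: for every output voxel, sample the input grid at the inversely rotated coordinate. Below is a minimal pure-NumPy sketch (my own illustration, not from the question) using nearest-neighbour sampling about the z axis; proper interpolation would instead blend the eight voxels surrounding each source coordinate:
import numpy as np

def rotate_grid_z(grid, theta):
    """Rotate an nxnxn grid of values by theta radians about the z axis
    through the cube centre, via vectorized inverse mapping."""
    n = grid.shape[0]
    c = (n - 1) / 2.0
    # coordinates of every output voxel, centred on the cube middle
    coords = np.indices(grid.shape).reshape(3, -1).astype(float) - c
    # inverse rotation (rotation by -theta) about the z axis
    inv_rot = np.array([[np.cos(theta),  np.sin(theta), 0.0],
                        [-np.sin(theta), np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
    src = inv_rot @ coords + c
    # nearest-neighbour sampling; trilinear interpolation would
    # weight the 8 surrounding voxels instead of rounding
    src = np.rint(src).astype(int)
    inside = ((src >= 0) & (src < n)).all(axis=0)
    out = np.zeros(n ** 3, dtype=grid.dtype)
    out[inside] = grid[src[0, inside], src[1, inside], src[2, inside]]
    return out.reshape(grid.shape)

rotated = rotate_grid_z(np.random.rand(32, 32, 32), np.radians(30))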

python - Finding the vertices of the cuboid surrounding a coordinate in a cuboidal 3-d grid with non-regular spacings

I will have a 3-d grid of points (defined by Cartesian vectors). For any given coordinate within the grid, I wish to find the 8 grid points forming the cuboid that surrounds the given coordinate. I also need the distances between the vertices of the cuboid and the given coordinate. I have found a way of doing this for a meshgrid with regular spacings, but not for irregular spacings. I do not yet have an example of the irregularly spaced grid data; I just know that the algorithm will have to deal with it eventually. My solution for the regularly spaced points is based on this post, Finding index of nearest point in numpy arrays of x and y coordinates, and is as follows:
import numpy as np
from scipy import spatial
# example 3-d grid of points
x, y, z = np.mgrid[0:5, 0:10, 0:20]
b = np.dstack((x.ravel(), y.ravel(), z.ravel()))[0]
tree = spatial.cKDTree(b)
example_coord = np.array([1.5, 3.5, 5.5])
# i: indices of the 8 closest grid points; d: their distances from
# the given coordinate, example_coord
d, i = tree.query(example_coord, 8)
# one vertex of the surrounding cuboid and its distance from example_coord
b[i[0]], d[0]
I am looking to make this algorithm run as efficiently as possible as it will need to be run a lot. Thanks in advance for your help.
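For the irregular case, here is one possible sketch. It assumes the grid is still rectilinear, i.e. defined by three sorted 1-D coordinate arrays xs, ys, zs (an assumption beyond what the question states): np.searchsorted finds the bracketing index on each axis in O(log n), from which the 8 vertices follow directly, with no tree needed. Points on or outside the grid boundary are not handled here.
import numpy as np
from itertools import product

def surrounding_cuboid(xs, ys, zs, coord):
    """Return the 8 vertices of the grid cell containing coord, and
    their distances to coord, for a rectilinear grid with irregular
    spacings along each axis."""
    # index of the grid line just below coord on each axis
    lo = [np.searchsorted(axis, c) - 1
          for axis, c in zip((xs, ys, zs), coord)]
    vertices = np.array([(xs[lo[0] + i], ys[lo[1] + j], zs[lo[2] + k])
                         for i, j, k in product((0, 1), repeat=3)])
    distances = np.linalg.norm(vertices - coord, axis=1)
    return vertices, distances

# irregular but sorted axis coordinates (illustrative values)
xs = np.array([0.0, 1.0, 2.5, 4.0])
ys = np.array([0.0, 0.5, 3.0])
zs = np.array([0.0, 2.0, 2.2, 5.0])
verts, dists = surrounding_cuboid(xs, ys, zs, np.array([1.5, 1.0, 2.1]))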

GDAL Affine Coefficients from Lat/Lon Meshgrids in Python

I am having some problems with affine transformation coefficients while creating a new GeoTIFF file. What I am doing is ETL on a scientific dataset that results in a 2D ndarray along with a pair of meshgrid ndarrays that contain lat and lon. Both the meshgrids and the dataset array have the same dimensions of 645 x 980. From what I understand, a GeoTIFF requires a list of affine coefficients when created from Python GDAL via the SetGeoTransform() method. The list has the form of [xllcorner, xrotation, x_cellsize, yllcorner, yrotation, y_cellsize]. My approach to this is similar to what is outlined here: http://adventuresindevelopment.blogspot.com/2008/12/python-gdal-adding-geotiff-meta-data.html
This is where I run into problems. I calculate the xllcorner and the yllcorner using the min() method on the lat and lon meshgrid arrays respectively, and I manually calculate the x and y cell sizes with the formula (max - min) / dimension size, the x dimension size being the x-axis size of the lons meshgrid and the y dimension size being the y-axis size of the lats meshgrid. When I apply this and try to write out the array band via GetRasterBand().WriteArray(), I get this error message:
Traceback (most recent call last):
...
raise ValueError("array larger than output file, or offset off edge")
ValueError: array larger than output file, or offset off edge
Therefore I assume that I have composed my affine coefficients incorrectly, but given the data this makes no sense to me. I even made sure that the spatial reference system was set to WGS84 before attempting the affine coefficient creation. So my question is: how do I properly create the affine coefficients from lat/lon meshgrids and a data array that share common dimensions? I suspect my cell size calculation can't simply be lat/lon differences, but I am not sure.
This error is typically raised when the shape of the array being written does not match the raster. For instance, check what shape is expected with:
band = src.GetRasterBand(1)
arr = band.ReadAsArray()
print(arr.shape) # (656L, 515L)
This will need to be the shape of the numpy array to be written:
assert other_array.shape == arr.shape
band.WriteArray(other_array)
And to raise the same ValueError, change the shape so it is longer in one dimension, e.g.:
band.WriteArray(other_array.T)
As for affine transformations, this is probably not raising any errors, as it is often just stored as data. GIS rasters typically register the world coordinate in the upper-left corner, and use a -dy value to count rows downwards. However, using a lower-left corner with +dy is usually fine by most software. It will just be upside down when comparing the array as a printed matrix versus mapped raster.
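As a hedged sketch of that corner registration (lats, lons and ds are assumptions standing in for the asker's meshgrids and an open, writable GDAL dataset; GDAL's geotransform order is upper-left x, pixel width, row rotation, upper-left y, column rotation, pixel height):
# lats, lons: 2D meshgrids of shape (645, 980); ds: a GDAL dataset
# opened for writing (all assumed from the question)
x_cell = (lons.max() - lons.min()) / (lons.shape[1] - 1)
y_cell = (lats.max() - lats.min()) / (lats.shape[0] - 1)
# register the upper-left corner and count rows downwards (negative dy)
ds.SetGeoTransform((lons.min(), x_cell, 0.0, lats.max(), 0.0, -y_cell))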

Rotation of a 2D array over an angle using rotation matrix

What I want to do is to rotate a 2D numpy array over a given angle. The approach I'm taking is using a rotation matrix. The rotation matrix I defined as:
import numpy
angle = 65.
theta = (angle / 180.) * numpy.pi
rotMatrix = numpy.array([[numpy.cos(theta), -numpy.sin(theta)],
                         [numpy.sin(theta),  numpy.cos(theta)]])
The matrix I want to rotate is shaped (1002,1004). However, just for testing purposes I created a 2D array with shape (7,6)
c = numpy.array([[0,0,6,0,6,0], [0,0,0,8,7,0], [0,0,0,0,5,0], [0,0,0,3,4,0], [0,0,2,0,1,0], [0,8,0,0,9,0], [0,0,0,0,15,0]])
Now, when I apply the rotation matrix on my 2D array I get the following error:
c = numpy.dot(rotMatrix, c)
print c
c = numpy.dot(rotMatrix, c)
ValueError: matrices are not aligned
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
What am I doing wrong?
You seem to be looking for scipy.ndimage.rotate (in older SciPy versions, scipy.ndimage.interpolation.rotate), or similar. If you specifically want 90, 180, or 270 degree rotations, which do not require interpolation, then numpy.rot90 is better.
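For example (a short sketch on a random stand-in for your array):
import numpy
from scipy import ndimage

c = numpy.random.rand(1002, 1004)
# rotate the image values by 65 degrees, interpolating and growing
# the output so nothing is clipped
rotated = ndimage.rotate(c, 65, reshape=True)
# an exact quarter-turn needs no interpolation
quarter = numpy.rot90(c)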
Matrix dimensions need to be compatible in order to form a matrix product. You are trying to multiply a 7x6 matrix by a 2x2 matrix, which is not mathematically coherent. It only really makes sense to apply a 2D rotation to 2D vectors to obtain the transformed coordinates.
The result of a matrix product is defined only when the left-hand matrix has a column count equal to the right-hand matrix's row count.
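In other words, apply the 2x2 matrix to the coordinates of points, not to the array of values (a small illustration, reusing the rotMatrix defined in the question):
# one 2D point per row; rotMatrix as defined in the question
points = numpy.array([[1., 0.],
                      [0., 1.],
                      [3., 5.]])
rotated_points = points.dot(rotMatrix.T)  # shape (3, 2), one rotated point per row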
You may want to look at skimage.transform. This module has several useful functions including rotation. No sense in rewriting something that is already done.
You cannot rotate an arbitrary n-dimensional vector using a 2D rotation matrix.
I did not find a built-in function for this in numpy. I was hoping that such common functionality would be there; let me know if you find it.
Meanwhile I have created a function of my own:
import numpy as np

def rotate(vector, theta, rotation_around=None) -> np.ndarray:
    """
    reference: https://en.wikipedia.org/wiki/Rotation_matrix#In_two_dimensions
    :param vector: list of length 2 OR
                   list of lists where each inner list has size 2 OR
                   1D numpy array of length 2 OR
                   2D numpy array of size (number of points, 2)
    :param theta: rotation angle in degrees (+ve value for anti-clockwise rotation)
    :param rotation_around: "vector" will be rotated around this point,
                            otherwise [0, 0] is used as the rotation axis
    :return: "vector" rotated by "theta" degrees around "rotation_around",
             as a numpy array
    """
    vector = np.array(vector)
    if vector.ndim == 1:
        vector = vector[np.newaxis, :]
    if rotation_around is not None:
        vector = vector - rotation_around
    vector = vector.T
    theta = np.radians(theta)
    rotation_matrix = np.array([
        [np.cos(theta), -np.sin(theta)],
        [np.sin(theta), np.cos(theta)]
    ])
    output: np.ndarray = (rotation_matrix @ vector).T
    if rotation_around is not None:
        output = output + rotation_around
    return output.squeeze()

if __name__ == '__main__':
    angle = 30
    print(rotate([1, 0], angle))            # passing one point
    print(rotate([[1, 0], [0, 1]], angle))  # passing multiple points
