Rotation of 3d array and point cloud not matching - python

I have the following problem: I have a 3d numpy array representing a stack of images. I also have a point cloud (n,3) of some relevant locations in the 3d array, which I have plotted in there. I want to rotate both by an angle around the x-axis. I do the following:
import scipy.ndimage
import scipy.spatial.transform

# for the point cloud
rotation_matrix = scipy.spatial.transform.Rotation.from_euler('x', x_rot, degrees=True)
rotated_stack_points = rotation_matrix.apply(stack_points)

# for the 3d stack
rotated_marked_xy = scipy.ndimage.rotate(gfp_stack, angle=-x_rot, axes=(1, 2), reshape=True)
Then I plot the rotated point cloud on the rotated 3d array. The problem is that there is always an xy offset between the points plotted before rotation and after rotation, and I can't figure out why. This offset changes with the rotation angle; it's not fixed. Any ideas why this happens?

If you rotate by the same angle but around different centers of rotation, one output will be translated relative to the other.
The Rotation obtained from from_euler performs a rotation centered at the origin.
import scipy.spatial.transform
rotation_matrix = scipy.spatial.transform.Rotation.from_euler('x', 30, degrees=True)
rotation_matrix.apply([0, 0, 0])  # the origin maps to itself: array([0., 0., 0.])
The ndimage.rotate method rotates around the center of the image, and reshape=True extends the borders so that the rotated corners still fit inside the new array.
import scipy.ndimage
import numpy as np
import matplotlib.pyplot as plt
x = np.ones((3, 100, 100))
xr = scipy.ndimage.rotate(x, 30, axes=(1,2))
plt.imshow(xr[0])
print(xr.shape)  # (3, 137, 137) -- larger than the input because reshape defaults to True
You can change the center of rotation by applying a translate(-c), rotate, translate(+c) sequence.
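A rough illustration of that sequence on the point-cloud side (a minimal sketch: the points and center below are placeholders, and the axis ordering and sign of the angle have to match your own data):
import numpy as np
from scipy.spatial.transform import Rotation

points = np.array([[10., 20., 5.],
                   [40., 15., 5.]])   # hypothetical (n, 3) point cloud
center = np.array([50., 50., 5.])     # hypothetical center of rotation

rot = Rotation.from_euler('x', 30, degrees=True)
# translate(-c) -> rotate -> translate(+c)
rotated_points = rot.apply(points - center) + center
Note that if reshape=True grows the array, the translate-back step should use the center of the rotated array rather than the center of the original one.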

Related

Creating an image mask using polygon points coordinates

I have a grayscale image with size (1920, 1080) that I'm trying to create a mask for. I used external software to manually get the points of interest (a polygon). There are now 27 coordinate points representing a polygon in the middle of the image.
I created a mask using the following:
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
from skimage.draw import polygon2mask
# image = grayscale array with shape (1920, 1080)
coordinates = ([1080.15, 400.122], [1011.45, 400.90], .......) #27 points
polygon = np.array(coordinates)
mask = polygon2mask(image.shape, polygon)
result = ma.masked_array(image, np.invert(mask))
plt.imshow(result)
The problem I'm facing is that the output is in the wrong place; it should be roughly centred, since I took the coordinates from the center, but it actually ends up at the edge of the image (bottom).
Also, the size seems to be a bit smaller than expected. I'm not sure what is causing this problem; I must have done something wrong in my code. Kindly help me identify the problem.
You swapped the x and y coordinates: polygon2mask expects the polygon vertices in (row, col), i.e. (y, x), order.
Add
coordinates = [[y,x] for [x,y] in coordinates]
after defining coordinates, and you'll probably get what you expected.
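A minimal end-to-end sketch with made-up data (the image and the three vertices below are placeholders), just to show the (row, col) order that polygon2mask expects:
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
from skimage.draw import polygon2mask

image = np.zeros((1080, 1920), dtype=np.uint8)                  # placeholder grayscale image
coordinates = [[400.0, 300.0], [800.0, 350.0], [600.0, 700.0]]  # (x, y) vertices
coordinates = [[y, x] for [x, y] in coordinates]                # swap to (row, col)

mask = polygon2mask(image.shape, np.array(coordinates))
result = ma.masked_array(image, np.invert(mask))  # mask everything outside the polygon
plt.imshow(result)
plt.show()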

fitting ellipse in images with poor contrast

I am working on image processing in Python, on the topic of underwater photogrammetry. My goal is to fit an ellipse to fiducial markers and retrieve its a) center, b) axes and c) orientation.
My markers are radial, white on a black background, and some carry a binary code:
An ML model delivers a small image snippet for each marker in each image, containing only the center of the marker.
So far, I've implemented these approaches:
Using openCV:
a) Thresholding, which results in a binary image (cv2.threshold)
b) Find Contours (cv2.findContours)
c) Fit ellipse (cv2.fitEllipse)
Using Scikit:
a) Detect edges (using Canny)
b) Apply the Hough transform
Star operator (work in progress)
a) Estimate ellipse center
b) Send 360 rays in all directions
c) Build an array comprising the coordinates of the largest gradient along each ray
d) Calculate the best-fit ellipse using a least-squares method
e) Use the new center to repeat the process (possibly several iterations required)
I perform these methods for each color channel separately. So far, the results between channels differ by several pixels for the ellipse center.
Do you have any suggestions on what pre-processing methods I should use prior to detecting/fitting the ellipse?
Any thoughts on which of the above methods will lead to the most accurate results?
This is amazing! Thank you. I just started to read about moments (e.g. https://www.pythonpool.com/opencv-moments/) and inertia.
However, there is a challenge applying your code to this example:
As you can see, the image was poorly cropped, and the inertia of the image is more in the image center than in the center of the expected ellipse.
My first attempt to fix this is to binarize the image first:
import cv2
T = int(cv2.mean(image)[0])                    # mean intensity as the threshold
ret, image = cv2.threshold(image, T, 255, 0)   # 0 = cv2.THRESH_BINARY
Is that a reasonable approach? I fear that the binarization will have an unwanted impact on the moments of inertia. Thank you for clarifying.
This code finds the center of mass of the image and its main axes of symmetry by calculating the moments of inertia.
I tried many libraries that calculate moments of inertia of images, but they give strange results (like a 4x4 matrix for what should be a 2x2 inertia matrix).
Also, ndimage.measurements.center_of_mass() appears to return (Cy, Cx), i.e. (row, column).
So I resorted to calculating the moments of inertia manually:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image as Pim
from io import BytesIO
import requests
photoURL = "https://i.stack.imgur.com/EcLYk.png"
response = requests.get(photoURL)
image = np.array(Pim.open(BytesIO(response.content)).convert('L')) # Convert to greyscale
plt.imshow(image)
# Calculate the eigenvectors = main axes of inertia.
# xCoord, yCoord are the column and row numbers in image
xCoord, yCoord = np.meshgrid(np.arange(image.shape[1]), np.arange(image.shape[0]))
# mass M is the total sum of the image
M = np.sum(image)
# Cx, Cy are the coordinates of the center of mass:
# Cx = sum(xCoord * image) / sum(image)
Cx = np.einsum('ij,ij', xCoord, image) / M
Cy = np.einsum('ij,ij', yCoord, image) / M
# Ixx is the second-order moment of the image about the horizontal axis through the center of mass:
# Ixx = sum(image * y^2)
Ixx = np.einsum('ij,ij,ij', yCoord - Cy, yCoord - Cy, image)
# Iyy is the second-order moment of the image about the vertical axis through the center of mass:
# Iyy = sum(image * x^2)
Iyy = np.einsum('ij,ij,ij', xCoord - Cx, xCoord - Cx, image)
# Ixy is the mixed second-order moment about both axes through the center of mass:
# Ixy = sum(image * x * y)
Ixy = np.einsum('ij,ij,ij', xCoord - Cx, yCoord - Cy, image)

inertiaMatrix = np.array([[Ixx, Ixy],
                          [Ixy, Iyy]])
eigValues, eigVectors = np.linalg.eig(inertiaMatrix)

# Plot the center of mass
plt.scatter(Cx, Cy, c='r')
# Plot the eigenvectors from the center of mass in the direction of each eigenvector
plt.quiver(Cx, Cy, eigVectors[0, 0], eigVectors[1, 0], color='r', scale=2)
plt.quiver(Cx, Cy, eigVectors[0, 1], eigVectors[1, 1], color='r', scale=2)
plt.show()

Python: Rotate mxn array to an arbitrary angle

I have a matrix filled with zeros and a rectangle filled with ones in a region of this matrix, like this,
and I want to rotate the rectangle by an arbitrary angle (30° in this case), like this:
import numpy as np
import matplotlib.pyplot as plt
n_x = 200
n_y = 200
data = np.zeros((n_x, n_y))
data[20:50, 20:40] = 1
plt.imshow(data)
plt.show()
How about using scipy?
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import rotate
n_x = 200
n_y = 200
data = np.zeros((n_x, n_y))
data[20:50, 20:40] = 1
angle = 30
data = rotate(data, angle)
plt.imshow(data)
plt.show()
Of course this rotates around the middle of the image. If you want to rotate around the center of the rectangle, I would suggest translating it to the middle of the image, rotating it, and then translating it back, as sketched below.
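A minimal sketch of that translate / rotate / translate-back idea with scipy.ndimage (the rectangle position is the one from the question; reshape=False keeps the array size fixed):
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import rotate, shift

data = np.zeros((200, 200))
data[20:50, 20:40] = 1

img_center = (np.array(data.shape) - 1) / 2.0
rect_center = np.array([(20 + 49) / 2.0, (20 + 39) / 2.0])  # (row, col) center of the rectangle

centred = shift(data, img_center - rect_center)    # move the rectangle center onto the image center
rotated = rotate(centred, 30, reshape=False)       # rotate about the image center
result = shift(rotated, rect_center - img_center)  # move it back

plt.imshow(result)
plt.show()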
From a mathematical point of view you could solve the problem by transforming
the cartesian coordinates of the ones into polar coordinates relative to the center of the rectangle.
r = sqrt(x²+y²)
phi = atan2(y,x)
(Note that x and y have to be relative to the center of rotation)
With polar coordinates it is no problem to rotate, since you just have to add the desired angle to phi and then transform back into cartesian coordinates.
x = r*cos(phi)
y = r*sin(phi)
(And again, the resulting coordinates would be with respect to the center of rotation, so you need to add them to the cartesian vector which points to the center)
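A small sketch of this polar-coordinate route on the same example (it rotates the coordinates of the ones and plots them as points, rather than resampling the image):
import numpy as np
import matplotlib.pyplot as plt

data = np.zeros((200, 200))
data[20:50, 20:40] = 1

ys, xs = np.nonzero(data)                  # coordinates of the ones
cx, cy = xs.mean(), ys.mean()              # center of the rectangle

r = np.sqrt((xs - cx)**2 + (ys - cy)**2)   # radius relative to the center
phi = np.arctan2(ys - cy, xs - cx)         # angle relative to the center

angle = np.deg2rad(30)
x_new = cx + r * np.cos(phi + angle)       # add the angle, transform back
y_new = cy + r * np.sin(phi + angle)

plt.imshow(data, cmap='gray')
plt.scatter(x_new, y_new, s=1, c='r')
plt.show()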

Rotated image coordinates after scipy.ndimage.interpolation.rotate?

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation matrix problem but if I do the usual mathematical or programming based rotation equations, the new (x',y') do not end up where they originally were. I suspect this has something to do with needing a translation matrix as well because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1]) - 1) / 2.
    rot_center = (np.array(im_rot.shape[:2][::-1]) - 1) / 2.
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new + rot_center

fig, axes = plt.subplots(2, 2)
axes[0, 0].imshow(data_orig)
axes[0, 0].scatter(x0, y0, c="r")
axes[0, 0].set_title("original")
for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))
plt.show()
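A quick sanity check under the same assumptions: the center of the original image should map onto the center of the rotated image for any angle.
c0 = (np.array(data_orig.shape[:2][::-1]) - 1) / 2.
_, c1 = rot(data_orig, c0, 45)
print(c1, (np.array(rotate(data_orig, 45).shape[:2][::-1]) - 1) / 2.)  # the two should agree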

How do I make an elevation model from a 3d polygon?

I have a number of polygons in 3d from a geojson file, and I would like to make an elevation model. This means that I want a raster where every pixel is the height of the polygon at that position.
I tried looking at gdal_rasterize, but its description says:
As of now, only points and lines are drawn in 3D.
I ended up using the scipy.interpolate function griddata. It uses a meshgrid to get the coordinates of the grid, and I had to tile it up because of the memory requirements of meshgrid.
import numpy as np
import scipy.interpolate as il  # for griddata
# meshgrid of the coordinates in this tile (xi, yi, c, r, tcols, trows come from the tiling loop)
gridX, gridY = np.meshgrid(xi[c*tcols:(c+1)*tcols], yi[r*trows:(r+1)*trows][::-1])
# Creating the DEM in this tile; fill_value prevents NaN at the polygon outline
zi = il.griddata((coordsT[0], coordsT[1]), coordsT[2], (gridX, gridY), method='linear', fill_value=nodata)
The linear interpolation seems to do exactly what I want. See description at https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
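For reference, a self-contained sketch of the same idea on made-up data (the point coordinates, grid spacing, and nodata value below are placeholders):
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts_x = rng.uniform(0, 100, 500)   # hypothetical vertex x coordinates
pts_y = rng.uniform(0, 100, 500)   # hypothetical vertex y coordinates
pts_z = rng.uniform(0, 10, 500)    # hypothetical vertex heights

xi = np.arange(0, 100, 1.0)
yi = np.arange(0, 100, 1.0)
gridX, gridY = np.meshgrid(xi, yi[::-1])  # raster grid, rows from top to bottom

dem = griddata((pts_x, pts_y), pts_z, (gridX, gridY), method='linear', fill_value=-9999)
print(dem.shape)  # (100, 100) elevation raster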
