Rotated image coordinates after scipy.ndimage.interpolation.rotate? - python

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation-matrix problem, but if I apply the usual mathematical or programmatic rotation equations, the new (x',y') does not land on the original feature. I suspect this has something to do with needing a translation matrix as well, because the standard rotation equations are based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate

As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1])-1)/2.
    rot_center = (np.array(im_rot.shape[:2][::-1])-1)/2.
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([ org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new + rot_center
fig,axes = plt.subplots(2,2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0,y0,c="r" )
axes[0,0].set_title("original")
for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))
plt.show()

Related

Creating an image mask using polygon points coordinates

I have a grayscale image of size (1920, 1080) that I'm trying to create a mask for. I used an external tool to manually get the points of interest (a polygon). There are now 27 coordinate points representing a polygon in the middle of the image.
I created a mask using the following:
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
from skimage.draw import polygon2mask
#image= grayscale with shape (1920,1080)
coordinates = ([1080.15, 400.122], [1011.45, 400.90], .......) #27 points
polygon = np.array(coordinates)
mask = polygon2mask(image.shape, polygon)
result = ma.masked_array(image, np.invert(mask))
plt.imshow(result)
The problem I'm facing is that the output is in the wrong place; it should be roughly centred, since I took the coordinates from the center, but it actually ends up at the edge (bottom) of the image.
Also, the size seems to be a bit smaller than expected. I'm not sure what is causing this problem; I must have done something wrong in my code. Kindly help me identify the problem.
You inverted the x and y coordinates: polygon2mask expects coordinates in (y, x), i.e. (row, column), order.
Add
coordinates = [[y,x] for [x,y] in coordinates]
after defining coordinates, and you'll probably get what you expected.
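For example, a minimal sketch with made-up vertices, showing the (row, col) convention:
import numpy as np
from skimage.draw import polygon2mask
# Hypothetical triangle given as (x, y) points, as an external tool might export them.
poly_xy = [[20.0, 10.0], [20.0, 80.0], [60.0, 45.0]]
# polygon2mask expects vertices in (row, col) = (y, x) order, so swap first.
poly_rc = np.array([[y, x] for [x, y] in poly_xy])
mask = polygon2mask((100, 100), poly_rc)
print(mask.sum())  # number of pixels inside the triangle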

Rotation of 3d array and point cloud not matching

I have the following problem: I have a 3d numpy array representing a stack of images. I also have a point cloud (n,3) of some relevant locations in the 3d array, which I have plotted in there. I want to rotate both by an angle around the x-axis. I do the following:
#for point cloud
rotation_matrix = scipy.spatial.transform.Rotation.from_euler('x', x_rot, degrees=True)
rotated_stack_points = rotation_matrix.apply(stack_points)
#for 3d stack
rotated_marked_xy = scipy.ndimage.interpolation.rotate(gfp_stack, angle=-x_rot, axes=(1, 2), reshape=True)
Then I plot the rotated point cloud in the rotated 3d array. The problem is that there is always an xy offset between the points plotted before rotation and after rotation, and I can't figure out why. This offset changes with the rotation angle; it's not fixed. Any ideas why this happens?
If you rotate by the same angle around different centers of rotation, one output will be translated compared to the other.
The rotation obtained from from_euler is performed about the origin:
import scipy.spatial.transform
rotation_matrix = scipy.spatial.transform.Rotation.from_euler('x', 30, degrees=True)
rotation_matrix.apply([0, 0, 0])  # -> array([0., 0., 0.]): the origin stays fixed
The ndimage.rotate method rotates around the center of the image, and the reshape=True will extend the borders so that the corners lie in the new box.
import scipy.ndimage
import numpy as np
import matplotlib.pyplot as plt
x = np.ones((3, 100, 100))
xr = scipy.ndimage.rotate(x, 30, axes=(1,2))
plt.imshow(xr[0])
print(xr.shape)  # (3, 137, 137)
You can change the center of rotation by applying the sequence translate(-c), rotate, translate(+c).
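A minimal sketch of that sequence for the point cloud, with an illustrative center c (in practice it would be the stack's center of rotation used by ndimage.rotate):
import numpy as np
import scipy.spatial.transform
rotation = scipy.spatial.transform.Rotation.from_euler('x', 30, degrees=True)
points = np.array([[10.0, 20.0, 30.0],
                   [40.0, 50.0, 60.0]])
c = np.array([0.0, 50.0, 50.0])  # hypothetical center of rotation
# translate(-c), rotate, translate(+c)
rotated_points = rotation.apply(points - c) + c
print(rotated_points)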

How to create a sphere image with smooth boundaries

Trying to create a ghost 3D image with a known volume, to serve as a calibrated gold standard: a sphere.
I've tried the code below, but it doesn't give me what I really want.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import medpy.io as mdpy
from scipy import signal
def create_bin_sphere(arr_size, center, r):
    coords = np.ogrid[:arr_size[0], :arr_size[1], :arr_size[2]]
    distance = np.sqrt((coords[0] - center[0])**2 + (coords[1]-center[1])**2 + (coords[2]-center[2])**2)
    return 10*(distance <= r)
arr_size = (100,100,100)
sphere_center = (50,50,50)
r=30
sphere = create_bin_sphere(arr_size,sphere_center, r)
kernel = np.array([[[0,1,2,3,4,5,6,7,8,9,10,9,8,7,6,5,4,3,2,1,0]]])
sphere_smooth = signal.oaconvolve(sphere, kernel, mode="same")
from medpy.io import load, save
save(sphere_smooth, r'x\x\x\x\x\sphere_smooth.mhd')
This does not work as I wish (screenshot: the result viewed in ITK-SNAP, x plane).
I would rather the sphere kept its shape (the "smoothed" version becomes ovoid), have the blur effect on the whole circumference with a thinner transition, and finally invert the density (white outside and grey inside the sphere).
If you can help me I will appreciate it.
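A minimal sketch of one way to address these points (not from the original thread): the 1D kernel above blurs along a single axis, which is what makes the result ovoid, whereas an isotropic filter such as scipy.ndimage.gaussian_filter blurs the whole circumference equally; the density can then be inverted by subtracting from the maximum. The sigma value is an illustrative choice controlling the boundary thickness:
import numpy as np
from scipy.ndimage import gaussian_filter
arr_size = (100, 100, 100)
center = (50, 50, 50)
r = 30
coords = np.ogrid[:arr_size[0], :arr_size[1], :arr_size[2]]
distance = np.sqrt((coords[0]-center[0])**2
                   + (coords[1]-center[1])**2
                   + (coords[2]-center[2])**2)
sphere = 10.0*(distance <= r)
# Isotropic blur: the transition band is equally wide in every direction,
# so the smoothed object stays spherical; smaller sigma -> thinner boundary.
sphere_smooth = gaussian_filter(sphere, sigma=2)
# Invert the density: bright outside, dark inside.
sphere_inverted = sphere_smooth.max() - sphere_smooth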

determining the average colour of a given circular sample of an image?

What I am trying to achieve is similar to photoshop/gimp's eyedropper tool: take a round sample of a given area in an image and return the average colour of that circular sample.
The simplest method I have found is to take a 'regular' square sample, mask it as a circle, then reduce it to 1 pixel, but this is very CPU-demanding (especially when repeated millions of times).
A more mathematically complex method is to take a square area and average only the pixels that fall within a circular area within that sample, but determining what pixel is or isn't within that circle, repeated, is CPU-demanding as well.
Is there a more succinct, less-CPU-demanding means to achieve this?
Here's a little example of skimage.draw.circle(), which doesn't actually draw a circle but gives you the coordinates of the points within a circle, which you can use to index Numpy arrays.
#!/usr/bin/env python3
import numpy as np
from skimage.io import imsave
from skimage.draw import circle
# Make rectangular canvas of mid-grey
w, h = 200, 100
img = np.full((h, w), 128, dtype=np.uint8)
# Get coordinates of points within a central circle
Ycoords, Xcoords = circle(h//2, w//2, 45)
# Make all points in circle=200, i.e. fill circle with 200
img[Ycoords, Xcoords] = 200
# Get mean of points in circle
print(img[Ycoords, Xcoords].mean()) # prints 200.0
# DEBUG: Save image for checking
imsave('result.png',img)
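Note that in recent versions of scikit-image, skimage.draw.circle() has been removed in favour of skimage.draw.disk(), which takes the centre as a single (row, col) tuple but returns the same coordinate arrays: Ycoords, Xcoords = disk((h//2, w//2), 45).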
I'm sure that there's a more succinct way to go about it, but:
import math
import numpy as np
import imageio as ioimg # as scipy's i/o function is now deprecated
from skimage.draw import circle
import matplotlib.pyplot as plt
# base sample dimensions (rest below calculated on this).
# Must be an odd number.
wh = 49
# tmp - this placement will be programmed later
dp = 500
#load work image (from same work directory)
img = ioimg.imread('830.jpg')
# convert to numpy array (dropping the alpha while we're at it)
np_img = np.array(img)[:,:,:3]
# take sample of resulting array
sample = np_img[dp:wh+dp, dp:wh+dp]
#==============
# set up numpy circle mask
## this mask will be multiplied against each RGB layer in extracted sample area
# set up basic square array
sample_mask = np.zeros((wh, wh), dtype=np.uint8)
# set up circle centre coords and radius values
xy, r = math.floor(wh/2), math.ceil(wh/2)
# use these values to populate circle area with ones
rr, cc = circle(xy, xy, r)
sample_mask[rr, cc] = 1
# add axis to make array multiplication possible (do I have to do this?)
sample_mask = sample_mask[:, :, np.newaxis]
result = sample * sample_mask
# count number of nonzero values (this will be our mean divisor)
nz = np.count_nonzero(sample_mask)
sample_color = []
for c in range(result.shape[2]):
    sample_color.append(int(round(np.sum(result[:,:,c])/nz)))
print(sample_color) # will return array like [225, 205, 170]
plt.imshow(result, interpolation='nearest')
plt.show()
Perhaps asking this question here wasn't necessary (it has been a while since I've python-ed, and I was hoping some new library had been developed for this in the meantime), but I hope this can be a reference for others with the same goal.
This operation will be performed for every pixel in the image (sometimes millions of times) for thousands of images (scanned pages), hence my performance worries, but thanks to numpy this code is pretty quick.
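For reference, a minimal sketch of a vectorized alternative under the same assumptions (a square sample of odd side wh): build the circular mask once as a boolean array and average with fancy indexing, which avoids the per-channel loop entirely.
import numpy as np
wh = 49                                    # square sample side, odd as above
r = wh // 2
yy, xx = np.ogrid[:wh, :wh]
mask = (xx - r)**2 + (yy - r)**2 <= r**2   # boolean circular mask
# Stand-in RGB sample; in practice this is the slice taken from the image.
sample = np.random.randint(0, 256, (wh, wh, 3), dtype=np.uint8)
sample_color = sample[mask].mean(axis=0)   # per-channel mean over the circle
print(sample_color)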

Python: Rotate mxn array to an arbitrary angle

I have a matrix filled with zeros, and a rectangle filled with ones in a region of this matrix, like this:
I want to rotate the rectangle by an arbitrary angle (30° in this case), like this:
import numpy as np
import matplotlib.pyplot as plt
n_x = 200
n_y = 200
data = np.zeros((n_x, n_y))
data[20:50, 20:40] = 1
plt.imshow(data)
plt.show()
How about using scipy?
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import rotate
n_x = 200
n_y = 200
data = np.zeros((n_x, n_y))
data[20:50, 20:40] = 1
angle = 30
data = rotate(data, angle)
plt.imshow(data)
plt.show()
Of course this rotates around the middle of the image. If you want to rotate around the center of the rectangle, I would suggest translating it to the middle of the image, rotating it, and then translating it back, as sketched below.
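A minimal sketch of that idea, reusing the rectangle from above and using scipy.ndimage.shift for the translations; reshape=False keeps the array size fixed so the shift back lines up:
import numpy as np
from scipy.ndimage import rotate, shift
data = np.zeros((200, 200))
data[20:50, 20:40] = 1
rect_center = np.array([34.5, 29.5])               # (row, col) center of the rectangle
img_center = (np.array(data.shape) - 1) / 2
moved = shift(data, img_center - rect_center)      # rectangle to image center
rotated = rotate(moved, 30, reshape=False)         # rotate about the image center
result = shift(rotated, rect_center - img_center)  # and translate back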
From a mathematical point of view you could solve the problem by transforming
the cartesian coordinates of the ones into polar coordinates relative to the center of the rectangle.
r = sqrt(x²+y²)
phi = atan2(y,x)
(Note that x and y have to be relative to the center of rotation)
With polar coordinates it is no problem to rotate, since you just have to add the desired angle to phi and then transform back into cartesian coordinates.
x = r*cos(phi)
y = r*sin(phi)
(And again, the resulting coordinates would be with respect to the center of rotation, so you need to add them to the cartesian vector which points to the center)
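As a minimal sketch in code, for a single point (x, y) given relative to the center of rotation:
import numpy as np
x, y = 10.0, 5.0                 # coordinates relative to the rotation center
angle = np.deg2rad(30)           # desired rotation angle
r = np.hypot(x, y)               # to polar
phi = np.arctan2(y, x)
x_new = r*np.cos(phi + angle)    # add the angle, back to cartesian
y_new = r*np.sin(phi + angle)
print(x_new, y_new)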
