My problem is the following:
I have two points in an image; I compute the angle between them and rotate the image by that angle. I then need the new positions of these points in the rotated image, but when I rotate the points with a rotation matrix using the same angle, they do not land where they should. What is wrong with the following code?
from numpy import asarray, cos, sin, arctan, pi
from scipy import ndimage
from pylab import figure, imread, imshow, plot, savefig, clf

def rotMat(angle):
    return asarray([[cos(angle), -sin(angle)],
                    [sin(angle),  cos(angle)]])

for i in batch2:  # batch2 holds the data rows; the filename is in the last field
    figure(1)
    filename = "../imagens/" + i[-1]
    outputFile = "./output/" + i[-1]
    x1 = float(i[22])  # x coordinate of first point
    y1 = float(i[23])  # y coordinate of first point
    x2 = float(i[34])  # x coordinate of second point
    y2 = float(i[35])  # y coordinate of second point
    # angle of rotation
    angle = arctan((y1 - y2) / (x1 - x2))
    im = imread(filename)
    im = ndimage.rotate(im, angle * 180 / pi, reshape=False)
    imshow(im)
    p1 = asarray([x1, y1])
    p2 = asarray([x2, y2])
    # Rotating the points
    # [512, 680] is the center of the image
    p1n = (p1 - [512, 680]).dot(rotMat(angle)) + [512, 680]
    p2n = (p2 - [512, 680]).dot(rotMat(angle)) + [512, 680]
    print(p1n, p2n)
    plot(p1n[0], p1n[1], 'd')
    plot(p2n[0], p2n[1], 'd')
    savefig(outputFile)
    clf()
I don't understand 100% what you are doing, but did you consider that the y-axis in an image runs from 0 at the top to positive values for lower points? Its direction is therefore opposite to the usual mathematical convention. You defined rotMat in the usual way, but you have to adapt it to the image's y-axis, which runs in the opposite direction.
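To make that concrete, here is a minimal sketch (my illustration, not the asker's code) of the two sign issues to watch for: in y-down image coordinates the standard matrix rotates the opposite way on screen, so the angle should be negated, and v.dot(R) multiplies by the transpose of R, which flips the direction a second time. Since ndimage.rotate's direction convention is easy to get backwards, verify against a test image.

import numpy as np

def rot_mat_image(angle):
    # Standard CCW rotation matrix, with the angle negated: in image
    # coordinates the y-axis points down, so the usual matrix appears
    # to rotate clockwise on screen.
    a = -angle
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

center = np.array([512.0, 680.0])
# R.dot(v) applies R; v.dot(R) would apply R transposed, i.e. the
# opposite rotation. angle, p1 and p2 are reused from the question.
p1n = rot_mat_image(angle).dot(p1 - center) + center
p2n = rot_mat_image(angle).dot(p2 - center) + center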
I'm using python and I have some images on my canvas that are rotated by different angles.
What I want to do is to get their coordinates, given their previous position, the axis, and the angle they were rotated by.
It would also be enough if I could find the coordinates of just one point after rotation.
To get a coordinate after it has been rotated, I would use numpy and a rotation matrix to derive its position:
import numpy as np

# define parameters
pixel_column = 10
pixel_row = 20
center_column = 50
center_row = 50
# angle goes counter-clockwise!
rotation_angle_deg = 90

def get_rotation_matrix(angle_deg: float) -> np.ndarray:
    theta = np.radians(angle_deg)
    cos_theta, sin_theta = np.cos(theta), np.sin(theta)
    rotation = np.array(((cos_theta, -sin_theta), (sin_theta, cos_theta)))
    return rotation

def rotate_coordinate(coordinate: np.ndarray, angle_deg: float) -> np.ndarray:
    rotation = get_rotation_matrix(angle_deg)
    rotated = rotation.dot(coordinate)
    return rotated

def get_new_coordinate(original_coordinate: np.ndarray, center_of_rotation: np.ndarray, angle_deg: float) -> np.ndarray:
    delta = original_coordinate - center_of_rotation
    delta_rotated = rotate_coordinate(delta, angle_deg)
    new_coordinate = center_of_rotation + delta_rotated
    return new_coordinate

# calculate new rotated coordinate
rotated_coordinate = get_new_coordinate(
    np.array((pixel_row, pixel_column)),
    np.array((center_row, center_column)),
    rotation_angle_deg
)
# round before int() to avoid truncation of values like 89.999...
new_pixel_row, new_pixel_column = [int(round(val)) for val in rotated_coordinate]

# show as text
print(f"""
After rotating the image at an angle of {rotation_angle_deg}° around ({center_row}, {center_column}),
the pixel located previously at ({pixel_row}, {pixel_column}) is now at ({new_pixel_row}, {new_pixel_column}).
""")
I have an algorithm question. I am currently working on a script that generates images of an object from various angles inside Unreal Engine and pairs these images with the coordinates of the object. The way it works is that I have the object at the origin, and I generate random spherical coordinates to place my camera at. I then rotate my camera to face the object and do an extra rotation so that the object can lie anywhere in my camera's FOV. I now want to consider my camera as the origin and find the spherical coordinates of the object relative to the camera.
Currently, I am trying to derive the coordinates as in the code below. I start by noting that the radial distance between the object and the camera is the same regardless of which one is the origin. Then, I use the fact that the angles between my camera and my object are determined entirely by the extra rotation at the end of my camera placement. Finally, I try to find a rotation that will orient the object the same way as in the image, based on the angular coordinates of the camera. (I do this because I want to encode information about points on the object besides the center. For example, I am currently using a 1 meter cube as a placeholder object, and I want to keep track of the coordinates of the corners. I chose rotations because I can build a rotation matrix from them and use it to convert my coordinates.) Below is the code I use to do this (the AirSim library is used here, but all you need to know is that airsim.Pose() takes a Euclidean position coordinate and a quaternion rotation as arguments to position my camera).
import math
import random
import airsim

PRECISION_ANGLE = 4    # Fractions of a degree used in generating random pitch, roll, and yaw values
PRECISION_METER = 100  # Fractions of a meter used in generating random distance values
RADIUS_MAX = 20        # Maximum distance from the obstacle to be expected
# TODO: Replace minimum distance with a test for detecting if the camera is inside the obstacle
RADIUS_MIN = 3         # Minimum distance from the obstacle to be expected. Set this value large enough so that the camera will not spawn inside the object

# Camera details should match settings.json
IMAGE_HEIGHT = 144
IMAGE_WIDTH = 256
FOV = 90
# TODO: Vertical FOV rounds down for generating random integers. Some pictures will not be created
VERT_FOV = FOV * IMAGE_HEIGHT // IMAGE_WIDTH

def polarToCartesian(r, theta, phi):
    return [
        r * math.sin(theta) * math.cos(phi),
        r * math.sin(theta) * math.sin(phi),
        r * math.cos(theta)]

while 1:
    # generate a random position for our camera
    r = random.randint(RADIUS_MIN * PRECISION_METER, RADIUS_MAX * PRECISION_METER) / PRECISION_METER
    phi = random.randint(0, 360 * PRECISION_ANGLE) / PRECISION_ANGLE
    theta = random.randint(0, 180 * PRECISION_ANGLE) / PRECISION_ANGLE

    # Convert polar coordinates to cartesian for AirSim
    pos = polarToCartesian(r, math.radians(theta), math.radians(phi))

    # Generate a random offset for the camera angle
    pitch = random.randint(0, VERT_FOV * PRECISION_ANGLE) / PRECISION_ANGLE - VERT_FOV / 2
    # TODO: Rotating the drone causes the obstacle to be removed from the image because the camera is not square
    #roll = random.randint(0, 360 * PRECISION_ANGLE) / PRECISION_ANGLE
    roll = 0
    yaw = random.randint(0, FOV * PRECISION_ANGLE) / PRECISION_ANGLE - FOV / 2

    # Calculate coordinates of the center of the obstacle relative to the drone's new position and orientation
    obs_r = r
    obs_phi = yaw
    obs_theta = 90 - pitch

    # Convert polar coordinates to cartesian for AirSim
    obs_pos = polarToCartesian(obs_r, math.radians(obs_theta), math.radians(obs_phi))

    # Record rotational transformation on obstacle for calculating coordinates of key locations relative to the center
    obs_phi_offset = -phi
    obs_theta_offset = 270 - theta

    # Move the camera to our calculated position
    camera_pose = airsim.Pose(airsim.Vector3r(pos[0], pos[1], pos[2]),
                              airsim.to_quaternion(math.radians(90 - theta + pitch),
                                                   math.radians(roll),
                                                   math.radians(phi + 180 + yaw)))  # radians
Is this algorithm implemented correctly? What other ways could I find the coordinates of my object? Should I be doing something in Unreal Engine to get my coordinates instead of doing this algorithmically (though it needs to be fast)?
A translation of the origin by Vector3r(i, j, k) simply translates the original output by the same vector:
camera_pose = airsim.Pose(airsim.Vector3r(pos[0] + i, pos[1] + j, pos[2] + k), airsim.to_quaternion(math.radians(90 - theta + pitch), math.radians(roll), math.radians(phi + 180 + yaw)))  # radians
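More generally, and independent of AirSim, here is a minimal numpy sketch (my own, with placeholder values) of the sanity check the question is after: once the camera pose is known, the object's coordinates in the camera frame follow from the inverse rigid transform, and the spherical angles fall out of arccos/arctan2.

import numpy as np

# Placeholder pose: camera position and a 3x3 matrix R_wc whose columns are
# the camera axes expressed in world coordinates (e.g. built from the
# quaternion passed to airsim.Pose)
cam_pos = np.array([3.0, 4.0, 0.0])
R_wc = np.eye(3)
obj_pos = np.array([0.0, 0.0, 0.0])  # object at the world origin

# Inverse rigid transform: world point -> camera frame
p_cam = R_wc.T.dot(obj_pos - cam_pos)

# Spherical coordinates of the object as seen from the camera
r = np.linalg.norm(p_cam)
theta = np.degrees(np.arccos(p_cam[2] / r))       # polar angle from +z
phi = np.degrees(np.arctan2(p_cam[1], p_cam[0]))  # azimuth in the x-y plane
print(r, theta, phi)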
I need to undistort the pixel coordinates of an image and have the corrected coordinates returned. I do not want an undistorted image returned, just the corrected coordinates of the pixels. The camera is calibrated, and I have the camera intrinsic parameters and the distortion coefficients. I am using OpenCV in Python 3.
I have read as much of the theory as I can find, along with related questions here. The key reference is:
https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
This pretty clearly describes the radial and tangential distortion that need to be considered.
Radial:
x_{corrected} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_{corrected} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
Tangential:
x_{corrected} = x + [2 p_1 x y + p_2(r^2 + 2x^2)]
y_{corrected} = y + [p_1(r^2 + 2y^2) + 2 p_2 x y]
I suspect that I can't simply apply these corrections sequentially. Perhaps there is a function that does what I want directly anyway, and I'd love to hear about that.
I can't simply use the normal undistort procedure on the image, as I am attempting to apply an IR camera's distortion correction to the depth data from the same camera. If you undistort a depth image like this, you split pixels across coordinates and the result makes no sense. Hopefully I am on the right track with this.
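For reference, a minimal sketch (my addition, not from the original post) of how OpenCV combines the radial and tangential terms in one forward step on normalized coordinates x = (u - CentreX)/fx, y = (v - CentreY)/fy. It maps undistorted points to distorted ones, so undoing it has no closed form; OpenCV inverts it iteratively, which is what the undistortPoints answer below relies on.

import numpy as np

def apply_distortion(x, y, k1, k2, p1, p2, k3=0.0):
    # Forward OpenCV distortion model on normalized image coordinates;
    # the radial and tangential terms are summed in one step, not
    # applied sequentially.
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return x_d, y_d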
The code so far:
import numpy as np
import cv2

imgIR = cv2.imread("/20190529-150017-305-1235-depth.png", 0)
# you could try this on any image...
h, w = imgIR.shape[:2]

# Pixel index grids (could also use np.meshgrid)
X = np.array([i for i in range(0, w)] * h)
X = X.reshape(h, w)
Y = np.array([[i] * w for i in range(0, h)])

fx = 483.0       # x focal length
fy = 490.2       # y focal length
CentreX = 361.4  # optical centre of the image - x axis
CentreY = 275.6  # optical centre of the image - y axis

# Coordinates of each pixel relative to the optical centre,
# done as a scalar subtraction rather than with loops
Xref = X - CentreX
Yref = Y - CentreY

# "scaling factor" refers to the relation between depth units and meters
scalingFactor = 18.0 / 36.0  # 18 pixels / 36 mm
# I'm not sure what this should be yet -- whether [pixels at the shelf]/mm
# or mm/[pixels at the shelf]
Z = imgIR / scalingFactor

# using numpy
Xcoord = np.multiply(Xref, Z / fx)
Ycoord = np.multiply(Yref, Z / fy)

# how to correct these coords for the radial and tangential distortion?
# distortion coefficients as returned by cv2.calibrateCamera:
dstvec = np.array([[-0.1225, -0.0159, 0.001616, -0.0018924, -0.00120696]])
What I am looking for is a new matrix of undistorted X coordinates (radial and tangential distortion removed) and a matrix of undistorted Y coordinates, with each matrix element representing one of the original pixels.
Thanks for your help!
I think you are looking for OpenCV's undistortPoints (https://amroamroamro.github.io/mexopencv/matlab/cv.undistortPoints.html).
import numpy as np
import cv2

# x_coordinate, y_coordinate, intrinsics_mat and dist_coefficients are assumed defined
px_distorted = np.zeros((1, 1, 2), dtype=np.float32)
px_distorted[0][0][0] = x_coordinate
px_distorted[0][0][1] = y_coordinate
px_undistorted = cv2.undistortPoints(px_distorted, intrinsics_mat, dist_coefficients)
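One caveat worth adding (my note, based on OpenCV's documented behaviour): called this way, undistortPoints returns normalized coordinates. Passing the camera matrix as the P argument maps the result back into pixel coordinates. A sketch for undistorting every pixel at once, reusing the intrinsics and distortion vector from the question (image size assumed):

import numpy as np
import cv2

# Intrinsics from the question
K = np.array([[483.0,   0.0, 361.4],
              [  0.0, 490.2, 275.6],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.1225, -0.0159, 0.001616, -0.0018924, -0.00120696])

h, w = 480, 640  # assumed image size

# One (u, v) entry per pixel, shaped Nx1x2 as undistortPoints expects
u, v = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
pts = np.stack([u.ravel(), v.ravel()], axis=-1).reshape(-1, 1, 2)

# P=K projects the normalized result back into pixel coordinates
undist = cv2.undistortPoints(pts, K, dist, P=K).reshape(h, w, 2)
Xund, Yund = undist[..., 0], undist[..., 1]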
I have a boat moving along a transect, looking for animals. Someone standing on top of the boat, facing forward, logs the distance from the boat and the bearing from the bow when an animal is seen. I have this information, as well as the xy coordinates of the boat at the moment the animal was seen. From this I need to derive the xy coordinates of the animal itself.
I don't have the original compass bearing of the boat, which makes this tricky; but I do have the next GPS (xy) coordinate of the boat, from which I can calculate a starting angle. From that, it should be possible to add or subtract the bearing at which the animal was seen to give a normalised angle, which can be used to find the xy coordinates of the animal using trigonometry. Unfortunately my maths skills aren't quite up to the job.
I have several hundred points so I need to put this into a Python script to go through all the points.
In summary, the dataset has:
Original X, Original Y, End(next) X, End(next) Y, Bearing, Distance
EDIT: Sorry, I was in a rush and didn't explain this very well.
I see three stages to this problem:
1. Finding the original bearing of the transect
2. Finding the bearing of the point relative to the transect
3. Finding the new coordinates of the point based on this normalised angle and the distance from the boat at the start xy
The Python code I had originally is below, although it's not much use; the figures given are examples.
import math

# origX, origY, nextX, nextY and transectLen are assumed to be defined
distFromBoat = 100
bearing = 45
lengthOpposite = origX - nextX
lengthAdjacent = origY - nextY
virtX = origX                   # virtual X
virtY = origY - lengthOpposite  # virtual Y
angle = math.degrees(math.asin(math.radians(lengthOpposite / transectLen)))
newangle = angle + bearing
newLenAdj = math.cos(newangle) * distFromBoat
newLenOpp = math.sqrt(math.pow(distFromBoat, 2) + math.pow(newLenAdj, 2)
                      - 2 * (distFromBoat * newLenAdj) * math.cos(newangle))
newX = virtX - newLenOpp
newY = origY - newLenAdj
print(str(newX) + "---" + str(newY))
Thanks for any help in advance!
There is a little problem with Matt's function, so I used atan2 to get the angle the boat is travelling.
Edit: That was more complicated than I expected. In the end you need to subtract 90 and negate to go from geo-referenced angles to trig angles.
(There is also an angles library, and probably other geography packages, that have this built in.)
This takes origX and origY, finds the trig angle, converts it to a heading, and adds the bearing to the angle determined for the transect. It then does the trig on the distance, using the angle converted back to trig degrees, -(x - 90). It is kind of warped because we are used to thinking of 0 degrees as north/up, but in trig 0 degrees is "to the right", and trig angles run counter-clockwise versus clockwise for navigation.
import math

origX = 0.0
origY = 0.0
nextX = 0.0
nextY = -1.0
distance = 100.0
bearing = 45

def angle(origX, origY, nextX, nextY):
    opp = float(nextY - origY)
    adj = float(nextX - origX)
    return math.degrees(math.atan2(adj, opp))
# atan2 seems to even work correctly (returns zero) when origin equals next

transectAngle = angle(origX, origY, nextX, nextY)
print("bearing plus trans", transectAngle + bearing)
trigAngle = -(transectAngle + bearing - 90)
print("trig equiv angle", trigAngle)
newX = origX + distance * math.cos(math.radians(trigAngle))
newY = origY + distance * math.sin(math.radians(trigAngle))
print("position", newX, newY)
Output:
-70.7106781187 -70.7106781187
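To interpret that output: the transect runs from (0, 0) to (0, -1), i.e. due south, so the heading is 180°; adding the 45° bearing gives 225° (south-west), and 100 units along that heading lands at roughly (-70.7, -70.7).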
Here is a function to print out a bunch of test cases (it uses global vars, so it should be folded into the code above):
def testcase():
    bearinglist = [-45, 45, 135, -135]
    dist = 10  # note: unused; the global distance is used below
    for bearing in bearinglist:
        print("----transect assuming relative bearing of {}------".format(bearing))
        print("{:>6} {:>6} {:>6} {:>6} {:>6} {:>6} {:>6} {:>6}".format(
            "x", "y", "tran", "head", "trigT", "trigH", "newX", "newY"))
        for x in [0, .5, -.5]:
            for y in [0, .5, 1, -.5]:
                # print("A:", x, y, angle(origX, origY, x, y))
                tA = angle(origX, origY, x, y)  # the angle() function defined above
                trigA = -(tA - 90)
                heading = tA + bearing
                trigHead = -(heading - 90)
                Xnew = distance * math.cos(math.radians(trigHead))
                Ynew = distance * math.sin(math.radians(trigHead))
                print("{:>6.1f} {:>6.1f} {:>6.1f} {:>6.1f} {:>6.1f} {:>6.1f} {:>6.1f} {:>6.1f}".format(
                    x, y, tA, heading, trigA, trigHead, Xnew, Ynew))
From what I understand, this is your problem:
You have 2 points, start and next, that you are travelling between.
You want to find the coordinates of a third point, new, that should be at some distance and bearing from start, given that you are already facing from start to next.
My solution is this:
Create a normalized vector from start to next.
Rotate your normalized vector by the given bearing.
Multiply the normalized, rotated vector by your distance, and then add it to start.
Treating start as a vector, the result of the addition is your new point.
Because a counter-clockwise rotation ("left" from the current point) is considered positive, you need to invert bearing so that port is associated with negative, and starboard with positive.
Code
import math

origX = 95485
origY = 729380
nextX = 95241
nextY = 729215
distance = 2000.0
bearing = 45

origVec = origX, origY
nextVec = nextX, nextY

# Euclidean distance between vectors (L2 norm)
dist = math.sqrt((nextVec[0] - origVec[0])**2 + (nextVec[1] - origVec[1])**2)

# Get a normalized difference vector
diffVec = (nextVec[0] - origVec[0]) / dist, (nextVec[1] - origVec[1]) / dist

# Rotate our vector by bearing to get a vector from orig towards the new point,
# and multiply by distance to get the new position.
# Invert bearing, because +45 in math is counter-clockwise (left), not starboard.
angle = math.radians(-bearing)
newVec = (origVec[0] + (diffVec[0] * math.cos(angle) - diffVec[1] * math.sin(angle)) * distance,
          origVec[1] + (diffVec[0] * math.sin(angle) + diffVec[1] * math.cos(angle)) * distance)
print(newVec)
Output:
(93521.29597031244, 729759.2973553676)
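As a quick check on that output: the new point lies 2000 units from (origX, origY), as required, and 45° to starboard of the direction of travel, which is exactly what inverting the bearing was meant to achieve.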
There's probably a more elegant solution than this, assuming I understood your issue correctly, but this will give you the bearing from your original location (inputs hard-coded as an example):
import math

origX = 0.0
origY = 0.0
nextX = 1.0
nextY = 0.0

Dist = ((nextX - origX)**2 + (nextY - origY)**2)**0.5
if origX == nextX and origY == nextY:
    angle = 0
if origX == nextX and nextY < origY:
    angle = 180
if nextY < origY and origX > nextX:
    angle = math.degrees(math.asin((nextX - origX) / Dist)) - 90
if nextX > origX and nextY < origY:
    angle = math.degrees(math.asin((nextX - origX) / Dist)) + 90
else:
    angle = math.degrees(math.asin((nextX - origX) / Dist))
print(angle)
I am a beginner in Python and I have to work on a project using Numpy.
I need to generate some points (e.g. one million) on one part of the surface of a cylinder. These points should be regularly distributed on a subregion of the surface defined by a given angle. How could I go about doing this?
My input parameters are:
position of the center of cylinder (e.g. [0,0,0] )
the orientation of cylinder
length of cylinder
radius of cylinder
angle (this defines the part of the cylinder surface on which the points should be distributed; for alpha = 360, the whole surface)
delta_l is the distance between two adjacent points in the length direction
delta_alpha is the angular spacing between two adjacent points in the alpha (rotation) direction
My output parameters :
an array containing the coordinates of all points
Could anyone help me, or give me a hint about how to do this?
Many thanks
This is taken from a previous project of mine:
import numpy as np

def make_cylinder(radius, length, nlength, alpha, nalpha, center, orientation):
    # Create the length array
    I = np.linspace(0, length, nlength)

    # Create alpha array, avoiding duplication of endpoints
    # (the conditional should be changed to meet your requirements)
    if int(alpha) == 360:
        A = np.linspace(0, alpha, num=nalpha, endpoint=False) / 180 * np.pi
    else:
        A = np.linspace(0, alpha, num=nalpha) / 180 * np.pi

    # Calculate X and Y
    X = radius * np.cos(A)
    Y = radius * np.sin(A)

    # Tile/repeat indices so all unique pairs are present
    pz = np.tile(I, nalpha)
    px = np.repeat(X, nlength)
    py = np.repeat(Y, nlength)

    points = np.vstack((pz, px, py)).T

    # Shift to center
    shift = np.array(center) - np.mean(points, axis=0)
    points += shift

    # Orient tube to new vector
    # (grabbed from an old unutbu answer - Euler-Rodrigues formula)
    def rotation_matrix(axis, theta):
        a = np.cos(theta / 2)
        b, c, d = -axis * np.sin(theta / 2)
        return np.array([[a*a+b*b-c*c-d*d, 2*(b*c-a*d), 2*(b*d+a*c)],
                         [2*(b*c+a*d), a*a+c*c-b*b-d*d, 2*(c*d-a*b)],
                         [2*(b*d-a*c), 2*(c*d+a*b), a*a+d*d-b*b-c*c]])

    ovec = orientation / np.linalg.norm(orientation)
    cylvec = np.array([1, 0, 0])

    if np.allclose(cylvec, ovec):
        return points

    # Get orthogonal axis and rotation angle
    oaxis = np.cross(ovec, cylvec)
    rot = np.arccos(np.dot(ovec, cylvec))

    R = rotation_matrix(oaxis, rot)
    return points.dot(R)
Plotted points for:
points = make_cylinder(3, 5, 5, 360, 10, [0,2,0], [1,0,0])
The rotation part is quick and dirty; you should likely double-check it. The Euler-Rodrigues formula is thanks to unutbu.
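If it helps, a minimal plotting sketch (my addition, assuming a recent matplotlib) to reproduce that figure with a 3D scatter:

import numpy as np
import matplotlib.pyplot as plt

points = make_cylinder(3, 5, 5, 360, 10, [0, 2, 0], [1, 0, 0])

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # requires matplotlib >= 3.2
ax.scatter(points[:, 0], points[:, 1], points[:, 2])
plt.show()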