Rotate image around specific point in the world coordinate space - python

I have an image in the world coordinate space, and I have found the position of a rotation center C of that image relative to the image coordinate system origin O (the image's top-left corner). Now I want to rotate the image by -Alpha to remove that rotation and standardize the images that way.
I have tried OpenCV's getRotationMatrix2D, but it just returns black images, because I need to calculate the translation in some more involved way and my current calculations do not work. There are similar questions (OpenCV Python rotate image by X degrees around specific point, Image rotation using OpenCV), but they only provide solutions that work for rotation around the image center; nothing there works for rotating the image about an arbitrary point. Does anyone know how to rotate an image around an arbitrary point in the world coordinate space using OpenCV or any other Python library?
(Xc, Yc, and R are known.)
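A minimal sketch of one way to do this with OpenCV, assuming the rotation centre (Xc, Yc) and the angle Alpha are already expressed in image (pixel) coordinates: getRotationMatrix2D accepts an arbitrary rotation centre, and a black output usually means the rotated content lands outside the original canvas, so the translation part of the matrix is adjusted here to keep everything in view. The function name and file name are hypothetical.

import cv2
import numpy as np

def rotate_about_point(img, center, angle_deg):
    """Rotate img by angle_deg (counter-clockwise) about an arbitrary point
    `center` given in image (pixel) coordinates, enlarging the canvas so no
    content is clipped."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)

    # Transform the image corners to find the bounding box of the result.
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
    moved = cv2.transform(corners[None], M)[0]
    x_min, y_min = moved.min(axis=0)
    x_max, y_max = moved.max(axis=0)

    # Shift so the whole rotated image lands inside the new canvas.
    M[0, 2] -= x_min
    M[1, 2] -= y_min
    out_size = (int(np.ceil(x_max - x_min)), int(np.ceil(y_max - y_min)))
    return cv2.warpAffine(img, M, out_size)

# Example (names are placeholders): undo a rotation of Alpha degrees about
# the point C = (Xc, Yc) found in image coordinates.
# img = cv2.imread("scan.png")
# fixed = rotate_about_point(img, (Xc, Yc), -Alpha)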

Related

Finding the angle of rotation relative to a normal of a 2D image with TensorFlow

I am new to TF, so what is the general process for finding the angle of a 2D image? What I want to do is find the angle of rotation of a particular object in real time.
Here's an example of what I am trying to achieve: [example images of the object at several rotation angles] and every angle in between (to at least 0.1° precision).
I am assuming that I might need to detect the base image (angle = 0) and then compare any other rotated images to it?
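A minimal sketch of that brute-force idea, written with OpenCV rather than TensorFlow (an assumption, since the question does not fix a framework): rotate the base (angle = 0) template in small increments and keep the angle whose rotated copy matches the observed image best. estimate_rotation and the 0.1° step are illustrative only.

import cv2
import numpy as np

def estimate_rotation(template, image, step=0.1):
    """Rotate the angle-0 template in `step`-degree increments and return the
    angle whose rotated copy correlates best with `image`. Assumes grayscale
    inputs, image at least as large as the template, and a roughly centred
    object."""
    h, w = template.shape[:2]
    center = (w / 2, h / 2)
    best_angle, best_score = 0.0, -np.inf
    for angle in np.arange(0.0, 360.0, step):
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(template, M, (w, h))
        score = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle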

Folium - Extracting an nxn image of mxm meters

I am using Folium in Python to extract maps. Given a coordinate, I want to extract an image of the m x m metre square around that coordinate. So, using pyproj, I project to UTM to work in regular metres, create the m x m square, and project back to get the coordinates of the bounding box's corners.
Then I've used fit_bounds with those corners to get my n x n picture. However, the output is still a rectangle. Sure, I can use Pillow to crop the image after the fact, but I need more control over how many metres the image covers... And right now I am not sure what I am actually getting.
What is the best way to extract a square image using Folium? Let's say I want to extract a map covering the 100 x 100 metre area centred on the coordinates (48.8584, 2.2945).
What is the best approach to get this map?
I figured out how to control this through the zoom level.
OpenStreetMap's wiki has a page with information regarding the different zoom levels.
To figure out how much of the real world is covered by a single pixel, formulas are provided there; the coverage is a function of the zoom level and the latitude at which the map is extracted:
s_pixel = C * cos(latitude) / (2 ** (zoomlevel + 8))
where s_pixel is the ground size of one pixel in metres and C is the Earth's equatorial circumference (about 40,075,016.686 m).
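A small sketch of how that formula can be used to pick a zoom level for a target extent; the function names, the 512-pixel output size, and the rounding choice are assumptions for illustration:

import math

C = 40075016.686  # Earth's equatorial circumference in metres

def metres_per_pixel(lat_deg, zoom):
    # OSM formula: ground size of one pixel at this latitude and zoom level.
    return C * math.cos(math.radians(lat_deg)) / 2 ** (zoom + 8)

def zoom_for_extent(lat_deg, metres, pixels):
    # Solve metres / pixels = C * cos(lat) / 2**(zoom + 8) for zoom.
    target = metres / pixels
    return math.log2(C * math.cos(math.radians(lat_deg)) / target) - 8

# Example: a 100 m x 100 m area around (48.8584, 2.2945), rendered 512 px wide.
z = zoom_for_extent(48.8584, 100, 512)
print(z, metres_per_pixel(48.8584, round(z)))  # roughly zoom 19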

How to find the largest empty rectangle using OpenCV?

I need to find the coordinates of the largest empty rectangle in a PNG image. The rectangle should consist of light colors (if that is too difficult, white pixels only are fine) and should be axis-oriented.
I am new to computer vision and found out about OpenCV; I am currently using its Python interface. I started tackling this problem with the SimpleBlobDetector interface, but it only gives me the center of a blob with a certain radius.
Can anyone point me in the right direction for this?
EDIT: I need to do this with a regular colored PNG image, not a binary matrix
You can use a contour extractor. With the resulting point lists you can check the size of each rectangle by checking the sizes of the lists, assuming that all the rectangles are parallel to the cardinal axes. If they are not, you need to compute the distance from each pixel to the next for all the pixels in the contour list, using the x and y coordinates of each.
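A rough sketch of that contour idea in the Python interface, assuming "light" means pixels above a fixed brightness threshold and the OpenCV 4 findContours signature; the threshold value and file name are placeholders:

import cv2

img = cv2.imread("page.png")  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, light = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(light, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

best = None
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # A contour that fills most of its bounding box is roughly an
    # axis-aligned rectangle of light pixels.
    if cv2.contourArea(c) > 0.95 * w * h:
        if best is None or w * h > best[2] * best[3]:
            best = (x, y, w, h)

print(best)  # (x, y, width, height) of the largest light rectangle, if any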

OpenCV: solvePnP tvec units and axes directions

I'm trying to find the relative position of the camera to the chessboard (or the other way around) - I feel OK with converting between different coordinate systems, e.g. as suggested here. I decided to use chessboard not only for calibration but actual position determination as well at this stage, since I can use the findChessboardCorners to get the imagePoints (and this works OK).
I've read a lot on this topic and feel that I understand the solvePnP outputs (even though I'm completely new to OpenCV and computer vision in general). Unfortunately, the results I get from solvePnP differ from physically measuring the test set-up: the translation in the z direction is off by approximately 25%, and the x and y directions are completely wrong, off by several orders of magnitude and in a different direction from what I've read the camera coordinate system to be (x pointing up the image, y to the right, z away from the camera). The difference persists if I convert tvec and rvec to the camera pose in world coordinates.
My questions are:
What are the directions of camera and world coordinate systems' axes?
Does solvePnP output the translation in the same units as I specify the objectPoints?
I specified the world origin as the first of the objectPoints (one of the chessboard corners). Is that OK and is tvec the translation to exactly that point from the camera coordinates?
This is my code (I attach it pro forma, as it does not throw any exceptions etc.). I used grayscale images to get the camera intrinsic matrix and distortion coefficients during calibration, so I decided to perform localisation in grayscale as well. chessCoordinates is a list of chessboard point locations in mm with respect to the origin (one of the corner points). camMatrix and distCoefficients come from calibration (performed using the same chessboard and objectPoints).
import cv2
import numpy as np

camCapture = cv2.VideoCapture(0)  # take a picture of the target to get the imagePoints
ret, tempImg = camCapture.read()
imgPts = []
tgtPts = []
tempImg = cv2.cvtColor(tempImg, cv2.COLOR_BGR2GRAY)
found_all, corners = cv2.findChessboardCorners(tempImg, chessboardDim)
imgPts.append(corners.reshape(-1, 2))
tgtPts.append(np.array(chessCoordinates, dtype=np.float32))
retval, myRvec, myTvec = cv2.solvePnP(objectPoints=np.array(tgtPts), imagePoints=np.array(imgPts),
                                      cameraMatrix=camMatrix, distCoeffs=distCoefficients)
The camera coordinates are the same as the image coordinates: the x axis points to the right as seen from the camera, the y axis points down, and the z axis points in the direction the camera is facing. This is a right-handed axis system, and the same applies to the chessboard, so if you specified the origin in, let's say, the upper-right corner of the chessboard, the x axis goes along the longer side to the right and the y axis along the shorter side of the chessboard, and the z axis points downward, towards the ground.
solvePnP outputs the translation in the same units as those in which you specified the length of the chessboard squares, but it may also be affected by the units used in camera calibration, since it uses the camera matrix.
tvec points to the origin of the world coordinates in which you placed the calibration object. So if you placed the first object point at (0,0), that is where tvec will point.
What are the directions of camera and world coordinate systems' axes?
The (0,0,0) corner on the board is chosen so that the X and Y axes point towards the rest of the corner points. The Z axis always points away from the board, which means it usually points somewhat in the direction of the camera.
Does solvePnP output the translation in the same units as I specify the objectPoints?
Yes
I specified the world origin as the first of the objectPoints (one of the chessboard corners). Is that OK and is tvec the translation to exactly that point from the camera coordinates?
Yes, this is pretty common. In most cases, the first chessboard corner is set as (0,0,0) and the subsequent corners are placed in the z = 0 plane (e.g. (1,0,0), (0,1,0), etc.).
The tvec, combined with the rotation, maps points from the board coordinate frame into the camera frame. In short, tvec and rvec give you the world -> camera transformation; with some basic geometry you can compute the inverse, camera -> world, transformation (the camera pose).
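A short sketch of that "basic geometry", reusing the myRvec/myTvec names from the code above and assuming they come straight from cv2.solvePnP (i.e. world -> camera):

import cv2
import numpy as np

# A world point X_w maps into the camera frame as: X_c = R @ X_w + tvec.
R, _ = cv2.Rodrigues(myRvec)          # 3x3 rotation matrix, world -> camera
cam_position_world = -R.T @ myTvec    # camera centre expressed in board coordinates
R_cam_to_world = R.T                  # rotation part of the camera pose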

Perform spherical projection of image in python

I am writing a program using PyGTK that displays a gtk.Image. The desktop is projected onto the inside of a spherical dome. If the image displayed is rectangular on the screen, once projected onto a sphere it gets distorted.
To help picture this: The desktop itself is square. The center pixel of the desktop projects to the zenith and a circle inscribed inside the square desktop becomes the horizon (0 degrees elevation in polar coordinates). Everything outside that (in the corners of the desktop) is not displayed.
I would like to somehow modify the gtk.Image such that it still appears rectangular on the spherical surface. I'm sure there are lots of details in how this projection could be done, but very simplistically I have to convert the rectangular image into a curved trapezoid. Converting to a range of polar coordinates (e.g., map this rectangle to the area between two azimuth and two elevation angles) would be a good first approximation, though you can imagine if the elevation angles are 0 and 90, the resulting image will be a wedge of the sphere and not look rectangular at all.
How can I apply transformations like this to a gtk.Image (or its underlying Pixbuf)? Is there a package already that can do this? If not, how should I go about writing it from scratch? Presumably I would have to pull out the pixel values, map them to some new grid, and replace the original image. I just don't want to reinvent something that has already been done.
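A rough, from-scratch sketch of the first approximation described above (map the rectangle onto the wedge bounded by two azimuth and two elevation angles), written with NumPy/OpenCV on the assumption that the Pixbuf's pixels have already been pulled out into a numpy array; place_on_dome and all parameter names are illustrative only:

import numpy as np
import cv2

def place_on_dome(src, desktop_size, az_range, el_range):
    """Paint a rectangular image onto the square desktop so that, after the
    dome projection (centre = zenith, inscribed circle = horizon), it occupies
    the given azimuth and elevation ranges (degrees). Uses inverse mapping
    with cv2.remap; assumes src is a BGR numpy array."""
    h_src, w_src = src.shape[:2]
    N = desktop_size
    cx = cy = (N - 1) / 2.0
    R = N / 2.0                               # inscribed-circle radius = horizon

    ys, xs = np.mgrid[0:N, 0:N].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    elevation = 90.0 * (1.0 - r / R)          # 90 deg at the centre, 0 at the horizon
    azimuth = np.degrees(np.arctan2(dy, dx)) % 360.0

    az0, az1 = az_range
    el0, el1 = el_range
    # Fractional position of each desktop pixel inside the target wedge.
    u = (azimuth - az0) / (az1 - az0)
    v = (el1 - elevation) / (el1 - el0)
    inside = (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1)

    map_x = np.where(inside, u * (w_src - 1), -1).astype(np.float32)
    map_y = np.where(inside, v * (h_src - 1), -1).astype(np.float32)
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0))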
