What do negative homogeneous coordinates signify? Why are they not useful in bilinear interpolation of image A onto the plane of another image B?
Homogeneous coordinates can be calculated as follows:
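For instance, if the mapping between the two image planes is a 3x3 homography H (an assumption here; the original formula may have used the full projection), the homogeneous coordinates come out of a matrix-vector product. A minimal sketch with hypothetical values:

import numpy as np

H = np.array([[1.0,  0.1,  5.0],    # hypothetical homography from image A to image B
              [0.0,  1.2,  3.0],
              [1e-4, 2e-4, 1.0]])

p = np.array([120.0, 45.0, 1.0])    # pixel (u, v) of image A in homogeneous form
q = H @ p                           # homogeneous coordinates (x', y', w')
u_b, v_b = q[0] / q[2], q[1] / q[2] # Cartesian position in image B

A negative w' typically means the point lies behind the projection plane, which is why such points are usually discarded before interpolation.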
But I am unable to understand what negative homogeneous coordinates mean.
During the interpolation I tried two variants: using both negative and positive homogeneous coordinates, and using only positive ones. With only positive homogeneous coordinates the projection worked as expected, but when negative coordinates were included the projection did not work well and the projected image spread over the whole output image.
I know the x, y coordinates of the blue dots shown in the image. How can I calculate the area of the irregular shape bounded by these points? The number of points on the upper and lower boundaries is different.
As suggested by @Fractalism, you can use the Shoelace formula.
Here is a Python implementation using NumPy. I took it from here:
import numpy as np

# x, y are arrays containing the coordinates of the polygon vertices
x = np.array([3, 5, 12, 9, 5])
y = np.array([4, 11, 8, 5, 6])
i = np.arange(len(x))

# Signed area, positive if the vertex sequence is counterclockwise:
# Area = np.sum(x[i - 1] * y[i] - x[i] * y[i - 1]) * 0.5

# Shoelace formula in one line of code:
Area = np.abs(np.sum(x[i - 1] * y[i] - x[i] * y[i - 1]) * 0.5)
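Note that x[i - 1] with i = 0 picks up the last vertex, so the polygon is closed automatically without repeating the first point at the end of the arrays.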
The corner points of a rectangle are given in 2D image coordinates. I also know the real-world distances between the points, and I have the camera matrix.
Now I want to find the rotation vector of the rectangle with respect to the camera, but without using the cv2.calibrateCamera() method with chessboard corners.
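A minimal sketch of one way to do this with OpenCV's solvePnP, assuming a hypothetical 20 cm x 10 cm rectangle; the corner pixels and intrinsics below are placeholders:

import numpy as np
import cv2

# 3D corners of the rectangle in its own coordinate frame (hypothetical 0.2 m x 0.1 m)
object_points = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.1, 0], [0, 0.1, 0]],
                         dtype=np.float64)
# Matching 2D corners detected in the image (placeholder pixel values)
image_points = np.array([[320, 240], [480, 250], [470, 330], [310, 320]],
                        dtype=np.float64)

camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
# rvec is the rotation vector (Rodrigues form); cv2.Rodrigues(rvec) gives the 3x3 matrix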
Given an image mask, I want to project the pixels onto a mesh with respect to the position and orientation of the camera and convert these pixels into a point cloud. I have the intrinsic and extrinsic parameters of the camera with respect to the world, and the location of the mesh in world coordinates. I know that the mapping from world coordinates to the camera image is as follows:
imgpoint = Intrinsic * Extrinsic * worldpoint
So when I want to go the other way, I apply the inverses of the intrinsic and extrinsic matrices:
worldpoint = Extrinsic^(-1) * Intrinsic^(-1) * imgpoint
However, my idea was to obtain two points from one pixel, with different depth values, so that they define a line, and then look for the closest intersection of that line with the mesh. But I do not know how to properly generate a point away from the original camera plane. How can I find this extra point, and/or am I overcomplicating this problem?
The first equation below shows how to project a point (x, y, z) onto a pixel (u, v). The extrinsic parameters are the 3x3 rotation matrix R and the translation t; the intrinsic parameters are the focal lengths f_x, f_y and the principal point (c_x, c_y). The value alpha is the perspective foreshortening term that is divided out:

alpha * [u, v, 1]^T = K * (R * [x, y, z]^T + t),  where K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

The second equation reverses the process: it casts a ray from the camera position through the pixel (u, v) out into the scene as the parameter alpha varies from 0 to infinity:

[x, y, z]^T = -R^T * t + alpha * R^T * K^(-1) * [u, v, 1]^T
Now we have converted the problem into a ray-casting problem: find the intersection of the ray with your mesh, which is a standard computer-graphics problem.
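A minimal sketch of both steps in NumPy, assuming K, R, t as defined above and a mesh given as triangles of world-space vertices; the Moller-Trumbore intersection test used here is a standard stand-in, not something specified in the answer:

import numpy as np

def pixel_ray(K, R, t, u, v):
    # Camera center and unit direction of the ray through pixel (u, v)
    origin = -R.T @ t
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    return origin, d / np.linalg.norm(d)

def ray_triangle(origin, d, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore: distance along the ray to the triangle, or None if no hit
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    s = origin - v0
    u = (s @ p) / det
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) / det
    if v < 0 or u + v > 1:
        return None
    alpha = (e2 @ q) / det
    return alpha if alpha > eps else None

# The closest hit over all triangles gives the world point for the pixel:
# origin, d = pixel_ray(K, R, t, u, v)
# alphas = [a for tri in triangles if (a := ray_triangle(origin, d, *tri)) is not None]
# world_point = origin + min(alphas) * d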
I'm looking for rectangular areas in a non-negative two-dimensional float array in NumPy where the values differ from the value at the center point of the area by less than x. The data is the output of a depth-estimation function, and I want to identify the areas whose depth values are close to each other (which could be said, for example, to be part of a wall, or of objects that are vertical and facing the camera).
For example, in the image below you can see the output of the depth-estimation function, where each pixel represents a distance between 0 and 500 cm. Any area where the differences in depth are less than some value indicates that the object is in a vertical position, and I am looking for these areas:
https://drive.google.com/file/d/1Z2Bsi5ZNoo4pFU6N188leq56vGHFfvcd/view?usp=sharing
The code I am working on is based on MiDaS; I have added my own code at the end. It is at the following link:
https://colab.research.google.com/drive/1vsfukFqOOZZjTajySM8hL0VNCvOxaa6X?usp=sharing
Now, for example, I'm looking for areas like the sheet of paper stuck behind a chair in the picture below:
https://drive.google.com/file/d/1ui99gpU2i0JFumLivLpoEyHn3QTcfD8a/view?usp=sharing
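A minimal sketch of one brute-force way to scan for such areas, assuming the depth map is a 2D float NumPy array; the window size and tolerance are hypothetical parameters. A window is kept when every pixel stays within tol of the window's center value:

import numpy as np

def flat_windows(depth, win=32, tol=10.0):
    # Top-left corners of win x win windows whose depths stay within tol
    # of the window's center pixel (hypothetical parameters)
    h, w = depth.shape
    corners = []
    for r in range(0, h - win + 1, win // 2):      # half-window stride
        for c in range(0, w - win + 1, win // 2):
            patch = depth[r:r + win, c:c + win]
            center = patch[win // 2, win // 2]
            if np.all(np.abs(patch - center) < tol):
                corners.append((r, c))
    return corners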
I am writing a function that applies an affine transformation to the input image. My function first finds the six affine-transformation parameters (a 6x1 vector), then applies these parameters to all image coordinates. The new coordinates I obtain have float values. To create a new image from these coordinates, I converted the new coordinates to integers. I made the color assignment for the new image as follows:
Let's say an input-image coordinate equals (i, j), the digital number value of this pixel equals (0, 0, 0), and (i, j) maps to (m, k) in the output image. Then I say the digital value of (m, k) equals (0, 0, 0).
I read about forward warping, but there is one point I did not understand. As I said before, I converted the new coordinates from float to integers. Can this be done in forward warping?
Please help me...
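A minimal sketch of forward warping with that kind of rounding, assuming the six parameters are arranged as a hypothetical 2x3 affine matrix A. Rounding to the nearest integer is the usual quick fix, though it leaves holes in the output, which is why inverse warping with interpolation is generally preferred:

import numpy as np

def forward_warp(img, A):
    # Forward-warp img with a 2x3 affine matrix A, rounding target coordinates
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous (x, y, 1)
    tx, ty = A @ coords                        # float target coordinates
    tx = np.round(tx).astype(int)              # nearest-integer rounding
    ty = np.round(ty).astype(int)
    ok = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)   # keep in-bounds targets only
    out[ty[ok], tx[ok]] = img[ys.ravel()[ok], xs.ravel()[ok]]
    return out

# Hypothetical parameters (a, b, c, d, e, f) as a 2x3 matrix:
# A = np.array([[1.0, 0.1, 5.0],
#               [0.0, 1.0, 3.0]])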