Output of projectPoints() function - python

I used the projectPoints() function of OpenCV to project from world coordinates to pixel coordinates. The output imagePoints is a vector of Point2f. How can I extract the x, y coordinates from imagePoints?
Second question: some of the imagePoints are negative, like below. I projected just 2 points and these are the results:
[[[-37.95361728 316.5438248 ]]
[[204.89090594 316.5144533 ]]]
If I plot these coordinates on the image without the negative sign, the result is correct.
Why do I get a negative sign, and how can I solve this?
I appreciate any help, thanks.
This is my code:
import cv2
import numpy as np
objectPoints = np.array([[-0.8565132125748637, 0.18200966481269648, 0.9606457931958912], [-0.2565132125748638, 0.18200966481269648, 0.9606457931958912]], np.float64)
tvec = np.matrix([[-0.00016514 ],[ 0.00523247 ], [-0.00371881]])
rvec = np.matrix([[0.99987256 , -0.00294761 , -0.01569025],[0.00261951 , 0.99977833 , -0.02089080],[0.01574835 , 0.02084704 , 0.99965864]])
cameraMatrix = np.matrix([[ 381.58892822265625 , 0 , 313.5216979980469 ],[ 0, 381.1356201171875 , 250.1746826171875],[ 0 , 0 , 1 ]])
distCoeffs = np.array([0, 0, 0, 0, 0], np.float64)
imagePoints, jacobian = cv2.projectPoints( objectPoints, rvec, tvec, cameraMatrix, distCoeffs )
print(imagePoints)
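For reference, cv2.projectPoints returns imagePoints with shape (N, 1, 2), one (x, y) pair per object point, so the coordinates can be pulled out by reshaping. A minimal sketch, continuing from the code above:
pts = imagePoints.reshape(-1, 2)   # drop the middle axis: one (x, y) row per point
for x, y in pts:
    print(x, y)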

Related

Python - partially cover up image

I need to black out part of the image. To do so I tried this code:
img2[0:0, 640:150] = [0, 0, 0]
img2[0:490, 640:640] = [0, 0, 0]
but it does not seem to be working. The image is a numpy array.
So my questions are:
1. Why does my image img2 look the same before and after executing these lines?
2. I need to black out everything except a rectangle. I wanted to do this by drawing 4 rectangles around the outside. Can it also be done by specifying, in a single statement, what I do NOT want to blacken, i.e. the inverse of the range?
I think you need to read up on slicing (link_1, link_2). If you choose the correct slice, a single assignment of 0 is enough.
>>> img_arr = np.random.rand(5,3,3)
>>> img_arr[1:3, 0:2, 0:3] = 0
# Or
>>> img_arr[1:3, :2, :] = 0
>>> img_arr
array([[[0.19946098, 0.42062458, 0.51795564],
[0.0957362 , 0.26306843, 0.24824746],
[0.63398966, 0.44752899, 0.37449257]],
[[0. , 0. , 0. ],
[0. , 0. , 0. ],
[0.49413734, 0.07294475, 0.8341346 ]],
[[0. , 0. , 0. ],
[0. , 0. , 0. ],
[0.18410631, 0.77498275, 0.42724167]],
[[0.60114116, 0.73999382, 0.76348436],
[0.49114468, 0.18131404, 0.01817003],
[0.51479338, 0.41674903, 0.80151682]],
[[0.67634706, 0.56007131, 0.68486408],
[0.35607505, 0.51342861, 0.75062432],
[0.44943936, 0.10768226, 0.62945455]]])
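As for the second question (blacking out everything except a rectangle), one option is to copy the region you want to keep, zero the whole array, and paste the copy back. A minimal sketch with hypothetical bounds and a placeholder image:
import numpy as np

img2 = np.random.randint(0, 256, (490, 640, 3), dtype=np.uint8)  # placeholder image
keep = img2[100:300, 150:500].copy()   # hypothetical rows/cols to keep
img2[:] = 0                            # black out everything
img2[100:300, 150:500] = keep          # restore the kept rectangle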

Why do triangulated points not project back to same image points in OpenCV?

I have two corresponding image points (2D) observed by the same camera (intrinsic matrix K) from two different camera poses (R1, t1, R2, t2). If I triangulate the corresponding image points to a 3D point and then reproject it back to the original cameras, it only closely matches the original image point in the first camera. Can someone help me understand why? Here is a minimal example showing the issue:
import cv2
import numpy as np
# Set up two cameras near each other
K = np.array([
[718.856 , 0. , 607.1928],
[ 0. , 718.856 , 185.2157],
[ 0. , 0. , 1. ],
])
R1 = np.array([
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]
])
R2 = np.array([
[ 0.99999183 ,-0.00280829 ,-0.00290702],
[ 0.0028008 , 0.99999276, -0.00257697],
[ 0.00291424 , 0.00256881 , 0.99999245]
])
t1 = np.array([[0.], [0.], [0.]])
t2 = np.array([[-0.02182627], [ 0.00733316], [ 0.99973488]])
P1 = np.hstack([R1.T, -R1.T.dot(t1)])
P2 = np.hstack([R2.T, -R2.T.dot(t2)])
P1 = K.dot(P1)
P2 = K.dot(P2)
# Corresponding image points
imagePoint1 = np.array([371.91915894, 221.53485107])
imagePoint2 = np.array([368.26071167, 224.86262512])
# Triangulate
point3D = cv2.triangulatePoints(P1, P2, imagePoint1, imagePoint2).T
point3D = point3D[:, :3] / point3D[:, 3:4]
print(point3D)
# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1)
rvec2, _ = cv2.Rodrigues(R2)
p1, _ = cv2.projectPoints(point3D, rvec1, t1, K, distCoeffs=None)
p2, _ = cv2.projectPoints(point3D, rvec2, t2, K, distCoeffs=None)
# measure difference between original image points and reprojected image points
reprojection_error1 = np.linalg.norm(imagePoint1 - p1[0, :])
reprojection_error2 = np.linalg.norm(imagePoint2 - p2[0, :])
print(reprojection_error1, reprojection_error2)
The reprojection error in the first camera is always good (< 1px) but the second one is always large.
Remember how you constructed the projection matrices: with the transpose of the rotation matrix combined with the negative of the translation vector. You must do the same thing when passing the pose into cv2.projectPoints.
Therefore, take the transpose of the rotation matrix and pass it through cv2.Rodrigues, and supply the negative of the translation vector to cv2.projectPoints:
# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1.T) # Change
rvec2, _ = cv2.Rodrigues(R2.T) # Change
p1, _ = cv2.projectPoints(point3D, rvec1, -t1, K, distCoeffs=None) # Change
p2, _ = cv2.projectPoints(point3D, rvec2, -t2, K, distCoeffs=None) # Change
Doing this we now get:
[[-12.19064 1.8813655 37.24711708]]
0.009565768222768252 0.08597237597736622
To be absolutely sure, here are the relevant variables:
In [32]: p1
Out[32]: array([[[371.91782052, 221.5253794 ]]])
In [33]: p2
Out[33]: array([[[368.3204979 , 224.92440583]]])
In [34]: imagePoint1
Out[34]: array([371.91915894, 221.53485107])
In [35]: imagePoint2
Out[35]: array([368.26071167, 224.86262512])
We can see that the first few significant digits match, and we expect a slight loss in precision because the triangulation is a least-squares solution for where the points intersect.
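Since the projection matrices above were built as P = K [R.T | -R.T t], the extrinsics cv2.projectPoints expects are the transposed rotation together with -R.T.dot(t); with R1 equal to the identity this is exactly -t1, and for the nearly-identity R2 it is close to -t2, which may account for the slightly larger residual in the second camera. A minimal sketch of this exact variant, reusing the variables defined above:
# Exact world-to-camera extrinsics matching P2 = K [R2.T | -R2.T t2]
rvec2_exact, _ = cv2.Rodrigues(R2.T)
tvec2_exact = -R2.T.dot(t2)
p2_exact, _ = cv2.projectPoints(point3D, rvec2_exact, tvec2_exact, K, distCoeffs=None)
print(np.linalg.norm(imagePoint2 - p2_exact[0, 0]))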

find Camera Center with opencv

I'm trying to get the camera center from a calibrated camera.
I have 4 measured 3D objectPoints and their images, and I'm trying to get the center (translation) from the projection matrix, with no acceptable results.
Any advice regarding the accuracy I should expect with OpenCV? Should I increase the number of points?
These are the results I got:
TrueCenter in mm for XYZ
[[4680.]
[5180.]
[1621.]]
Center
[[-2508.791]
[ 6015.98 ]
[-1096.674]]
import numpy as np
import cv2
from scipy.linalg import inv
TrueCameraCenter = np.array([4680., 5180, 1621]).reshape(-1,1)
objectPoints = np.array(
[[ 0., 5783., 1970.],
[ 0., 5750., 1261.],
[ 0., 6412., 1968.],
[1017., 9809., 1547.]], dtype=np.float32)
imagePoints=np.array(
[[ 833.75, 1097.25],
[ 798. , 1592.25],
[1323. , 1133.5 ],
[3425.5 , 1495.5 ]], dtype=np.float32)
cameraMatrix= np.array(
[[3115.104, -7.3 , 2027.605],
[ 0. , 3077.283, 1504.034],
[ 0. , 0. , 1. ]])
retval, rvec, tvec = cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, None, None, None, False, cv2.SOLVEPNP_ITERATIVE)
R, jac = cv2.Rodrigues(rvec)
imagePoints2, jac = cv2.projectPoints(objectPoints, rvec, tvec, cameraMatrix, None)
print('TrueCenter in mm for XYZ\n', TrueCameraCenter, '\nCenter\n', -inv(R).dot(tvec))
I've found this interesting presentation regarding the Location Determination Problem by Bill Wolfe: Perspective View Of 3 Points.
So, using 4 coplanar points (with no 3 of them collinear), the solution improved:
import numpy as np
import cv2
from scipy.linalg import inv,norm
TrueCameraCenter = np.array([4680., 5180, 1621])
objectPoints = np.array(
[[ 0., 5783., 1970.],
[ 0., 5750., 1261.],
[ 0., 6412., 1968.],
[ 0., 6449., 1288.]])
imagePoints=np.array(
[[ 497.5 , 674.75],
[ 523.75, 1272.5 ],
[1087.75, 696.75],
[1120. , 1212.5 ]])
cameraMatrix= np.array(
[[3189.096, 0. , 2064.431],
[ 0. , 3177.615, 1482.859],
[ 0. , 0. , 1. ]])
dist_coefs=np.array([[ 0.232, -1.215, -0.002, 0.011, 1.268]])
retval, rvec, tvec = cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, dist_coefs,
                                  None, None, False, cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
C = -inv(R).dot(tvec).flatten()
print('TrueCenter in mm for XYZ\n', TrueCameraCenter, '\nCenter\n', C.astype(int))
print('Distance:', int(norm(TrueCameraCenter-C)))
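As a quick sanity check on the recovered pose, a sketch reusing the variables from the code above: reproject the object points with the solvePnP result and compare against the measured image points.
reproj, _ = cv2.projectPoints(objectPoints, rvec, tvec, cameraMatrix, dist_coefs)
err = np.linalg.norm(reproj.reshape(-1, 2) - imagePoints, axis=1)
print('Per-point reprojection error (px):', err)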

Deblur an image using scikit-image

I am trying to use skimage.restoration.wiener, but I always end up with an image full of 1s (or -1s). What am I doing wrong? The original image comes from the University of Waterloo.
import numpy as np
from scipy.misc import imread
from skimage import color, data, restoration
from scipy.signal import convolve2d as conv2
def main():
    image = imread("/Users/gsamaras/Downloads/boat.tif")
    psf = np.ones((5, 5)) / 25
    image = conv2(image, psf, 'same')
    image += 0.1 * image.std() * np.random.standard_normal(image.shape)
    deconvolved = restoration.wiener(image, psf, 0.00001)
    print(deconvolved)
    print(image)

if __name__ == "__main__":
    main()
Output:
[[ 1. -1. 1. ..., 1. -1. -1.]
[-1. -1. 1. ..., -1. 1. 1.]
[ 1. 1. 1. ..., 1. 1. 1.]
...,
[ 1. 1. 1. ..., 1. -1. 1.]
[ 1. 1. 1. ..., -1. 1. -1.]
[ 1. 1. 1. ..., -1. 1. 1.]]
[[ 62.73526298 77.84202199 94.1563234 ..., 85.12442365
69.80579057 48.74330501]
[ 74.79638704 101.6248559 143.09978769 ..., 100.07197414
94.34431216 59.72199141]
[ 96.41589893 132.53865314 161.8286996 ..., 137.17602535
117.72691238 80.38638741]
...,
[ 82.87641732 122.23168689 146.14129645 ..., 102.01214025
75.03217549 59.78417916]
[ 74.25240964 100.64285679 127.38475015 ..., 88.04694654
66.34568789 46.72457454]
[ 42.53382524 79.48377311 88.65000364 ..., 50.84624022
36.45044106 33.22771889]]
And I tried several values. What am I missing?
My best solution so far is:
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imfilter, imread
from skimage import color, data, restoration
from scipy.signal import convolve2d as conv2
def main():
    image = imread("/Users/gsamaras/Downloads/boat.tif")
    #plt.imshow(arr, cmap='gray')
    #plt.show()
    #blurred_arr = imfilter(arr, "blur")
    psf = np.ones((5, 5)) / 25
    image = conv2(image, psf, 'same')
    image += 0.1 * image.std() * np.random.standard_normal(image.shape)
    deconvolved = restoration.wiener(image, psf, 1, clip=False)
    #print(deconvolved)
    plt.imshow(deconvolved, cmap='gray')
    plt.show()
    #print(image)

if __name__ == "__main__":
    main()
Much smaller values in restoration.wiener() lead to images that look as if a non-transparent overlay had been placed on top (like this). On the other hand, as this value grows, the image blurs more and more. A value near 1 seems to work best and deblurs the image.
Worth noting is the fact that the smaller this value (I mean the balance), the greater the image size is.
PS - I am open to new answers.
The solution to the 1s problem is either to use clip=False or to convert the data to a [0, 1] scale.
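A hedged sketch of the rescaling approach, with synthetic data standing in for boat.tif and an arbitrary balance value:
import numpy as np
from skimage import restoration
from scipy.signal import convolve2d as conv2

image = np.random.rand(128, 128)                      # placeholder image already in [0, 1]
psf = np.ones((5, 5)) / 25
blurred = conv2(image, psf, 'same')
blurred += 0.1 * blurred.std() * np.random.standard_normal(blurred.shape)
deconvolved = restoration.wiener(blurred, psf, 0.1)   # default clip to [-1, 1] is now harmless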

index 2d numpy.array with 2d numpy.array

I have an N-by-2 numpy array of 2D coordinates named coords, and another 2D numpy array named plane. What I want to do is something like
for x, y in coords:
    plane[x, y] = 0
but without the for loop, to improve efficiency. How can I do this with vectorized code? Which NumPy function or method should I use?
You can try plane[coords.T[0], coords.T[1]] = 0. Not sure if this is what you want. For example:
Let,
plane = np.random.random((5,5))
coords = np.array([ [2,3], [1,2], [1,3] ])
Then,
plane[coords.T[0], coords.T[1]] = 0
will give:
array([[ 0.41981685, 0.4584495 , 0.47734686, 0.23959934, 0.82641475],
[ 0.64888387, 0.44788871, 0. , 0. , 0.298522 ],
[ 0.22764842, 0.06700281, 0.04856316, 0. , 0.70494825],
[ 0.18404081, 0.27090759, 0.23387404, 0.02314846, 0.3712009 ],
[ 0.28215705, 0.12886813, 0.62971 , 0.9059715 , 0.74247202]])
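Equivalently, assuming coords holds integer row/column indices of shape (N, 2), the same fancy indexing can be written without the transpose, reusing plane and coords from above:
plane[coords[:, 0], coords[:, 1]] = 0   # same effect as plane[coords.T[0], coords.T[1]] = 0
# or pass the index arrays as a tuple:
plane[tuple(coords.T)] = 0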
