I have this code that I made for text detection:
def drawRedRectangleAroundPlate(imgOriginalScene, licPlate):
    p2fRectPoints = cv2.boxPoints(licPlate.rrLocationOfPlateInScene)  # get 4 vertices of rotated rect

    cv2.line(imgOriginalScene, tuple(p2fRectPoints[0]), tuple(p2fRectPoints[1]), SCALAR_RED, 2)  # draw 4 red lines
    cv2.line(imgOriginalScene, tuple(p2fRectPoints[1]), tuple(p2fRectPoints[2]), SCALAR_RED, 2)
    cv2.line(imgOriginalScene, tuple(p2fRectPoints[2]), tuple(p2fRectPoints[3]), SCALAR_RED, 2)
    cv2.line(imgOriginalScene, tuple(p2fRectPoints[3]), tuple(p2fRectPoints[0]), SCALAR_RED, 2)
# end function
and I have this traceback problem that I don't understand:
cv2.line(imgOriginalScene, tuple(p2fRectPoints[0]), tuple(p2fRectPoints[1]), SCALAR_RED, 2) # draw 4 red lines
cv2.error: OpenCV(4.7.0) :-1: error: (-5:Bad argument) in function 'line'
> Overload resolution failed:
> - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
> - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
You wrote:
cv2.line(imgOriginalScene, tuple(p2fRectPoints[0]), tuple(p2fRectPoints[1]), SCALAR_RED, 2)
The documentation explains that .line() accepts
void cv::line ( InputOutputArray img,
Point pt1,
Point pt2,
const Scalar & color,
... )
It appears that p2fRectPoints holds four points, each with (x, y) coordinates. So rather than using e.g. tuple() to construct a point from two values, it appears you can just pass in p2fRectPoints[0] as-is, with no call to tuple. You can verify this by examining type(p2fRectPoints[0]); it should be a point-like sequence (in practice a NumPy array of two float32 values).
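If the overload error persists, note that cv2.boxPoints() returns float32 coordinates and OpenCV 4.x drawing functions accept only integer points, so casting is the usual fix. A minimal sketch, assuming SCALAR_RED is a BGR color tuple as in the original script:
import cv2

SCALAR_RED = (0.0, 0.0, 255.0)  # assumed BGR red; replace with your own constant

def drawRedRectangleAroundPlate(imgOriginalScene, licPlate):
    # boxPoints() returns float32 vertices; cast them to plain ints before drawing
    p2fRectPoints = cv2.boxPoints(licPlate.rrLocationOfPlateInScene)
    for i in range(4):  # draw the 4 red edges of the rotated rect
        pt1 = tuple(int(v) for v in p2fRectPoints[i])
        pt2 = tuple(int(v) for v in p2fRectPoints[(i + 1) % 4])
        cv2.line(imgOriginalScene, pt1, pt2, SCALAR_RED, 2)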
I'm using opencv-contrib-python (4.5.4.60) to calibrate stereo vision, emulated for now by two pictures taken with one camera (as if there were two cameras), for stereo depth estimation in the future. I find the intrinsic parameters of the camera from several photos and try to pass ChArUco marker points from two photos into stereoCalibrate, but I get an assertion failure:
ret, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(objpoints_L, imgpoints_L, imgpoints_R, camera_matrix, distortion_coefficients0, camera_matrix, distortion_coefficients0, img_r1.shape[:2], F=F)
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibration.cpp:1088: error: (-215:Assertion failed) (count >= 4) || (count == 3 && useExtrinsicGuess) in function 'cvFindExtrinsicCameraParams2'
I have checked the input types of the object points and image points with cv2.utils.dumpInputArray():
InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=40 dims(-1)=2 size(-1)=1x40 type(-1)=CV_32FC3
InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=40 dims(-1)=2 size(-1)=1x40 type(-1)=CV_32FC2
InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=40 dims(-1)=2 size(-1)=1x40 type(-1)=CV_32FC2
I sorted them so that I pass only points matching in both photos, but I still get the assertion failure and can't figure out what I'm doing wrong.
The problem was that ChArUco markers are returned by cv2.aruco.detectMarkers as an array of objects with a single point each, shaped (n, 1, 2). Some functions accept this point format (for example cv2.solvePnP or cv2.findFundamentalMat), but cv2.stereoCalibrate checks that every object has more than 3 points; if it instead gets n objects with a single point each, the assertion fails. To use these points, you have to reshape the array into a single object holding all ChArUco points, (1, n, 2). Do the same reshape to (1, n, 3) for the object points obtained from cv2.aruco.getBoardObjectAndImagePoints.
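A minimal sketch of that reshape, using hypothetical zero-filled arrays with n = 40 points in the (n, 1, 2) layout the detector returns:
import numpy as np

# hypothetical placeholders shaped like the detectMarkers output
imgpoints_L = np.zeros((40, 1, 2), dtype=np.float32)  # n objects of 1 point each
imgpoints_R = np.zeros((40, 1, 2), dtype=np.float32)
objpoints_L = np.zeros((40, 1, 3), dtype=np.float32)

# one object holding all n points, which is what stereoCalibrate expects
imgpoints_L = imgpoints_L.reshape(1, -1, 2)  # (1, n, 2)
imgpoints_R = imgpoints_R.reshape(1, -1, 2)  # (1, n, 2)
objpoints_L = objpoints_L.reshape(1, -1, 3)  # (1, n, 3)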
I have a project to detect the distance between people. The project runs smoothly when the centroid and the plotted line are in the center of each person, but I want to move the centroid and the plotted line to the people's feet. I have successfully moved the centroid, but the plotted line does not move along with it. Here's the code:
utils.py (def distancing)
def distancing(people_coords, img, dist_thres_lim=(200, 250)):
    # Plot lines connecting people
    already_red = dict()  # stores whether a plotted rectangle has already been labelled as high risk
    centers = []
    for i in people_coords:
        # the modified feet y-coordinate that raises the error discussed below
        centers.append(((int(i[0]) + int(i[2])) // 2, (int(max(i[3]), (i[1])))))
    for j in centers:
        already_red[j] = 0
    x_combs = list(itertools.combinations(people_coords, 2))
    radius = 10
    thickness = 5
    for x in x_combs:
        xyxy1, xyxy2 = x[0], x[1]
        cntr1 = ((int(xyxy1[2]) + int(xyxy1[0])) // 2, (int(xyxy1[3]) + int(xyxy1[1])) // 2)
        cntr2 = ((int(xyxy2[2]) + int(xyxy2[0])) // 2, (int(xyxy2[3]) + int(xyxy2[1])) // 2)
        dist = ((cntr2[0] - cntr1[0]) ** 2 + (cntr2[1] - cntr1[1]) ** 2) ** 0.5
The problem is in the people_coords loop's xy coordinates. I tried changing the code to (int(max(i[3]), (i[1]))), but when I run it I get an error (TypeError: iteration over a 0-d tensor). What should I do to move the plotted line along with the centroid?
Here is the centroid code:
def plot_dots_on_people(x, img):
    # Plot the centers of people with a green dot.
    thickness = -1
    color = [0, 255, 0]  # green
    center = ((int(x[0]) + int(x[2])) // 2, (int(max(x[3], x[1]))))
    radius = 10
    cv2.circle(img, center, radius, color, thickness)
I hope someone can help me. Thank you.
I suggest printing the value of people_coords and running people_coords.size(). The error comes from iterating over a tensor whose size is torch.Size([]); an example of a tensor with that size is torch.tensor(5).
One way to solve this error is to make sure people_coords is the value you expect it to be, using the debugging steps above, or to use unsqueeze to turn torch.tensor(x) into torch.tensor([x]) and thus make it iterable.
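A minimal sketch of both the diagnosis and the unsqueeze fix:
import torch

t = torch.tensor(5)    # 0-d tensor, t.size() == torch.Size([])
# for v in t: ...      # would raise TypeError: iteration over a 0-d tensor

t = t.unsqueeze(0)     # 1-d tensor, t.size() == torch.Size([1])
for v in t:            # now iterable
    print(v.item())    # prints 5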
I am writing deep-learning code that needs to detect very tiny faces. I found a TensorFlow implementation of the tiny face detection paper on GitHub. The code has to draw rectangles around the faces in an image, but the OpenCV cv2.rectangle() function raises a TypeError. I couldn't figure out exactly what this error is; searching the net, I found one or two issues saying the problem is one of the arguments being a float.
This is my code to draw a rectangle in the given image:
def overlay_bounding_boxes(raw_img, refined_bboxes, lw):
    """Overlay bounding boxes of faces on images.

    Args:
      raw_img:
        A target image.
      refined_bboxes:
        Bounding boxes of detected faces.
      lw:
        The line width of bounding boxes. If zero is specified,
        it is determined based on the confidence of each detection.

    Returns:
      None.
    """
    # Overlay bounding boxes on an image with the color based on the confidence.
    for r in refined_bboxes:
        _score = expit(r[4])
        cm_idx = int(np.ceil(_score * 255))
        rect_color = [int(np.ceil(x * 255)) for x in util.cm_data[cm_idx]]  # parula
        _lw = lw
        if lw == 0:  # line width of each bounding box is adaptively determined.
            bw, bh = r[2] - r[0] + 1, r[3] - r[1] + 1
            _lw = 1 if min(bw, bh) <= 20 else max(2, min(3, min(bh / 20, bw / 20)))
            _lw = int(np.ceil(_lw * _score))
        _r = [int(x) for x in r[:4]]
        cv2.rectangle(raw_img, (_r[0], _r[1]), (_r[2], _r[3]), rect_color, _lw)
The error it's giving is:
Traceback (most recent call last):
File "D:/PYTHON PROJECTS/Digital Attendance System (knn)/Tiny_Faces_in_Tensorflow-master/tiny_face_eval.py", line 242, in <module>
main()
File "D:/PYTHON PROJECTS/Digital Attendance System (knn)/Tiny_Faces_in_Tensorflow-master/tiny_face_eval.py", line 235, in main
evaluate(
File "D:/PYTHON PROJECTS/Digital Attendance System (knn)/Tiny_Faces_in_Tensorflow-master/tiny_face_eval.py", line 200, in evaluate
overlay_bounding_boxes(raw_img, refined_bboxes, lw)
File "D:/PYTHON PROJECTS/Digital Attendance System (knn)/Tiny_Faces_in_Tensorflow-master/tiny_face_eval.py", line 62, in overlay_bounding_boxes
cv2.rectangle(raw_img, (_r[0], _r[1]), (_r[2],_r[3]), rect_color, _lw)
TypeError: function takes exactly 4 arguments (2 given)
I checked every argument's data type, and they are obviously integers, so the explanation others gave (one of the arguments being a float) doesn't seem to apply.
What is the actual problem here? Thanks in advance!
I got the solution.
I checked, and the problem was with the coordinates even though they were cast with int(x). I found another way to convert them to integers, using the np.round().astype() method.
So I changed the for-loop line to _r = [np.round(x).astype("int") for x in r[:4]]
That seems to solve my problem. I also found the solution in another StackOverflow answer.
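As a side note, the two conversions are not exactly equivalent: int() truncates toward zero, while np.round(...).astype(...) rounds to the nearest integer. A small illustration with hypothetical box coordinates:
import numpy as np

r = np.float32([10.7, 20.2, 110.9, 220.4])  # hypothetical box coordinates

print([int(x) for x in r])                     # [10, 20, 110, 220] (truncated)
print([np.round(x).astype("int") for x in r])  # [11, 20, 111, 220] (rounded)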
I want to calculate the relative transformation between two cameras (the [R|t] matrix) using multiple frames of a ChArUco board. My idea was to obtain image-object point pairs from all the frames and then use a function that takes all of the detected point pairs and outputs the relative transformation between the cameras (e.g. stereoCalibrate).
What is the best approach to do that? I could not get stereoCalibrate to work, since it always throws assertion errors -> bugreport.
Current implementation (not working):
imagePointsA = []
imagePointsB = []
objectPoints = []

for frameA, frameB in color_framesets(...):
    try:
        # Find corners
        cornersA, idsA, rejected = cv2.aruco.detectMarkers(frameA, charucoDict)
        cornersB, idsB, rejected = cv2.aruco.detectMarkers(frameB, charucoDict)
        if not cornersA or not cornersB: raise Exception("No markers detected")

        retA, cornersA, idsA = cv2.aruco.interpolateCornersCharuco(cornersA, idsA, frameA, charucoBoard)
        retB, cornersB, idsB = cv2.aruco.interpolateCornersCharuco(cornersB, idsB, frameB, charucoBoard)
        if not retA or not retB: raise Exception("Can't interpolate corners")

        # Find common points in both frames (is there a nicer way?)
        objPtsA, imgPtsA = cv2.aruco.getBoardObjectAndImagePoints(charucoBoard, cornersA, idsA)
        objPtsB, imgPtsB = cv2.aruco.getBoardObjectAndImagePoints(charucoBoard, cornersB, idsB)

        # Create a dictionary objectPoint:imagePoint for each frame
        ptsA = {tuple(a): tuple(b) for a, b in zip(objPtsA[:, 0], imgPtsA[:, 0])}
        ptsB = {tuple(a): tuple(b) for a, b in zip(objPtsB[:, 0], imgPtsB[:, 0])}
        common = set(ptsA.keys()) & set(ptsB.keys())  # intersection between obj points

        for objP in common:
            objectPoints.append(np.reshape(objP, (1, 3)))
            imagePointsA.append(np.reshape(ptsA[objP], (1, 2)))
            imagePointsB.append(np.reshape(ptsB[objP], (1, 2)))
    except Exception as e:
        print(f"Skipped frame: {e}")
        continue

result = cv2.stereoCalibrateExtended(objectPoints, imagePointsA, imagePointsB, intrA, distA, intrB, distB, (848, 480), flags=cv2.CALIB_FIX_INTRINSIC + cv2.CALIB_USE_EXTRINSIC_GUESS)
I made something similar earlier today. I assume you solved at least part of your issue, since you closed the mentioned bug. In any case, it seems to me that the issue is that you are passing an array of points, while it should be an array of arrays of points (one array of points for each frame with sufficient data).
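Concretely, the layout described above might look like this (a sketch with hypothetical names; per_frame_matches stands in for whatever loop collects the common points of one frame pair):
import numpy as np

objectPoints = []   # one (N_i, 3) float32 array per frame
imagePointsA = []   # one (N_i, 2) float32 array per frame
imagePointsB = []

for frameObj, frameImgA, frameImgB in per_frame_matches:  # hypothetical iterable
    objectPoints.append(np.asarray(frameObj, dtype=np.float32).reshape(-1, 3))
    imagePointsA.append(np.asarray(frameImgA, dtype=np.float32).reshape(-1, 2))
    imagePointsB.append(np.asarray(frameImgB, dtype=np.float32).reshape(-1, 2))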
On a related note, cv2.aruco.getBoardObjectAndImagePoints is probably not what you are looking for: cornersA and cornersB are already image points (of the chessboard pattern corners), and the object points (the positions of the chessboard pattern corners) are computable from the ArUco marker IDs, while getBoardObjectAndImagePoints deals with the ArUco marker corners as far as I can tell.
Internally, cv2.aruco.calibrateCameraCharuco simply calls cv2.calibrateCamera with the passed corners as image points and the object points computed from the passed ArUco IDs. Unfortunately, getting the object point for an ArUco ID isn't exposed in the API, but it's pretty easy to compute: https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/src/charuco.cpp#L157-L166
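Based on that linked source, the object point for a given ChArUco corner ID can be reconstructed from the board geometry. A sketch, assuming squaresX and squareLength come from your board definition:
import numpy as np

def charuco_corner_object_point(corner_id, squaresX, squareLength):
    # interior chessboard corners are laid out row-major,
    # (squaresX - 1) corners per row, one square in from the board edge
    cols = squaresX - 1
    x = (corner_id % cols + 1) * squareLength
    y = (corner_id // cols + 1) * squareLength
    return np.float32([x, y, 0])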
I am not sure what is going on. This worked before, and now it doesn't.
def generate_unity_texture(self, monKey_data):
    width, height = self.texture_image.size
    eye_position = (int(0.88 * width), int(0.88 * height))
    ImageDraw.floodfill(unity_texture_copy, xy=eye_position, value=monKey_data['eye_color'])
    return unity_texture_copy
and I get SystemError: new style getargs format but argument is not a tuple.
The problem is in pixel[x, y] = value, and the traceback says the problematic line of code is PIL\ImageDraw.py", line 349, in floodfill.
This is a tuple. I even tried replacing middle with (int(middle[0]), int(middle[1])), but got the same result.
In the documentation xy is:
xy – Seed position (a 2-item coordinate tuple).
Which is exactly what I did.
Any suggestions?